Venue: ICLR

Title
Spectral Nonlocal Block for Neural Network
Abstract
The nonlocal network is designed for capturing long-range spatial-temporal dependencies in several computer vision tasks. Although it has shown excellent performance, it needs an elaborate preparation for both the number and position of the building blocks. In this paper, we propose a new formulation of the nonlocal block and interpret it from the general graph signal processing perspective, where we view it as a fully-connected graph filter approximated by Chebyshev polynomials. The proposed nonlocal block is more efficient and robust, and is a generalized form of existing nonlocal blocks (e.g. the nonlocal block and the nonlocal stage). Moreover, we give the stable hypothesis and show that the steady-state of the deeper nonlocal structure should satisfy it. Based on the stable hypothesis, a full-order approximation of the nonlocal block is derived for consecutive connections. Experimental results illustrate the clear-cut improvement and practical applicability of the generalized nonlocal block on both image and video classification tasks.
1 INTRODUCTION
Capturing long-range spatial-temporal dependencies is crucial for Deep Convolutional Neural Networks (CNNs) to extract discriminative features in vision tasks such as image and video classification. However, the traditional convolution operator only processes a local neighborhood at a time. This forces CNNs to go deeper with convolutional operations to enlarge the receptive fields, which leads to higher computation and memory cost. Moreover, going deeper cannot always increase the effective receptive field due to the Gaussian distribution of the kernel weights (Luo et al. (2016)). To overcome this limitation, some recent works design network architectures with wider and well-designed modules to capture long-range dependencies, such as Peng et al. (2017), Chen et al. (2017), and Zhao et al. (2017). Although these modules have larger receptive fields, they still need to be applied recursively to capture the dependencies between distant position pairs.
Inspired by the classical non-local means method in image denoising, Wang et al. (2018) propose the nonlocal neural network, which uses the nonlocal (NL) block to capture the “full-range” dependencies within a single module by exploring the correlations between each position and all other positions. In the NL block, the affinity matrix is first computed to represent the correlations between each position pair. Then the weighted means of the features are calculated based on the affinity matrix to refine the feature representation. Finally, a residual connection is added to the refined feature map. Due to its simplicity and effectiveness, the nonlocal block has recently been widely used in image and video classification (Wang et al. (2018); Yue et al. (2018); Tao et al. (2018); Chen et al. (2018)), image segmentation (Huang et al. (2018); Yue et al. (2018); Wang et al. (2018)) and person re-identification (Liao et al. (2018); Zhang et al. (2019)).
However, due to the complexity of the affinity matrix, the nonlocal block¹ needs much more computational effort and is sensitive to its number and position in the neural network (Tao et al. (2018)). Some works address the first problem by simplifying the calculation of the affinity matrix, such as Huang et al. (2018), He et al. (2019), Yue et al. (2018), and Chen et al. (2018). Only a few works try to solve the second problem, which limits the robustness of the nonlocal network². Tao et al. (2018) propose the nonlocal stage (NS) block, which concerns the diffusion nature and maintains the same affinity matrix for all the nonlocal units in the NS block. Compared with the NL block, the NS block is insensitive to the number of blocks and allows a deeper nonlocal structure. However, the deeper nonlocal structure of the NS block increases the complexity without bringing a remarkable improvement.

¹The nonlocal block is composed of a nonlocal operator and a residual connection.
²The nonlocal network is composed of several nonlocal blocks.
In this work, we focus on developing a robust nonlocal block that is more flexible when used in a neural network. We show that the nonlocal operator in the nonlocal block is equivalent to a Chebyshev-approximated fully-connected graph filter with unreasonable constraints that limit its freedom for learning. To remove these constraints, we propose the Spectral-based Nonlocal (SNL) block, which is more robust and degrades into the NL and NS blocks under specific assumptions. We also show that the deeper nonlocal structure satisfies the stable hypothesis with the help of steady-state analysis. Based on this hypothesis, we give the full-order approximated spectral nonlocal (gSNL) block, which performs well in deeper nonlocal structures. Finally, we add our proposed nonlocal blocks into deep networks and evaluate them on image and video classification tasks. Experiments show that the networks with our proposed blocks are more robust and achieve higher accuracy than those using other types of nonlocal blocks. To summarize, our contributions are threefold:
• We propose a spectral nonlocal (SNL) block as an efficient, simple, and generic component for capturing long-range spatial-temporal dependencies with deep neural networks, which is a generalization of the classical nonlocal blocks.
• We propose the stable hypothesis, which can enable the deeper nonlocal structure without an elaborate preparation for both the number and position of the building blocks. We further extend SNL into generalized SNL (gSNL), which can enable multiple nonlocal blocks to be plugged into the existing computer vision architectures with stable learning dynamics.
• Both SNL and gSNL have outperformed other nonlocal blocks across both image and video classification tasks with a clear-cut improvement.
2 PRELIMINARY
Nonlocal block The NL block consists of an NL operator with a residual connection and is expressed as: Y = X + F(A,Z) with Z = XWg, (1)
where X ∈ R^{N×C1} is the input feature map, F(A,Z) is the NL operator, and Z ∈ R^{N×Cs} is the transferred feature map that compresses the channels of X by a linear transformation with kernel Wg ∈ R^{C1×Cs}. Here N is the number of positions. The affinity matrix A ∈ R^{N×N} is composed of the pairwise correlations between positions.
In the NL block, the NL operator explores the “full-range” dependencies by considering the relationships between all position pairs:
F(A,Z) = AZW with A = (a_ij)_{N×N}, a_ij = f(X_{i,:}, X_{j,:}), (2)
where W ∈ R^{Cs×C1} is the weight matrix of a linear transformation. f(·) is the affinity kernel, which can adopt the “Dot Product”, “Traditional Gaussian”, “Embedded Gaussian” or any other kernel with a finite Frobenius norm.
Nonlocal stage To make the NL operator follow the diffusion nature that allows a deeper nonlocal structure (Tao et al. (2018)), the nonlocal stage (NS) operator uses the graph Laplacian L = D_A − A to replace the affinity matrix A in the NL operator:
F̄(A,Z) = (A − D_A)ZW with D_A = diag(d_i), (3)
where F̄(A,Z) is the NS operator and d_i = Σ_j a_ij is the degree of node i. Moreover, when multiple blocks sharing the same affinity matrix A are added and the NL operator is replaced by the NS operator, these consecutively-connected blocks form the NS block. We call the nonlocal blocks inside the NS block NS units.
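To make the two operators concrete, the following NumPy sketch (our own illustration, not the authors' code) implements Eq. (2) and Eq. (3) for a single channel group, assuming the affinity matrix A has already been computed and row-normalized:

```python
import numpy as np

def nl_operator(A, Z, W):
    """Nonlocal operator of Eq. (2): F(A, Z) = A Z W.

    A: (N, N) affinity matrix, Z: (N, Cs) transferred features, W: (Cs, C1).
    """
    return A @ Z @ W

def ns_operator(A, Z, W):
    """Nonlocal stage operator of Eq. (3): (A - D_A) Z W with D_A = diag(d_i)."""
    D_A = np.diag(A.sum(axis=1))   # degree matrix, d_i = sum_j a_ij
    return (A - D_A) @ Z @ W
```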
3 METHOD
The nonlocal operator can be divided into two steps: calculating the affinity matrix A to represent the correlations between each position pair, and refining the feature map by calculating the weighted means based on A. In this section, a fully-connected graph filter is used to explain the nonlocal operator. With the Chebyshev approximation, we propose the SNL operator, which is shown to be a generalized form of the NL and NS operators and is more robust with higher performance in computer vision tasks. Furthermore, based on the stable hypothesis that a deeper nonlocal structure tends to learn a stable affinity matrix, we extend our SNL operator into a full-order Chebyshev approximation version, i.e. the gSNL.
3.1 THE PROPOSED SPECTRAL NONLOCAL OPERATOR
Nonlocal operator in the graph view The nonlocal operator F(A,Z) is a filter that computes a weighted mean of all the positions in the feature map Z based on the affinity matrix A and then conducts a feature transformation with the kernel W. This is the same as filtering the signal Z by a graph filter Ω in the graph domain defined by the affinity matrix A (Shuman et al. (2013)). Based on this perspective, we further define the nonlocal operator as:
Theorem 1. Given an affinity matrix A ∈ R^{N×N} and the signal Z ∈ R^{N×Cs}, the nonlocal operator is the same as filtering the signal Z in the graph domain of a fully-connected weighted graph G:
$$F(A,Z) = Z * g = U g_\theta(\Lambda) U^T Z = U \Omega U^T Z \quad \text{with} \quad L = D_A - A = U \Lambda U^T, \tag{4}$$

where the graph filter Ω ∈ R^{N×N} is a diagonal parameter matrix, i.e. Ω = diag(ω), ω = (ω_1, ω_2, ..., ω_N). G = (V, A) is a fully-connected graph with vertex set V and affinity matrix A. Λ = diag({λ_1, λ_2, ..., λ_N}) and U = {u_1, u_2, ..., u_N} are the eigenvalues and eigenvectors of the graph Laplacian L, respectively.
This definition requires the graph Laplacian L to have a valid (non-singular) eigendecomposition, so the affinity matrix A should be a symmetric, non-negative, row-normalized matrix. To meet this requirement, the affinity matrix A can be obtained by the following steps. First, the affinity kernel is used to calculate the matrix A (we use the dot product with embedded weight matrices Wφ ∈ R^{C1×Cs} and Wϕ ∈ R^{C1×Cs} as the affinity kernel, i.e. A = (XWφ)(XWϕ)^T). Then we make the matrix A symmetric: Ā = (A^T + A)/2. Finally, we normalize the rows of Ā so that d_i = 1, giving Ǎ = D_Ā^{-1} Ā. For simplicity, in the following sections the symmetric, non-negative, row-normalized matrix Ǎ is denoted as A.
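As a hedged illustration of the three steps above (dot-product affinity, symmetrization, and row normalization), a NumPy sketch might look as follows; the variable names and the clamp used to enforce non-negativity are our assumptions, not details stated in the paper:

```python
import numpy as np

def build_affinity(X, W_phi, W_psi, eps=1e-8):
    """Build a symmetric, non-negative, row-normalized affinity matrix.

    X: (N, C1) flattened feature map; W_phi, W_psi: (C1, Cs) embedding kernels.
    """
    A = (X @ W_phi) @ (X @ W_psi).T          # dot-product affinity, (N, N)
    A = np.maximum(A, 0.0)                    # non-negativity (our assumption)
    A_bar = 0.5 * (A.T + A)                   # symmetrize: (A^T + A) / 2
    degrees = A_bar.sum(axis=1, keepdims=True) + eps
    return A_bar / degrees                    # row-normalize so d_i = 1
```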
The proposed spectral nonlocal operator The graph filter Ω in Eq. (4) contains N parameters. To simplify it, we use the Chebyshev polynomials, which can reduce the N parameters to K (K ≪ N). For simplicity, we first assume that the input Z and the output F(A,Z) have only one channel.
Following a similar method as Defferrard et al. (2016), the K-th order Chebyshev polynomial is used to approximate the graph filter function g_θ(Λ):
$$F(A,Z) = \sum_{k=0}^{K-1} \theta_k T_k(L') Z \quad \text{with} \quad L' = 2L/\lambda_{max} - I_n, \tag{5}$$

$$\text{s.t.} \quad T_0(L') = I_n, \quad T_1(L') = L', \quad T_k(L') = 2 L' T_{k-1}(L') - T_{k-2}(L').$$

Since L is a random walk Laplacian, the maximum eigenvalue λ_max satisfies λ_max = 2, which makes L' = A (Shuman et al. (2013)). Then Eq. (5) becomes:

$$F(A,Z) = \sum_{k=0}^{K-1} \theta_k T_k(A) Z = \theta_0 Z + \theta_1 A Z + \sum_{k=2}^{K-1} \theta_k T_k(A) Z, \tag{6}$$
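For concreteness, the truncated expansion in Eq. (5)-(6) can be evaluated with the standard three-term Chebyshev recurrence; the NumPy sketch below is our own illustration and assumes L' has already been replaced by A as in the text:

```python
import numpy as np

def chebyshev_filter(A, Z, theta):
    """Compute sum_k theta_k T_k(A) Z using T_k = 2 A T_{k-1} - T_{k-2}.

    A: (N, N) normalized affinity, Z: (N, Cs) signal, theta: list of K scalars.
    """
    T_prev = Z.copy()              # T_0(A) Z = Z
    out = theta[0] * T_prev
    if len(theta) > 1:
        T_curr = A @ Z             # T_1(A) Z = A Z
        out = out + theta[1] * T_curr
        for k in range(2, len(theta)):
            T_next = 2.0 * (A @ T_curr) - T_prev
            out = out + theta[k] * T_next
            T_prev, T_curr = T_curr, T_next
    return out
```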
Keeping terms up to k = 1, the first-order Chebyshev approximation of Eq. (6) becomes:
F(A,Z) = θ_0 Z + θ_1 AZ, (7)
where θ_0 and θ_1 are the coefficients of the first and second term, which are learned with SGD. Then, extending Eq. (7) to the multi-channel condition, we obtain the formulation of our SNL operator:
Fs(A,Z) = ZW1 + AZW2, (8)
where Fs(A,Z) is the SNL operator, and W1, W2 ∈ R^{Cs×C1}. Finally, a residual connection is added to the SNL operator to form the SNL block:
Y = X + Fs(A,Z) = X + ZW1 + AZW2. (9)
Relation with other nonlocal operators As shown in Fig. 1, our SNL operator degrades into the NL operator by setting W1 = 0, i.e. θ_0 = 0. However, its analytic solution θ_0 = (2/N) Σ_{j=0}^{N} ω_j controls the total filtering intensity, which cannot be guaranteed to be 0. Forcing this setting limits the search space when training the network and reduces the robustness of the NL block: the NL operator cannot magnify features over a large range and damps some discriminative features such as the beak of the waterfowl. Our SNL operator can also degrade into the NS operator by setting W1 = −W2, i.e. θ_1 + θ_0 = 0. However, the analytic solution of this equation is θ_1 + θ_0 = (2/N) Σ_{j=0}^{N} ω_j(λ_j + 1) = 0. When it is forced to zero, the filtering strength of high-frequency signals (with high λ), such as small parts or twigs, is suppressed. Thus, the NS operator still cannot magnify discriminative parts such as the beak of the waterfowl, as shown in Fig. 1. Compared with NL and NS, our SNL does not have these unreasonable constraints and gives these two parameters a free learning space. Thus, θ_0 can control the strength with which discriminative features are preserved, while θ_1 can pay more attention to the low-frequency signal to diminish the noise.
3.2 THE PROPOSED GENERALIZED SPECTRAL NONLOCAL OPERATOR
To fully exploit the “full-range” dependencies, the nonlocal block should have the ability to be consecutively stacked in the network to form a deeper nonlocal structure. However, some types of nonlocal blocks, such as the NL and CGNL blocks, cannot achieve this purpose (Tao et al. (2018)). To show the robustness of our SNL block when used in a deeper nonlocal structure, we first study the steady-state of the deeper nonlocal structure when our SNL blocks are consecutively added. We also give the stable hypothesis that the deeper nonlocal structure tends to learn a stable affinity. Based on this hypothesis, we extend our SNL block into a full-order Chebyshev approximation, i.e. the gSNL block, which is more applicable to the deeper nonlocal structure.
The stable hypothesis Steady-state analysis can be used to analyze the stable dynamics of the nonlocal block. Here we give the steady-state analysis of our SNL block when it is consecutively added into the network structure, and obtain the Stable Hypothesis:
Lemma 1. The Stable Hypothesis: when more than two consecutively-connected SNL blocks with the same affinity matrix A are added into the network structure, these SNL blocks are stable when the affinity matrix A satisfies A^k = A.
Proof. The stability holds when the weight parameters in W1, W2 and W are small enough such that the CFL condition is satisfied (Tao et al. (2018)), so we ignore them for simplicity. The discrete nonlinear operator of our SNL has a similar formulation as the NS operator:

$$\mathcal{L}_h Z_N := -L Z_N,$$

where h is the discretization parameter and Z_N is the input of the N-th block in the deeper nonlocal structure, with Z_0 = X. The stable assumption demands that Z_{N+1} = Z_N, so the steady-state equation of the last SNL block can be written as:

$$Z_{N+1} - Z_N = \mathcal{L}_h Z_N = -L Z_N = 0.$$

The deeper nonlocal structure has more than one SNL block, so Z_{N-1} and \mathcal{L}_h Z_{N-1} can be used to express Z_N:

$$-L Z_N = -(I - A) Z_N = -(I - A)(Z_{N-1} + \mathcal{L}_h Z_{N-1}) = -(I - A) Z_{N-1} + (I - A)(I - A) Z_{N-1} = 0.$$

Finally, the steady-state equation becomes:

$$(I - A) Z_{N-1} = (I - A)^2 Z_{N-1} \iff A^2 = A.$$

This equation naturally extends to the k-hop affinity matrix A^k, i.e. A^k = A.
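A simple concrete instance of this steady-state condition (our illustration, not from the paper) is the uniform averaging matrix A = (1/N)11^T, which is symmetric, non-negative, row-normalized and idempotent, so A^k = A holds exactly:

```python
import numpy as np

N = 8
A = np.full((N, N), 1.0 / N)     # uniform attention: every row sums to 1

assert np.allclose(A @ A, A)                          # A^2 = A
assert np.allclose(np.linalg.matrix_power(A, 4), A)   # A^4 = A
```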
To verify the stable hypothesis, we add five consecutively-connected SNL blocks (and NS blocks) into PreResNet-56 (He et al. (2016)) and train this model on the train set of the CIFAR-100 dataset with an initial learning rate of 0.1, which is subsequently divided by 10 at 150 and 250 epochs (300 epochs in total). A weight decay of 1e−4 and momentum of 0.9 are also used. Then we test the trained model on the test set and output the affinity matrix of each image. Fig. 2 shows the statistics that reflect the strength of the affinity matrix and its 2-hop, 3-hop, and 4-hop powers: A, A^2, A^3, A^4. We can see that the number of elements in each histogram bin is nearly the same, which means that A, A^2, A^3, A^4 have similar element distributions. This empirically verifies the steady-state equation A^k = A.

Full-order spectral nonlocal operator With the stable hypothesis, the Chebyshev polynomials can be simplified into a piecewise function (details in Appendix B). Substituting this piecewise function into Eq. (6), we obtain the full-order approximation of the SNL operator:
$$F^*_s(A, Z) = \sum_k \theta_k T_k(A) Z = Z\tilde{\theta}_1 + AZ\tilde{\theta}_2 + (2A - I)Z\tilde{\theta}_3, \tag{10}$$

where $\tilde{\theta}_1 = \sum_{k\%4=0} \theta_k$, $\tilde{\theta}_2 = \sum_{k\%4=1 \,\|\, k\%4=3} \theta_k$, $\tilde{\theta}_3 = \sum_{k\%4=2} \theta_k$, whose upper bounds are less than 1. Then, extending it to multi-channel input and output with the residual connection, we obtain our gSNL block:
$$Y = X + F^*_s(A, Z) = X + ZW_1 + AZW_2 + (2A - I)ZW_3 \tag{11}$$
The gSNL block performs well when the stable affinity hypothesis is satisfied, i.e. when more than two nonlocal blocks with the same affinity matrix are added, as shown in Table 4.
3.3 IMPLEMENTATION DETAILS
The implementation details of the gSNL block are shown in Fig. 3. The input feature map X ∈ R^{W×H×C1} is first fed into three 1×1 convolutions with the weight kernels Wφ ∈ R^{C1×Cs}, Wϕ ∈ R^{C1×Cs}, Wg ∈ R^{C1×Cs} to reduce the number of channels. One of the outputs, Z ∈ R^{W×H×Cs}, is used as the transferred feature map to reduce the computational complexity, while the other two outputs, Φ ∈ R^{W×H×Cs} and Ψ ∈ R^{W×H×Cs}, are used to compute the affinity matrix A. The number of transferred channels Cs is usually half of the input channels C1. The affinity matrix is calculated by the affinity kernel function f(·) and then made non-negative, symmetric, and row-normalized with the operations in Sec. 3.1. Finally, with the affinity matrix A and the transferred feature map Z, the output of the nonlocal block is obtained by Eq. (11). Specifically, the three weight matrices W1 ∈ R^{Cs×C1}, W2 ∈ R^{Cs×C1}, W3 ∈ R^{Cs×C1} are implemented as three 1×1 convolutions.
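The following PyTorch module is a minimal sketch of the gSNL block described above (Eq. (11)). It is our own illustrative re-implementation under the stated assumptions (dot-product kernel, 2D feature maps, Cs = C1/2) and is not the authors' released code; in particular, the clamp that enforces non-negativity of the affinity is our choice.

```python
import torch
import torch.nn as nn

class GSNLBlock(nn.Module):
    """Sketch of the generalized spectral nonlocal (gSNL) block, Eq. (11)."""

    def __init__(self, in_channels, reduction=2):
        super().__init__()
        cs = in_channels // reduction
        self.phi = nn.Conv2d(in_channels, cs, kernel_size=1)   # W_phi
        self.psi = nn.Conv2d(in_channels, cs, kernel_size=1)   # W_psi
        self.g = nn.Conv2d(in_channels, cs, kernel_size=1)     # W_g
        self.w1 = nn.Conv2d(cs, in_channels, kernel_size=1)    # W_1
        self.w2 = nn.Conv2d(cs, in_channels, kernel_size=1)    # W_2
        self.w3 = nn.Conv2d(cs, in_channels, kernel_size=1)    # W_3

    def _affinity(self, phi, psi):
        # Dot-product affinity, then non-negativity (our assumption),
        # symmetrization, and row normalization as in Sec. 3.1.
        a = torch.bmm(phi, psi.transpose(1, 2)).clamp(min=0.0)   # (B, N, N)
        a = 0.5 * (a + a.transpose(1, 2))
        return a / (a.sum(dim=-1, keepdim=True) + 1e-8)

    def forward(self, x):
        b, _, h, w = x.shape
        phi = self.phi(x).flatten(2).transpose(1, 2)   # (B, N, Cs)
        psi = self.psi(x).flatten(2).transpose(1, 2)   # (B, N, Cs)
        z = self.g(x).flatten(2).transpose(1, 2)        # (B, N, Cs)

        a = self._affinity(phi, psi)                    # (B, N, N)
        az = torch.bmm(a, z)                            # A Z
        taz = 2.0 * az - z                              # (2A - I) Z

        def to_map(t, conv):
            # (B, N, Cs) -> (B, Cs, H, W) -> 1x1 conv -> (B, C1, H, W)
            return conv(t.transpose(1, 2).reshape(b, -1, h, w))

        # Y = X + Z W1 + A Z W2 + (2A - I) Z W3
        return x + to_map(z, self.w1) + to_map(az, self.w2) + to_map(taz, self.w3)
```

For example, `block = GSNLBlock(256)` followed by `y = block(torch.randn(2, 256, 14, 14))` keeps the input shape, so such a block can be inserted between residual blocks as described above.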
4 EXPERIMENT
4.1 SETTING
Datasets Our proposed SNL and gSNL blocks have been evaluated across several computer vision tasks, including image classification and video-based action recognition. For image classification, both the CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton (2009)) are tested. The CIFAR-10 dataset contains 60,000 images of 10 classes, and the CIFAR-100 dataset contains 60,000 images of 100 classes. For these two datasets, we use 50,000 images as the train set and 10,000 images as the test set. We also conduct experiments on fine-grained classification with the Birds-200-2011 (CUB-200) dataset (Welinder et al. (2010)), which contains 11,788 images of 200 bird categories. For action recognition, the experiments are conducted on the UCF-101 dataset (Soomro et al. (2012)), which contains 101 different actions.
Backbones For image classification, ResNet-50 and the PreResNet variations (including both PreResNet-20 and PreResNet-56) are used as the backbone networks. For the video classification task, we follow the I3D structure (Hara et al. (2018)), which replaces the convolution operators in the residual blocks with k × k × k kernels.
Setting for the network In the main experiments, we set Cs = C1/2. Without loss of generality, we use the “Dot Product” as the affinity kernel in the experiments. We add one SNL (or gSNL) block into these backbone networks to construct the SNL (or gSNL) network. For the ResNet and the I3D (Hara et al. (2018)), following Wang et al. (2018) we add the SNL block right before the last residual block of res4. For the PreResNet series, we add the SNL block right after the second residual block in res1. For the other nonlocal-based blocks, including the NL (Wang et al. (2018)), the NS (Tao et al. (2018)), the Compact Generalized Nonlocal block (CGNL) (Yue et al. (2018)) and the Double Attention block (A2), the settings are all the same as ours. The differences between these blocks are shown in Table 1, in which the Approximated Condition column shows the strategy for the Chebyshev approximation and the Channel-wise column reflects whether channel relations are considered.
Setting for the training For the image classification on the CIFAR-10 and CIFAR-100 datasets, we train the models end-to-end without using a pretrained model. An initial learning rate of 0.1 is used for these two datasets with a weight decay of 1e−4 and momentum of 0.9. The learning rate is divided by 10 at 150 and 250 epochs. The models are trained for 300 epochs in total.
For the fine-grained classification on the CUB-200 dataset, we use models pretrained on ImageNet (Russakovsky et al. (2015)) to initialize the weights. We train the models for 200 epochs in total with an initial learning rate of 0.1, which is subsequently divided by 10 at 31, 61, and 81 epochs. The weight decay and momentum are the same as the settings for CIFAR-10 and CIFAR-100.
For the video classification on the UCF-101 dataset, the weights are initialized from the I3D model pretrained on the Kinetics dataset (Kay et al. (2017)). We train the models with an initial learning rate of 0.1, which is subsequently divided by 10 every 40 epochs. Training stops at 100 epochs. The weight decay and momentum are the same as the settings for CIFAR-10 and CIFAR-100.
4.2 ABLATION EXPERIMENT
The number of channels in the transferred feature space The nonlocal-based block first reduces the channels of the original feature map C1 into the transferred feature space Cs by a 1×1 convolution to reduce the computational complexity. When Cs is too large, the feature map contains redundant information, which introduces noise when calculating the affinity matrix A. However, if Cs is too small, it is hard to reconstruct the output feature map due to inadequate features. To test the robustness with respect to Cs, we generate three types of models with different numbers of transferred channels: “Sub 1” (Cs = C1), “Sub 2” (Cs = C1/2), and “Sub 4” (Cs = C1/4), as shown in Table 2. Other parameters of the models and the training steps are the same as the settings in Sec. 4.1. Table 2 shows the experimental results of the three types of models with different nonlocal blocks. Our SNL and gSNL blocks outperform the other models, benefiting from their flexibility in learning. Moreover, from Table 2 we can see that the performance of the CGNL drops steeply when the number of transferred channels increases. This is because the CGNL block concerns the relationships between channels; when the number of sub-channels increases, the relationships between the redundant channels seriously interfere with its effect. Overall, our proposed nonlocal block is the most robust for a large number of transferred channels (our model rises by 1.1% in Top1 while the best of the others only rises by 0.4% compared to the baseline).
The stage for adding the nonlocal blocks The nonlocal-based blocks can be added into different stages of the PreResNet (or the ResNet) to form the Nonlocal Net. In Tao et al. (2018), the nonlocal-based blocks are added into the early stage of the PreResNet to catch the long-range correlations. Here we evaluate the performance of adding different types of nonlocal blocks into the three stages (the first, the second and the third stage of the PreResNet) and train the models on the CIFAR-100 dataset with the same setting discussed in Sec. 4.1. The experimental results are shown in Table 3. We can see that the performance of the NL block is lower than the backbone when it is added into the early stage. However, our proposed SNL block has a 0.81% improvement compared with the backbone when added into each of the three stages, which is much higher than the other types of nonlocal blocks (only 0.42% for the best case).
To intuitively show the stability and robustness of our SNL, we give a spectrum analysis of the estimated weight matrices (Tao et al. (2018)). We extract the self-attention weight matrices Wg, W of the NL block and the NS block, and Wg, W2 of our proposed SNL block. The dimensions of the weight matrices satisfy Wg ∈ R^{C1×Cs}, W ∈ R^{Cs×C1}, W2 ∈ R^{Cs×C1}. To make all the eigenvalues real, we let W̃ = ((WgW) + (WgW)^T)/2, and we do the same for W2. Fig. 5 shows the top thirty-two eigenvalues of the weight matrix W̃ for the models in Table 3. We can see that the density of the negative eigenvalues is higher than that of the positive eigenvalues for the NL block when it is added into any of the three stages. This phenomenon makes the NL operator F(A,Z) in Eq. (1) negative, so the output feature map is smaller than the input feature map, i.e. Y < X (more details of this phenomenon can be found in Tao et al. (2018)). The NS block can avoid “the damping effect” to some extent by concerning the diffusion nature. However, when it is added into the early stage, only six eigenvalues of the nonlocal stage are non-zero. This prevents the nonlocal stage from effectively magnifying the discriminative features. Compared with these two models, our proposed SNL block has more positive eigenvalues, which helps enhance the discriminative features and also avoids the “damping effect”.
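As a small, self-contained illustration of this spectrum analysis (our sketch; random matrices stand in for the trained kernels), the symmetrized product and its eigenvalues can be computed as follows:

```python
import numpy as np

c1, cs = 64, 32
W_g = 0.05 * np.random.randn(c1, cs)   # stand-ins for the trained kernels
W_2 = 0.05 * np.random.randn(cs, c1)

M = W_g @ W_2                           # (C1, C1) composed transformation
W_tilde = 0.5 * (M + M.T)               # symmetrize so all eigenvalues are real
eigvals = np.sort(np.linalg.eigvalsh(W_tilde))[::-1]
print(eigvals[:32])                     # top thirty-two eigenvalues
```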
The number of nonlocal blocks We test the robustness of adding multiple nonlocal blocks into the backbone network, which forms three types of networks: “Different Position 3 (DP 3)”, “Same Position 3 (SP 3)” and “Same Position 5 (SP 5)”, as shown in Table 4. The results are shown in Table 4. For the model “DP 3”, three blocks are added into stage 1, stage 2, and stage 3 (right after the second residual block). We can see that adding three of our proposed nonlocal operators into different stages of the backbone generates a larger improvement than the NS operator and the NL operator (2.4% improvement). This is because when NS and NL blocks are added into the early stage, these two models cannot aggregate the low-level features well and interfere with the following blocks. For the models “SP 3” (“SP 5”), we add three (five) consecutively-connected nonlocal blocks into stage 1. Note that, different from the experiments in Tao et al. (2018) and Wang et al. (2018), these consecutively-connected nonlocal blocks share the same affinity matrix. From Table 4, we can see that, benefiting from the stable hypothesis discussed in Sec. 3.2, our gSNL outperforms all other models when consecutively-connected nonlocal blocks are added (it rises by 0.72% on average over the backbone and is 0.41% higher than the best of the other types of nonlocal blocks) and has a relatively stable performance. However, one drawback is that our gSNL may interfere with the learning when only one nonlocal block is added (the stable hypothesis is not satisfied).
4.3 MAIN RESULTS
We test the networks with the Nonlocal block (NL), the Nonlocal Stage (NS), the Compact Generalized Nonlocal block (CGNL), the Double Attention block (A2) and our SNL (gSNL) blocks on different visual learning tasks. The experiment settings are discussed in Sec. 4.1. Our models outperform other types of nonlocal blocks across several standard benchmarks. Table 5 shows the experimental results on the CIFAR-10 dataset; we can see that adding one proposed block raises the Top1 by about 0.65%, which is higher than adding other types of nonlocal blocks (0.3%). As the experiments on the CIFAR-100 dataset in Table 7 show, using our proposed block brings about 1.8% improvement with ResNet-50. With a simpler backbone, PreResNet-56, our model can still generate 1.1% improvement as shown in Table 6.
Table 9 shows the experimental results for the fine-grained image classification task on the CUB-200 dataset. Our model outperforms the other non-channel-concerning blocks and generates a 0.42% improvement. Compared with the channel-wise concerning CGNL block, our model is only slightly lower in Top1. Fig. 4 also shows the visualized feature maps, which are formed by adding the upsampled feature output to the source image. We can see that the feature maps of our proposed block cover more of the critical areas of the birds. For example, both the left and right wings (red square) of the birds can be focused on, benefiting from the better long-range modeling of our SNL. Moreover, benefiting from the flexibility of W1, our proposed SNL can also catch a relatively large range of the discriminative parts. Table 8 shows the experimental results for the action recognition task. The network with our proposed block generates a 1.8% improvement over the I3D model and outperforms all other nonlocal models on the UCF-101 dataset.
Table 8: The Results on UCF101
model      top1     top5
I3D        81.57%   95.40%
+ NL       81.37%   95.76%
+ NS       82.50%   95.84%
+ A2       82.68%   95.85%
+ CGNL     83.16%   96.16%
+ *SNL     82.30%   95.56%
+ *gSNL    83.21%   96.53%
Table 9: The Results on CUB
model      top1     top5
R-50       85.43%   96.70%
+ NL       85.34%   96.77%
+ NS       85.54%   96.56%
+ A2       86.02%   96.56%
+ CGNL     86.14%   96.34%
+ *SNL     85.91%   96.65%
+ *gSNL    85.95%   96.79%
5 CONCLUSION
In this paper, we explain the nonlocal block from the graph view and propose the spectral nonlocal (SNL) block, which is more robust and well-behaved. Our SNL block is a generalized version of the NL and NS blocks and has more freedom for parameter learning. We also give the stable hypothesis for the deeper nonlocal structure and extend the SNL into the gSNL, which can be applied to deeper nonlocal structures. Experiments on multiple computer vision tasks show the high robustness and performance of our proposed nonlocal block. Future work will focus on applying the SNL block to different vision tasks and studying its robustness for other types of neural networks such as Generative Adversarial Networks (GANs).
A ANALYTIC SOLUTION OF THE CHEBYSHEV APPROXIMATE
Here we give the analytic solution for the coefficients in Chebyshev polynomials (Phillips (2003)):
Theorem 2. Given a function f(x) evaluated at points x = {x_1, x_2, ..., x_N}, it can be optimally approximated by Chebyshev polynomials, f(x) ≈ Σ_{k=0}^{K-1} a_k T_k(x), only when a_k satisfies:

$$a_k = \frac{2}{N} \sum_{j=0}^{N} f(x_j) T_k(x_j).$$

We call a_k the analytic solution of the Chebyshev coefficients.

Based on this theorem, we can get the analytic solution of the parameter θ for Eq. (7):

Lemma 2. The spectral nonlocal operator is best approximated when the function g(λ) = ω is best approximated by the Chebyshev polynomials, i.e. when the analytic solutions of the Chebyshev coefficients satisfy:

$$\theta_k = a_k = \frac{2}{N} \sum_{j=0}^{N} g(\lambda_j) T_k(\lambda_j) = \frac{2}{N} \sum_{j=0}^{N} \omega_j T_k(\lambda_j). \tag{12}$$
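A hedged NumPy illustration of the coefficient formula in Theorem 2 and Eq. (12); the sampled filter response used below is an arbitrary choice of ours for demonstration:

```python
import numpy as np

def chebyshev_coefficients(f, x, K):
    """Discrete estimate a_k = (2/N) * sum_j f(x_j) T_k(x_j), following Eq. (12)."""
    N = len(x)
    T = [np.ones_like(x), x.copy()]              # T_0(x), T_1(x)
    for _ in range(2, K):
        T.append(2.0 * x * T[-1] - T[-2])        # three-term recurrence
    return [(2.0 / N) * np.sum(f(x) * T[k]) for k in range(K)]

lam = np.linspace(-1.0, 1.0, 100)                # sampled eigenvalues
coeffs = chebyshev_coefficients(lambda t: np.exp(-t), lam, K=4)
print(coeffs)
```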
B THE PIECEWISE CHEBYSHEV POLYNOMIALS
Substituting A^k = A into the Chebyshev polynomials of the affinity matrix A, the Chebyshev polynomials become:

$$\begin{aligned}
T_0(A) &= I \\
T_1(A) &= A \\
T_2(A) &= 2A T_1(A) - T_0(A) = 2AA - I = 2A - I \\
T_3(A) &= 2A T_2(A) - T_1(A) = 2A(2A - I) - A = A \\
T_4(A) &= 2A T_3(A) - T_2(A) = 2AA - 2A + I = I = T_0(A) \\
T_5(A) &= 2A T_4(A) - T_3(A) = 2AI - A = A = T_1(A) \\
T_6(A) &= 2A T_5(A) - T_4(A) = 2AA - I = 2A - I = T_2(A)
\end{aligned} \tag{13}$$

This cyclic form of the Chebyshev polynomials T_k(A) can be reformulated as a piecewise function:

$$T_k(A) =
\begin{cases}
I & k\%4 = 0 \\
A & k\%4 = 1 \;\|\; k\%4 = 3 \\
2A - I & k\%4 = 2
\end{cases} \tag{14}$$
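The cyclic pattern of Eq. (13)-(14) can be checked numerically with any idempotent affinity matrix; the short NumPy check below (our illustration) again uses the uniform averaging matrix:

```python
import numpy as np

N = 6
A = np.full((N, N), 1.0 / N)     # idempotent: A @ A == A
I = np.eye(N)

T = [I, A]                        # T_0(A), T_1(A)
for _ in range(2, 9):
    T.append(2.0 * A @ T[-1] - T[-2])

# Period-4 pattern: I, A, 2A - I, A, I, A, 2A - I, ...
assert np.allclose(T[2], 2.0 * A - I)
assert np.allclose(T[3], A)
assert np.allclose(T[4], I)
assert np.allclose(T[5], A)
assert np.allclose(T[6], 2.0 * A - I)
```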
C EXPERIMENT OF SEMANTIC SEGMENTATION ON VOC2012 DATASET
For the semantic segmentation task, we conduct experiments on the VOC2012 dataset with the model proposed by Chen et al. (2017). We add different types of nonlocal blocks right before the last residual block in res4 of the ResNet-50. The models are trained for 50 epochs with the SGD optimizer. The learning rate is set to 0.007 with a weight decay of 5e−4 and momentum of 0.9. Experimental results show that the model with our proposed block achieves the best results.
D THE EXAMPLE OF THE AFFINITY MATRIX ON CUB DATASETS
Experiments to verify the stable hypothesis are also conducted on the CUB dataset: we add three consecutively-connected SNL blocks (and NS blocks) into the ResNet-50 (right before the last residual block of res4) and train this model on the train set of the CUB dataset with an initial learning rate of 0.1, which is subsequently divided by 10 at 31, 61 and 81 epochs (200 epochs in total). A weight decay of 1e−4 and momentum of 0.9 are also used. Fig. 6 shows the histogram of the strength statistics of the affinity matrix A. We can see that although a different backbone and dataset are used, the distributions of the k-hop affinity matrices are consistent with the experiments on CIFAR-100.
E EXPERIMENTS ON VIDEO-BASED PERSON RE-IDENTIFICATION
Experiments are also conducted on challenging video-based person re-identification datasets, including MARS, iLIDS-VID and PRID2011. For the backbone, we follow the strategy of Gao & Nevatia (2018), which uses pooling (RTMtp) and attention (RTMta) to fuse the spatial-temporal features. Note that the models are trained from scratch on iLIDS-VID and PRID2011 rather than fine-tuning the model pretrained on the MARS dataset. The experimental results are shown in Tables 11, 12 and 13. We can see that on these datasets, our proposed block still generates consistent improvements.
Table 11: The Results on Mars dataset
model      mAP      Rank1
RTMta      77.70%   79.10%
+ NL       72.90%   80.90%
+ *SNL     74.00%   81.98%
RTMtp      75.70%   82.30%
+ NL       75.54%   83.40%
+ *SNL     76.80%   99.92%
Table 12: The Results on ILIDSVID dataset
model      mAP      Rank1
RTMta      69.70%   58.70%
+ NL       66.30%   56.00%
+ *SNL     79.40%   70.00%
RTMtp      81.60%   74.70%
+ NL       83.00%   75.30%
+ *SNL     84.80%   76.60%
Table 13: The Results on PRID2011 dataset
model      mAP      Rank1
RTMta      86.60%   79.80%
+ NL       90.70%   85.40%
+ *SNL     91.50%   86.50%
RTMtp      90.50%   86.50%
+ NL       89.70%   85.40%
+ *SNL     92.40%   88.80%
F ADDITIONAL EXPERIMENTS ON ACTION CLASSIFICATION
Our SNL can also improve the performance of other network structures such as the Pseudo-3D Convolutional Network (P3D) (Qiu et al. (2017)), the Motion-augmented RGB Stream (MARS) (Crasto et al. (2019)), the Slow-Fast Network (Slow-Fast) (Feichtenhofer et al. (2019)) and the Video Transformer Network (VTN) (Kozlov et al. (2019)). For P3D and MARS, our SNL block is inserted right before the last residual layer of res3. For Slow-Fast, we replace its original NL block with our SNL block. For the VTN, we replace its multi-head self-attention blocks (parallel-connected NL blocks) with our SNL blocks. The Slow-Fast network is trained end-to-end on the UCF-101 dataset while the others use models pretrained on the Kinetics-400 dataset and fine-tuned on UCF-101. From Table 14, we can see that all the performances are improved when our proposed SNL block is added.
Experiments on the Kinetics-400 dataset are also given in Table 15. We can see that inserting the SNL block into the Slow-Fast Network generates a 2.1% improvement.

1. How does the proposed spectral non-local block differ from existing non-local methods in the literature?
2. Can you provide more explanation and support for the reasonability of the experiments conducted in the paper?
3. How does the proposed method compare with state-of-the-art methods in image classification and action recognition tasks?
4. Can you clarify the conclusion of Table 4 and how it relates to the hypothesis of deeper non-local structure?
5. Can you provide more background descriptions and interpretations of the results presented in Figure 4?
6. How do the critical parts on birds relate to long-range dependency?
7. Can you clarify the informal use of English, mismatched descriptions, and undefined acronyms in the paper, such as the terms self-attention and self-preserving, CGNL, A2, Hadama (Hadamard?) product?
8. Can you provide more explanation for the grammar errors and informal use of English present in the paper?

Review
The paper proposes a spectral non-local block, which is a generalized method of the non-local block and non-local stage in the literature. The proposed spectral non-local block can be plugged into a neural network to improve its effectiveness. The paper also provides theoretical analyses of the stability of the proposed method, and extends the method by including more Chebyshev polynomial terms. Experiments are conducted on image classification and action recognition tasks, and they validate the effectiveness of the proposed method.
The idea is well-motivated, and it is a generalization of existing works in the literature. I do like this idea. However, I am afraid that the idea is not well explained and supported, thus I gave a weak reject to encourage the authors to further improve the paper.
The major concern I have is the reasonability of the experiments. The experiments in the paper show relative performance gains with respect to a baseline method. It seems that there is a lack of comparison with state-of-the-art methods in the literature. For example, in Table 8, a performance gain is observed when compared with I3D. However, the recent SOTA models can achieve much higher accuracy than the baseline, and also than the proposed method. Since the proposed method is generic to all neural nets, it makes more sense to compare with SOTA and make improvements based on SOTA. What is the conclusion from Table 4? Are you trying to demonstrate that the best configuration is DP3, and that increasing the number of consecutive non-local blocks (from SP3 to SP5) doesn't work? It is awkward since the paper gives a stable hypothesis for deeper nonlocal structure, but experimentally the deeper structure doesn't work well. Figure 4 is abrupt without much background description. Are the images randomly chosen? Does "Ours" here mean SNL or gSNL? Is the colored superimposition the attention map (I believe so, but the paper doesn't indicate it), and how should it be interpreted? What is the relation between the coverage of the critical parts on birds and the long-range dependency? More background descriptions and interpretations of the results are needed.
Another concern I have is the clarity of the writing. There are quite a number of informal use of English, mismatched descriptions, undefined acronyms, etc. For example, in the caption of Fig. 1, it is said self-attention and self-preserving are taken effect by W1 and W2, which is contradictory to what is illustrated in the figure. Also, the terms self-attention and self-preserving, and other terms such as CGNL, A2, Hadama (Hadamard?) product, are not formally defined or described. A lot of grammar errors and informal use of English are present, such as "which lead to", "the weight means", "when using in the neural network", "fig. 4", "Figure. 2", "more liberty for the parameter learning.", etc.
ICLR | Title
Spectral Nonlocal Block for Neural Network
Abstract
The nonlocal network is designed for capturing long-range spatial-temporal dependencies in several computer vision tasks. Although having shown excellent performances, it needs an elaborate preparation for both the number and position of the building blocks. In this paper, we propose a new formulation of the nonlocal block and interpret it from the general graph signal processing perspective, where we view it as a fully-connected graph filter approximated by Chebyshev polynomials. The proposed nonlocal block is more efficient and robust, which is a generalized form of existing nonlocal blocks (e.g. nonlocal block, nonlocal stage). Moreover, we give the stable hypothesis and show that the steady-state of the deeper nonlocal structure should meet with it. Based on the stable hypothesis, a full-order approximation of the nonlocal block is derived for consecutive connections. Experimental results illustrate the clear-cut improvement and practical applicability of the generalized nonlocal block on both image and video classification tasks.
1 INTRODUCTION
Capturing the long-range spatial-temporal dependencies is crucial for the Deep Convolutional Neural Networks (CNNs) to extract discriminate features in vision tasks such as image and video classification. However, the traditional convolution operator only focuses on processing local neighborhood at a time. This makes the CNNs need to go deeper with convolutional operations to enlarge the receptive fields, which lead to higher computation and memory. Moreover, going deeper cannot always increase the effective receptive fields due to the Gaussian distribution of the kernel weight (Luo et al. (2016)). To eliminate this limitation, some recent works focus on designing the network architecture with wider and well-designed modules to catch the long-range dependencies such as (Peng et al. (2017), Chen et al. (2017), Zhao et al. (2017)). Although having larger receptive fields, these modules still need to be applied recursively to catch the dependencies of the pairs in large distances.
Inspired by the classical non-local means method in image denoising, Wang et al. (2018) proposes the nonlocal neural network which uses the nonlocal (NL) block to concern the “full-range” dependencies in only one module by exploring the correlations between each position and all other positions. In the NL block, the affinity matrix is first computed to represent the correlations between each position pair. Then the weight means of features are calculated based on the affinity matrix to refine the feature representation. Finally, the residual connection is added to the refined feature map. Due to its simplicity and effectiveness, the nonlocal block has been widely used in image and video classification (Wang et al. (2018); Yue et al. (2018); Tao et al. (2018); Chen et al. (2018)), image segmentation (Huang et al. (2018); Yue et al. (2018); Wang et al. (2018)) and person re-identification (Liao et al. (2018); Zhang et al. (2019)) recently.
However, due to the complexity of the affinity matrix, the nonlocal block 1 needs much more computational effort and is sensitive to its number and position in the neural network (Tao et al. (2018)). Some works solve the first problem by simplifying the calculation of the affinity matrix such as Huang et al. (2018), He et al. (2019), Yue et al. (2018), Chen et al. (2018). Only a few works try to solve the second problem which limits the robustness of the nonlocal network 2. Tao et al. (2018)
1The nonlocal block is composed of a nonlocal operator and a residual connection 2The nonlocal network is composed of several nonlocal blocks
proposes the nonlocal stage (NS) block which concerns the diffusion nature and maintains the same affinity matrix for all the nonlocal units in the NS block. Comparing with the NL block, the NS block is insensitive to the numbers and allows deeper nonlocal structure. However, the deeper nonlocal structure of NS block increases the complexity and do not have a remarkable improvement.
In this work, we focus on elaborating a robust nonlocal block which is more flexible when using in the neural network. We prove that the nonlocal operator in the nonlocal block is equivalent to the Chebyshev-approximated fully-connected graph filter with irrational constraints that limits its liberty for learning. To remove these irrational constraints, we propose the Spectral-based Nonlocal (SNL) block which is more robust and can degrade into the NL and NS with specific assumptions. We also prove that the deeper nonlocal structure satisfies the stable hypothesis with the help of steadystate analysis. Based on this hypothesis, we give the full-order approximated spectral nonlocal (gSNL) block which is well-performed for deeper nonlocal structure. Finally, we add our proposed nonlocal blocks into the deep network and evaluate them on the image and video classification tasks. Experiments show that the networks with our proposed blocks are more robust and have a higher accuracy than using other types of nonlocal blocks. To summarize, our contributions are threefold:
• We propose a spectral nonlocal (SNL) block as an efficient, simple, and generic component for capturing long-range spatial-temporal dependencies with deep neural networks, which is a generalization of the classical nonlocal blocks.
• We propose the stable hypothesis, which can enable the deeper nonlocal structure without an elaborate preparation for both the number and position of the building blocks. We further extend SNL into generalized SNL (gSNL), which can enable multiple nonlocal blocks to be plugged into the existing computer vision architectures with stable learning dynamics.
• Both SNL and gSNL have outperformed other nonlocal blocks across both image and video classification tasks with a clear-cut improvement.
2 PRELIMINARY
Nonlocal block The NL block consist of NL operator with residual connection and is expressed as: Y = X + F(A,Z) with Z = XWg, (1)
where X ∈ RN×C1 is the input feature map, F(A,Z) is the NL operator, Z ∈ RN×Cs is the transferred feature map that compresses the channels of X ∈ RN×C1 by a linear transformation with kernel Wg ∈ RC1×Cs . Here N is the number of positions. The affinity matrix A ∈ RN×N is composed by pairwise correlations between pixels.
In the NL block, the NL operator explores the “full-range” dependencies by concerning the relationships between all the position pairs:
F(A,Z) = AZW with A = (aij)N×N , Aij = f(Xi,:,Xj,:), (2) where W ∈ RCs×C1 is the weight matrix of a linear transformation. f(·) is the affinity kernel which can adopt the “Dot Product”, “Traditional Gasuassian”, “Embedded Gasussian” or other kernel matrix with a finite Frobenius norm.
Nonlocal stage To make the NL operator follow the diffusion nature that allows deeper nonlocal structure (Tao et al. (2018)), the nonlocal stage (NS) operator uses the graph laplacian L = DA−A to replace the affinity matrix A in the NL operator:
F̄(A,Z) = (A−DA)ZW with DA = diag(di), (3) where F̄(A,Z) is the NS operator. di = ∑ j aij is the degree of node i. Moreover, when adding multiple blocks with the same affinity matrix A and replacing the NL operator by the NS operator, these consecutively-connected blocks become the NS block. We called these nonlocal blocks in the NS block as the NS units.
3 METHOD
The nonlocal operator can be divided into two steps: calculating the affinity matrix A to represent the correlations between each position pairs and refining the feature map by calculating the
weighted means based on A. In this section, a fully-connected graph filter is utilized for explaining the nonlocal operator. With the Chebyshev approximation, we propose the SNL operator which is proved to be a generalized form of NL and NS operator and is more robust with higher performance in computer vision tasks. Furthermore, based on the stable hypothesis that deeper nonlocal structure tends to learn a stable affinity matrix, we extend our SNL operator into a full-order Chebyshev approximation version, i.e. the gSNL.
3.1 THE PROPOSED SPECTRAL NONLOCAL OPERATOR
Nonlocal operator in the graph view The nonlocal operator F(A,Z) is a filter that computes a weighted mean of all the positions in the feature map Z based on the affinity matrix A and then conduct the feature transformation with the kernel W. This is the same as filtering the signal Z by a graph filter Ω in the graph domain defined by the affinity matrix A (Shuman et al. (2013)). Based on this perspective (Shuman et al. (2013)), we further define the nonlocal operator as: Theorem 1. Given an affinity matrix A ∈ RN×N and the signal Z ∈ RN×Cs , the nonlocal operator is the same as filtering the signal Z in the graph domain of a fully-connected weighted graph G:
F(A,Z) = Z ∗ g = Ugθ(Λ)UTZ = UΩUTZ with L = DL −A = UTΛU,
(4)
where the graph filter Ω ∈ RN×N is a diagonal parameter matrix, i.e. Ω = diag(ω), ω = (ω1, ω2, ..., ωn). G = (V,A) is a fully-connected graph with the vertex set V and affinity matrix A. Λ = diag({λ1, λ2, ..., λi, ..., λN}) and U = {u1,u2, ...,ui, ...,uN} are the eigenvectors and eigenvalues of the graph laplacian L.
This definition requires that the graph laplacian L has non-singular eigenvalue and eigenvector, so the affinity matrix A should be a symmetric, non-negative, row-normalized matrix. To meet this requirement, the affinity matrix A can be obtained by the following steps. First, the affinity kernel is used to calculate the matrix A (we use the dot product with embeded weight matrix Wφ ∈ RC1×Cs and Wϕ ∈ RC1×Cs as the affinity kernel, i.e. A = (XWφ)(XWϕ)). Then we make the matrix A symmetric: Ā = A
T+A 2 . We normalize the row of Ā to make it satisfy di = 1 and having Ǎ =
D−1A Ā. For the simplicity, in the following sections the symmetric, non-negative, row-normalized matrix Ǎ is denoted as A.
The proposed spectral nonlocal operator The graph filter Ω in Eq. (4) contains N parameters. To simplify it, we use the Chebyshev polynomials which can reduce the N parameters into k (k N ). For simplicity, we firstly assume that the input Z, the output F(A,Z) and the output F(A,Z) have only one channel.
Following the similar method as Defferrard et al. (2016), the kst-order Chebyshev polynomials is used to approximate the graph filter function gθ(Λ):
F(A,Z) = K−1∑ k=0 θkTk(L ′ )Z with L ′ = 2L/λmax − In, s.t. T0(L ′ ) = In, T1(L ′ ) = L ′ , Tk(L ′ ) = 2L ′ Tk−1(L ′ )− Tk−2(L ′ ).
(5)
Due to L is a random walk laplacican, the maximum eiginvalue λmax satisfies λmax = 2 which makes L ′ = A (Shuman et al. (2013)). Then Eq. (5) becomes:
F(A,Z) = K−1∑ k=0 θkTk(A)Z = θ0Z + θ1AZ + K−1∑ k=2 θkTk(A)Z, (6)
If k = 1, the first-order Chebyshev approximation of Eq. (6) becomes:
F(A,Z) = θ0Z + θ1AZ, (7) where θ0 and θ1 are the coefficients for the first and second term which are approximated by learning with SGD. Then, extending Eq. (7) into multi-channel conditions, we can get the formation of our SNL operator:
Fs(A,Z) = ZW1 + AZW2, (8)
where Fs(A,Z) is the SNL operator, W1 ∈ RCs×C1 , W2 ∈ RCs×C1 . Finally, a residual connection is added with the SNL operator to form the SNL block:
Y = X + Fs(A,Z) = X + ZW1 + AZW2. (9)
Relation with other nonlocal operators As shown in fig. 1, our SNL operator can degrade into the NL operator by setting W1 = 0, i.e. θ0 = 0. However, its analytic solution: θ0 = 2N ∑N j=0 ωj controls the total filtering intensity, which cannot be guaranteed to be 0. This setting will limit the search space when training the network and reduce the robustness of the NL block. The NL operator cannot magnify features of a large range and damp some discriminative features such as the beak of the waterfowl. Our SNL operator can also degrade into the NS operator by setting W1 = −W2, i.e. θ1 + θ0 = 0. However, the analytic solution of this equation is θ1 + θ0 = 2N ∑N j=0 ωj(λj + 1) = 0. When setting it to zero, the filter strength of the high-frequency signal (with high λ) such as the small part or twig is suppressed. Thus, it still cannot magnify the discriminative part such as the beak of the waterfowl as shown in fig. 1. Comparing with NL and NS, our SNL does not have these irrational constraints and give these two parameters a liberal learning space. Thus, θ0 can control the preserve strength of the discriminative features, while θ1 can pay more attention to the low-frequency signal to diminish the noise.
3.2 THE PROPOSED GENERALIZED SPECTRAL NONLOCAL OPERATOR
To fully exploit the “full-range” dependencies, the nonlocal block should have the ability to be consecutively stacked into the network to form a deeper nonlocal structure. However, some types of nonlocal blocks such as the NL and CGNL block cannot achieve this purpose (Tao et al. (2018)). To show the robustness of our SNL block when used in the deeper nonlocal structure, we firstly study the steady-state of deeper nonlocal structure when consecutively adding our SNL block. We also prove the stable hypothesis that the deeper nonlocal structure tends to learn a stable affinity. Based on this hypothesis, we can extend our SNL block into a full-order Chebyshev approximation, i.e. the gSNL block which is more applicable for deeper nonlocal structure.
The stable hypothesis The Steady-state analysis can be used to analyze the stable dynamics of the nonlocal block. Here we give the steady-state analysis of our SNL block when consecutively adds into the network structure and get the Stable Hypothesis:
Lemma 1. The Stable Hypothesis: when adding more than two consecutively-connected SNL blocks with the same affinity matrix A into the network structure, these SNL blocks are stable when the variable affinity matrix A satisfies: Ak = A.
Proof. The stability holds when the weight parameters in W1,W2 and W are small enough such that the CFL condition is satisfied (Tao et al. (2018)). So we ignore them for simplicity. The discrete nonlinear operator of our SNL have a similar formulation as the NS operator:
LhZN := −LZ,
where h is the discretization parameter. ZN is the input of the N th block in the deeper nonlocal structure with Z0 = X. The stable assumption demands that ZN+1 = ZN , so the steady-state equation of the last SNL block can be written as:
ZN+1 − ZN = LhZN = −LZN = 0.
The deeper nonlocal structure has more than one SNL blocks. So the ZN−1 and LhZN−1 can be used to express ZN :
−LZN = −(I−A)ZN = −(I−A)(ZN−1 + LhZN−1) = −(I−A)ZN−1 + (I−A)(I−A)ZN−1 = 0.
Finally, the steady-state equation becomes:
(I−A)ZN−1 = (I−A)2ZN−1 ⇐⇒ A2 = A
This equation can naturally extend to the k-hop affinity matrix Ak, i.e. Ak = A.
To verify the stable hypothesis, we add five consecutively-connected SNL blocks (and NS blocks) into the PreResnet56 He et al. (2016) and train this model on the train set of the CIFAR100 dataset with the initial learning rate 0.1 which is subsequently divided by 10 at 150 and 250 epochs (total 300 epochs). A weight decay 1e − 4 and momentum 0.9 are also used. Then we test the trained model on the test set and output the affinity matrix of each image. Figure. 2 shows the statistics that reflects the strength of the affinity matrix, 2-hop, 3-hop, and 4-hop affinity matrix: A,A2,A3,A4. We can see that the number of elements in each histogram bin are nearly the same. This means that
the A, A2, A3, A4 have similar distribution of all the elements in k-hop affinity matrixes, which also empirically verifies the stable-state equation: Ak = A. Full-order spectral nonlocal operator With the stable hypothesis, the Chebyshev polynomials can be simplified into a piece-wise function (details in Appendix B). Taking this piece-wise function into the Eq. 7, we can get the full-order approximation of the SNL operator:
F∗s (A,Z) = ∑ k θkTk(A)Z = Zθ̃1 + AZθ̃2 + (2A− I)Zθ̃3, (10)
where θ̃1 = ∑k%4=0 i1 θi1 , θ̃2 = ∑k%4=1||k%4=2 i2 θi1 , θ̃3 = ∑k%4=3 i1
θi1 , whose upper bound is less than 1. Then, extending it into multi-channel input and output with the residual connection, we can get our gSNL block:
Y = X + F∗s (A,Z) = X + ZW1 + AZW2 + (2A− I)ZW3 (11)
The gSNL block is well-performed when the stable affinity hypothesis is satisfied, i.e. adding more than two nonlocal blocks with the same affinity matrix as shown in Table. 4.
3.3 IMPLEMENTATION DETAILS
The implementation details of the gSNL block is shown in fig. 3. The input feature map X ∈ RW×H×C1 is first fed into three 1x1 convolutions with the weight kernel: Wφ ∈ RC1×Cs , Wϕ ∈ RC1×Cs , Wg ∈ RC1×Cs to subtract the number of channel. One of the output Z ∈ RW×H×Cs is used as the transferred feature map to reduce the calculation complexity, while the other two output Φ ∈ RW×H×Cs , Ψ ∈ RW×H×Cs are used to get the affinity matrix A. The sub-channel Cs are usually two times less than the input channel C1. The affinity matrix is calculated by the affinity kernel function f(·) and then use the operation in Sec3.1 to make it non-negative, symmetric and normalized. Finally, with the affinity matrix A and the transferred feature map Z, the output of the nonlocal block can be obtained by the equation Eq. (11). Specifically, the three weight matrixes W1 ∈ RCs×C1 , W2 ∈ RCs×C1 , W3 ∈ RCs×C1 are implemented as three 1x1 convolutions.
4 EXPERIMENT
4.1 SETTING
Datasets Our proposed SNL and gSNL blocks have been evaluated across several computer vision tasks, including image classification and video-based action recognition. For the image classification, both CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton (2009)) are tested. The CIFAR10 dataset contains 60, 000 images of 10 classes, and CIFAR-100 dataset contains 60, 000 images of 100 classes. For these two datasets, we use 50, 000 images as the train set and 10, 000 images as
the test set. We also generate experiments for the fine-grained classification on the Birds-200-2011 (CUB-200) dataset (Welinder et al. (2010)) which contains 11, 788 images of 200 bird categories. For the action recognition, the experiments are conducted on the UCF-101 dataset (Soomro et al. (2012)), which contains 101 different actions.
Backbones For the image classification, the ResNet-50 and the PreResNet variations (including both PreResNet-20 and PreResNet-56) are used as the backbone networks. For the video classification task, we follow the I3D structure (Hara et al. (2018)) which uses k × k × k kernels to replace the convolution operator in the residual block.
Setting for the network In the main experiments, we setCs = C1/2. Without loss of the generality, we use the “Dot Product” as the affinity kernel in the experiments. We add one SNL (or gSNL) block into these backbone networks to construct the SNL (or gSNL) network. For the ResNet and the I3D (Hara et al. (2018)), following Wang et al. (2018) we add the SNL block right before the last residual block of res4. For the PreResNet series, we add the SNL block right after the second residual block in res1. For the other nonlocal-base block including the NL (Wang et al. (2018)), the NS (Tao et al. (2018)), the Compact Generalized Nonlocal Block (CGNL) (Yue et al. (2018)) and the Double Attention Block (A2), the settings are all the same as ours. The difference of these blocks are shown in Table. 1, in which the Approximated Condition shows the strategy for the Chebyshev approximation and Channel-wise reflect the consideration of the channel relations.
Setting for the training For image classification on the CIFAR-10 and CIFAR-100 datasets, we train the models end-to-end without using a pretrained model. An initial learning rate of 0.1 is used for these two datasets with weight decay 1e−4 and momentum 0.9. The learning rate is divided by 10 at 150 and 250 epochs. The models are trained for a total of 300 epochs.
For fine-grained classification on the CUB-200 dataset, we use models pretrained on ImageNet (Russakovsky et al. (2015)) to initialize the weights. We train the models for a total of 200 epochs with an initial learning rate of 0.1, which is subsequently divided by 10 at 31, 61 and 81 epochs. The weight decay and momentum are the same as in the CIFAR-10 and CIFAR-100 setting.
For video classification on the UCF-101 dataset, the weights are initialized by the I3D model pretrained on the Kinetics dataset (Kay et al. (2017)). We train the models with an initial learning rate of 0.1, which is subsequently divided by 10 every 40 epochs. The training stops at 100 epochs. The weight decay and momentum are the same as in the CIFAR-10 and CIFAR-100 setting.
4.2 ABLATION EXPERIMENT
The number of channels in transferred feature space The nonlocal-based block first reduces the channels of the original feature map C1 into the transferred feature space Cs by a 1×1 convolution to reduce the computational complexity. When Cs is too large, the feature map contains redundant information, which introduces noise when calculating the affinity matrix A. However, if Cs is too small, it is hard to reconstruct the output feature map due to inadequate features. To test the robustness with respect to Cs, we generate three types of models with different numbers of transferred channels: "Sub 1" (Cs = C1), "Sub 2" (Cs = C1/2), "Sub 4" (Cs = C1/4), as shown in Table 2. Other parameters of the models and the training steps are the same as the setting in Sec. 4.1. Table 2 shows the experimental results of the three types of models with different nonlocal blocks. Our SNL and gSNL blocks outperform the other models, benefiting from their flexibility in learning. Moreover, from Table 2, we can see that the performance of CGNL drops steeply when the number of transferred channels increases. This is because the CGNL block concerns the relationship between channels: when the number of sub-channels increases, the relationships between the redundant channels seriously interfere with its effect. Overall, our proposed nonlocal block is the most robust to a large number of transferred channels (our model rises 1.1% in Top1 while the best of the others only rises 0.4% compared to the baseline).
The stage for adding the nonlocal blocks The nonlocal-based blocks can be added into different stages of the PreResNet (or the ResNet) to form the Nonlocal Net. In Tao et al. (2018), the nonlocal-based blocks are added into the early stage of the PreResNet to catch the long-range correlations. Here we examine the performance of adding different types of nonlocal blocks into the three stages (the first, the second and the third stage of the PreResNet) and train the models on the CIFAR-100 dataset with the same setting discussed in Sec. 4.1. The experimental results are shown in Table 3. We can see that the performance of the NL block is lower than the backbone when it is added into the early stage. However, our proposed SNL block gives a 0.81% improvement over the backbone when added into each of the three stages, which is much higher than the other types of nonlocal blocks (only 0.42% in the best case).
To intuitively show the stability and robustness of our SNL, we give a spectrum analysis of the estimated weight matrices (Tao et al. (2018)). We extract the self-attention weight matrices: Wg, W of the NL block and the NS block, and Wg, W2 of our proposed SNL block. The dimensions of the weight matrices satisfy Wg ∈ RC1×Cs, W ∈ RCs×C1, W2 ∈ RCs×C1. To make all the eigenvalues real, we let W̃ = ((WgW) + (WgW)ᵀ)/2, and do the same for W2. Figure 5 shows the top thirty-two eigenvalues of the weight matrix W̃ for the models in Table 3. We can see that the density of negative eigenvalues is higher than that of positive eigenvalues for the NL block when added into any of the three stages. This phenomenon makes the NL operator F(A,Z) in Eq. (1) less than zero, so the output feature map is weakened relative to the input feature map, i.e. Y < X (more detail on this phenomenon can be found in Tao et al. (2018)). The NS block can avoid "the damping effect" to some extent by concerning the diffusion nature. However, when added into the early stage, only six eigenvalues of the nonlocal stage are non-zero. This means the nonlocal stage cannot effectively magnify the discriminative features. Compared with these two models, our proposed SNL block has more positive eigenvalues, which helps enhance the discriminative features and also avoids the "damping effect".
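The symmetrized matrix used in this spectrum analysis can be computed directly from the learned 1x1 convolution weights. A small NumPy sketch is given below; the weight shapes (Wg as C1×Cs, W2 as Cs×C1) follow the text, while the function name is our own.

import numpy as np

def symmetrized_spectrum(w_g, w_2, top_k=32):
    # w_g: (C1, Cs) weight of W_g; w_2: (Cs, C1) weight of W2 (or W for NL/NS).
    m = w_g @ w_2                      # C1 x C1 product W_g W_2
    m_sym = 0.5 * (m + m.T)            # symmetrize so all eigenvalues are real
    eig = np.linalg.eigvalsh(m_sym)    # real eigenvalues in ascending order
    return eig[::-1][:top_k]           # the top_k largest eigenvalues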
The number of the nonlocal blocks We test the robustness of adding multiple nonlocal blocks into the backbone network, which forms the three types of networks "Different Position 3 (DP 3)", "Same Position 3 (SP 3)" and "Same Position 5 (SP 5)" shown in Table 4. The results are shown in Table 4. For the model "DP 3", three blocks are added into stage 1, stage 2 and stage 3 (right after the second residual block). We can see that adding three of our proposed nonlocal operators into different stages of the backbone generates a larger improvement than the NS operator and the NL operator (a 2.4% improvement). This is because when NS and NL are added into the early stage, these two models cannot aggregate the low-level features well and interfere with the following blocks. For the model "SP 3" ("SP 5"), we add three (five) consecutively-connected nonlocal blocks into stage 1. Note that, different from the experiments in Tao et al. (2018) and Wang et al. (2018), these consecutively-connected nonlocal blocks share the same affinity matrix. From Table 4, we can see that, benefiting from the stable hypothesis discussed in Sec. 3.3, our gSNL outperforms all other models when adding consecutively-connected nonlocal blocks (rising on average 0.72% over the backbone and 0.41% above the best of the other types of nonlocal blocks) and has a relatively stable performance. However, one drawback is that our gSNL may interfere with the learning when adding only one nonlocal block (the stable hypothesis is not satisfied).
4.3 MAIN RESULTS
We test the networks with the Nonlocal Block (NL), the Nonlocal Stage (NS), the Compact Generalized Nonlocal block (CGNL), the Double Attention Block (A2) and our SNL (gSNL) blocks on different visual learning tasks. The experimental settings are discussed in Sec. 4.1. Our models outperform the other types of nonlocal blocks across several standard benchmarks. Table 5 shows the experimental results on the CIFAR-10 dataset: by adding one proposed block, the Top1 accuracy rises by about 0.65%, which is higher than adding other types of nonlocal blocks (0.3%). For the experiments on the CIFAR-100 dataset shown in Table 7, using our proposed block brings an improvement of about 1.8% with ResNet-50. When using the simpler backbone PreResNet-56, our model still generates a 1.1% improvement, as shown in Table 6.
Table 9 shows the experimental results for the fine-grained image classification task on the CUB-200 dataset. Our model outperforms the other non-channel-concerning blocks and generates a 0.42% improvement. Compared with the channel-wise concerning CGNL block, our model is only slightly lower in Top1. Fig. 4 also shows the visualized feature maps, which are formed by adding the upsampled feature output to the source image. We can see that the feature maps of our proposed block cover more of the critical areas of the birds. For example, both the left and right wings (red square) of the birds are attended to, benefiting from the better long-range modeling of our SNL. Moreover, benefiting from the flexibility of W1, our proposed SNL can also catch a relatively large range of the discriminative parts. Table 8 shows the experimental results on the action recognition task. The network with our proposed block generates a 1.8% improvement over the I3D model and outperforms all other nonlocal models on the UCF-101 dataset.
Table 8: The Results on UCF101
model | top1 | top5
I3D | 81.57% | 95.40%
+ NL | 81.37% | 95.76%
+ NS | 82.50% | 95.84%
+ A2 | 82.68% | 95.85%
+ CGNL | 83.16% | 96.16%
+ *SNL | 82.30% | 95.56%
+ *gSNL | 83.21% | 96.53%
Table 9: The Results on CUB
model | top1 | top5
R-50 | 85.43% | 96.70%
+ NL | 85.34% | 96.77%
+ NS | 85.54% | 96.56%
+ A2 | 86.02% | 96.56%
+ CGNL | 86.14% | 96.34%
+ *SNL | 85.91% | 96.65%
+ *gSNL | 85.95% | 96.79%
5 CONCLUSION
In this paper, we interpret the nonlocal block in the graph view and propose the spectral nonlocal (SNL) block, which is more robust and well-behaved. Our SNL block is a generalized version of the NL and NS blocks and has more freedom for parameter learning. We also give the stable hypothesis for deeper nonlocal structures and extend the SNL to gSNL, which can be applied to deeper nonlocal structures. Experiments on multiple computer vision tasks show the high robustness and performance of our proposed nonlocal block. Future work will focus on applying the SNL block to different vision tasks and studying its robustness for other types of neural networks such as Generative Adversarial Networks (GANs).
A ANALYTIC SOLUTION OF THE CHEBYSHEV APPROXIMATE
Here we give the analytic solution for the coefficients in Chebyshev polynomials (Phillips (2003)):
Theorem 2. Given a function f(x), x = {x1, x2, ..., xN}, it can be optimally approximated by Chebyshev polynomials: f(x) ≈ ∑_{k=0}^{K−1} ak Tk(x), only when ak satisfies: ak = (2/N) ∑_{j=0}^{N} f(xj)Tk(xj). We call the ak the analytic solution of the Chebyshev coefficients.
Based on these theorem, we can get the analytic solution of the parameter θ for Eq. (7):
Lemma 2. The spectral nonlocal operator is best approximated when the function g(λ) = ω is best approximated by the Chebyshev polynomials, i.e. the analytic solutions of the Chebyshev coefficients satisfy:
θk = ak = (2/N) ∑_{j=0}^{N} g(λj)Tk(λj) = (2/N) ∑_{j=0}^{N} ωj Tk(λj)    (12)
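As a quick numerical illustration of Eq. (12), the analytic Chebyshev coefficients can be computed by evaluating the Chebyshev polynomials at the sample points. The following NumPy sketch is our own illustration (the sample points lam and the target values g_values are placeholders):

import numpy as np

def chebyshev_coefficients(g_values, lam, order):
    # a_k = (2/N) * sum_j g(lambda_j) T_k(lambda_j), for k = 0, ..., order-1
    n = len(lam)
    polys = [np.ones_like(lam), lam]                     # T_0, T_1
    while len(polys) < order:
        polys.append(2.0 * lam * polys[-1] - polys[-2])  # T_k = 2x T_{k-1} - T_{k-2}
    return np.array([2.0 / n * np.sum(g_values * polys[k]) for k in range(order)])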
B THE PIECEWISE CHEBYSHEV POLYNOMIALS
Substituting Ak = A (the stable hypothesis) into the Chebyshev polynomials of the affinity matrix A, the polynomials become:
T0(A) = I
T1(A) = A
T2(A) = 2AT1(A) − T0(A) = 2AA − I = 2A − I
T3(A) = 2AT2(A) − T1(A) = 2A(2A − I) − A = A
T4(A) = 2AT3(A) − T2(A) = 2AA − 2A + I = I = T0(A)
T5(A) = 2AT4(A) − T3(A) = 2AI − A = A = T1(A)
T6(A) = 2AT5(A) − T4(A) = 2AA − I = 2A − I = T2(A)
(13)
This cyclic form of the Chebyshev polynomials Tk(A) can be reformulated as a piecewise function:
Tk(A) =
  I,        if k%4 = 0
  A,        if k%4 = 1 || k%4 = 3
  2A − I,   if k%4 = 2
(14)
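The cyclic pattern in Eq. (14) is easy to verify numerically for an idempotent affinity matrix (the stable hypothesis Ak = A). A small NumPy check, using the uniform fully-connected affinity as an assumed example of an idempotent A:

import numpy as np

n = 4
A = np.ones((n, n)) / n                 # idempotent: A @ A = A
I = np.eye(n)

T = [I, A]                              # T_0, T_1
for k in range(2, 9):
    T.append(2 * A @ T[-1] - T[-2])     # Chebyshev recursion T_k = 2A T_{k-1} - T_{k-2}

expected = {0: I, 1: A, 2: 2 * A - I, 3: A}
for k in range(9):
    assert np.allclose(T[k], expected[k % 4])   # matches the piecewise form in Eq. (14)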
C EXPERIMENT OF SEMANTIC SEGMENTATION ON VOC2012 DATASET
For the semantic segmentation task, we conduct experiments on the VOC2012 dataset with the model proposed by Chen et al. (2017). We add different types of nonlocal blocks right before the last residual block in res4 of the ResNet-50. The models are trained for 50 epochs with the SGD optimization algorithm. The learning rate is set to 0.007 with weight decay 5e−4 and momentum 0.9. Experimental results show that the model with our proposed block achieves the best results.
D THE EXAMPLE OF THE AFFINITY MATRIX ON CUB DATASETS
Experiments to verify the stable hypothesis are also conducted on the CUB dataset: we add three consecutively-connected SNL blocks (and NS blocks) into the ResNet-50 (right before the last residual block of res4) and train this model on the train set of the CUB dataset with an initial learning rate of 0.1, which is subsequently divided by 10 at 31, 61 and 81 epochs (200 epochs in total). A weight decay of 1e−4 and momentum of 0.9 are also used. Figure 6 shows the histogram of the strength statistics of the affinity matrix A. We can see that, although a different backbone and dataset are used, the distributions of the k-hop affinity matrices correspond with the experiments on CIFAR-100.
E EXPERIMENTS ON VIDEO-BASED PERSON RE-IDENTIFICATION
Experiments are also conducted on challenging video-based person re-identification datasets, including Mars, ILIDS-VID and PRID2011. For the backbone, we follow the strategy of Gao & Nevatia (2018), which uses pooling (RTMtp) and attention (RTMta) to fuse the spatial-temporal features. Note that the models are trained entirely on ILIDS-VID and PRID2011 rather than fine-tuning the model pre-trained on the Mars dataset. The experimental results are shown in Tables 11, 12 and 13. We can see that on these datasets, our proposed block still generates consistent improvements.
Table 11: The Results on Mars dataset
model | mAP | Rank1
RTMta | 77.70% | 79.10%
+ NL | 72.90% | 80.90%
+ *SNL | 74.00% | 81.98%
RTMtp | 75.70% | 82.30%
+ NL | 75.54% | 83.40%
+ *SNL | 76.80% | 99.92%
Table 12: The Results on ILIDSVID dataset
model | mAP | Rank1
RTMta | 69.70% | 58.70%
+ NL | 66.30% | 56.00%
+ *SNL | 79.40% | 70.00%
RTMtp | 81.60% | 74.70%
+ NL | 83.00% | 75.30%
+ *SNL | 84.80% | 76.60%
Table 13: The Results on PRID2011 dataset
model | mAP | Rank1
RTMta | 86.60% | 79.80%
+ NL | 90.70% | 85.40%
+ *SNL | 91.50% | 86.50%
RTMtp | 90.50% | 86.50%
+ NL | 89.70% | 85.40%
+ *SNL | 92.40% | 88.80%
F ADDITIONAL EXPERIMENTS ON ACTION CLASSIFICATION
Our SNL can also improve the performance of other network structures such as the Pseudo 3D Convolutional Network (P3D) (Qiu et al. (2017)), the Motion-augmented RGB Stream (MARS) (Crasto et al. (2019)), the Slow-Fast Network (Slow-Fast) (Feichtenhofer et al. (2019)) and the Video Transformer Network (VTN) (Kozlov et al. (2019)). For P3D and MARS, our SNL block is inserted right before the last residual layer of res3. For Slow-Fast, we replace its original NL block with our SNL block. For VTN, we replace its multi-head self-attention blocks (parallel-connected NL blocks) with our SNL blocks. The Slow-Fast network is trained end-to-end on the UCF-101 dataset, while the others use models pretrained on the Kinetics-400 dataset and fine-tuned on UCF-101. From Table 14, we can see that all performances are improved when adding our proposed SNL block.
Experiments on the Kinetics-400 dataset are also given in Table 15. We can see that inserting the SNL block into the Slow-Fast Network generates a 2.1% improvement. | 1. What are the reviewer's concerns regarding the presentation of the paper?
2. What are the reviewer's concerns regarding the relevance of the results presented in the paper?
3. Are there any questions or points that the reviewer finds unclear or confusing in the paper? If so, what are they? | Review | Review
I have two general concerns, the first is related to the presentation and the second to the relevance of the results.
(1) Presentation is confusing at many points, for instance:
* It is unclear if theorem in Eq. 4 is original or belongs to Shuman et al 2013. (no proof is given)
* Eq. 8 seems an arbitrary decomposition of the original NonLocal operator that could have been proposed without any reference to the Chebyshev expansion (which, on the other hand, is truncated to 1st order with no extra explanation).
* The point of Fig. 1 and Fig. 4 is not clear. Fig. 1 explains how SpectralNonLocal reduces to NonLocal and NonLocalStage, but we can see this from the formulas. I don't see how this discussion on the Ws relates to the regions highlighted in the bird.
The same applies to Fig. 4. What are we supposed to see in Fig. 4 (and, more importantly, why?).
* What is the CFL condition? (is it the Courant-Friedrichs–Lewy sampling condition?). How is that related to the values of Ws. Can we take those arbitrarily small as suggested in that proof?
* The upper limit in the sums after Eq. 10 is unclear.
* The first time table 4.2 is cited there is no context to understand it (actually there is no table labeled as "Table 4.2"). Where do we see the different number of NonLocal units? This is only clear when you arrive and read the text of page 9 (but not when cited the first time from page 6).
* The explanation of the experiments is a little bit confusing (e.g. what do top1 and top5 mean in the tables?). The only explanation of "top-something" I found in the text has to do with eigenvectors in fig. 5. Does this also apply to the "topX" in the figures?
(2) Nevertheless, the main concern is the scarce relevance of the results: differences of behavior in all tables are about 1%. Then, what is the real advantage of the proposed modification? |
ICLR | Title
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks
Abstract
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates. However, they usually suffer from poor time and query complexities, thereby limiting their practical usefulness. In this work, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on the non-convex Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/√T) convergence rate. The empirical results of attacking the Inception V3 model and the ResNet V2 model on the ImageNet dataset also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the highest attack success rate in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state-of-the-art.
1 INTRODUCTION
Deep Neural Networks (DNNs) have made many breakthroughs in different areas of artificial intelligence such as image classification (Krizhevsky et al., 2012; He et al., 2016a), object detection (Ren et al., 2015; Girshick, 2015), and speech recognition (Mohamed et al., 2012; Bahdanau et al., 2016). However, recent studies show that deep neural networks can be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015) – a tiny perturbation on an image that is almost invisible to human eyes could mislead a well-trained image classifier towards misclassification. Soon later this is proved to be not a coincidence: similar phenomena have been observed in other problems such as speech recognition (Carlini et al., 2016), visual QA (Xu et al., 2017), image captioning (Chen et al., 2017a), machine translation (Cheng et al., 2018), reinforcement learning (Pattanaik et al., 2018), and even on systems that operate in the physical world (Kurakin et al., 2016).
Depending on how much information an adversary can access, adversarial attacks can be classified into two classes: white-box attack (Szegedy et al., 2013; Goodfellow et al., 2015) and black-box attack (Papernot et al., 2016a; Chen et al., 2017c). In the white-box setting, the adversary has full access to the target model, while in the black-box setting, the adversary can only access the input and output of the target model but not its internal configurations. Among the approaches proposed for white-box and black-box attacks, optimization-based methods (Carlini & Wagner, 2017; Chen et al., 2017b;c; Ilyas et al., 2018) are most effective: they usually achieve relatively low distortions and high attack success rates. However, these methods are far from efficient. In the white-box setting, they need to solve constrained optimization problems (Carlini & Wagner, 2017), and are usually significantly slower than the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) or Iterative FGSM (I-FGM) (Kurakin et al., 2016). Applying those methods to one or two examples is fine, yet in the case of attacking hundreds of thousands of examples, e.g. in adversarial training (Kurakin et al., 2016; Madry et al., 2018), this is far from satisfactory.
In the black-box setting, the problem becomes even more severe since these methods need to estimate gradients (Chen et al., 2017c). Therefore, a large number of queries are needed for them to perform a successful attack, especially when the data dimension is large. For example, attacking a 299×299×3 ImageNet image may take them hundreds of thousands of queries. This significantly limits their practical usefulness since they can be easily defeated by limiting the number of queries that an adversary can make to the target model.
In this study, we aim to examine the following question:
Can we improve the efficiency of the optimization-based attack algorithms? In other words, can we use less time and queries to conduct adversarial attacks?
In this work, we provide an affirmative answer to this question by proposing an efficient Frank-Wolfe optimization framework for both white-box and black-box attacks. In summary, we make the following main contributions:
• We propose a novel Frank-Wolfe based adversarial attack framework. The white-box attack algorithm is an iterative first-order method which admits the fast gradient sign method (FGSM) as a one-step special case. The corresponding black-box attack algorithm adopts zeroth-order optimization with two sensing vector options (either from the Euclidean unit sphere or from the standard Gaussian distribution).
• We show that the proposed white-box and black-box attack algorithms enjoy an O(1/√T) convergence rate. We also show that the query complexity of the proposed black-box attack algorithm is linear in the data dimension d.
• Our empirical results on attacking the Inception V3 model with the ImageNet dataset show that (i) the proposed white-box attack algorithm is more efficient than all the baseline white-box algorithms evaluated here, and (ii) the proposed black-box attack algorithm is highly efficient and is also the only algorithm that achieves a 100% attack success rate.
2 RELATED WORK
There is a large body of work on adversarial attacks. In this section, we review the most relevant work in both white-box and black-box attack settings, as well as the non-convex Frank-Wolfe optimization.
White-box Attacks: Szegedy et al. (2013) proposed to use box-constrained L-BFGS algorithm for conducting white-box attacks. Goodfellow et al. (2015) proposed the Fast Gradient Sign Method (FGSM) based on linearization of the network as a simple alternative to L-BFGS. Kurakin et al. (2016) proposed to iteratively perform one-step FGSM (Goodfellow et al., 2015) algorithm and clips the adversarial point back to the distortion limit after every iteration. It is called Basic Iterative Method (BIM) or I-FGM in the literature. Madry et al. (2018) showed that for the L∞ norm case, BIM/I-FGM is equivalent to Projected Gradient Descent (PGD), which is a standard tool for constrained optimization. Papernot et al. (2016b) proposed JSMA to greedily attack the most significant pixel based on the Jacobian-based saliency map. Moosavi-Dezfooli et al. (2016) proposed attack methods by projecting the data to the closest separating hyperplane. Carlini & Wagner (2017) introduced the so-called CW attack by proposing multiple new loss functions for generating adversarial examples. Chen et al. (2017b) followed CW’s framework and use an Elastic Net term as the distortion penalty.
Black-box Attacks: One popular family of black-box attacks (Hu & Tan, 2017; Papernot et al., 2016a; 2017) is based on the transferability of adversarial examples (Liu et al., 2018; Bhagoji et al., 2017), where an adversarial example generated for one DNN may be reused to attack other neural networks. This allows the adversary to construct a substitute model that mimics the targeted DNN, and then attack the constructed substitute model using white-box attack methods. However, this type of attack algorithms usually suffer from large distortions and relatively low success rates (Chen et al., 2017c). To address this issue, Chen et al. (2017c) proposed the Zeroth-Order Optimization (ZOO) algorithm that extends the CW attack to the black-box setting and uses a zeroth-order optimization approach to conduct the attack. Although ZOO achieves much higher attack success rates than the substitute model-based black-box attacks, it suffers from a poor query complexity since its naive implementation requires to estimate the gradients of all the coordinates (pixels) of the image. To improve its query complexity, several approaches have been proposed. For example, Tu et al. (2018) introduces an adaptive random gradient estimation algorithm and a well-trained Autoencoder to speed up the attack process. Ilyas et al. (2018) and Liu et al. (2018) improved ZOO’s query complexity by using Natural Evolutionary Strategies (NES) (Wierstra et al., 2014; Salimans et al., 2017) and active learning, respectively.
Non-convex Frank-Wolfe Algorithms: The Frank-Wolfe algorithm (Frank & Wolfe, 1956), also known as the conditional gradient method, is an iterative optimization method for constrained optimization problems. Jaggi (2013) revisited the Frank-Wolfe algorithm in 2013 and provided a stronger and more general convergence analysis in the convex setting. Yu et al. (2017) proved the first convergence rate for Frank-Wolfe type algorithms in the non-convex setting. Lacoste-Julien (2016) provided the convergence guarantee for the Frank-Wolfe algorithm in the non-convex setting with adaptive step sizes. Reddi et al. (2016) further studied the convergence rate of the non-convex stochastic Frank-Wolfe algorithm in the finite-sum optimization setting. Very recently, Staib & Jegelka (2017) proposed to use Frank-Wolfe for distributionally robust training (Sinha et al., 2018). Balasubramanian & Ghadimi (2018) proved the convergence rate of the zeroth-order nonconvex Frank-Wolfe algorithm using a one-sided finite difference gradient estimator with standard Gaussian sensing vectors.
3 METHODOLOGY
3.1 NOTATIONS
Throughout the paper, scalars are denoted by lower case letters, vectors by lower case bold face letters, and sets by calligraphic upper case letters. For a vector x ∈ Rd, we denote the Lp norm of x by ‖x‖p = (∑_{i=1}^{d} |xi|^p)^{1/p}. Specially, for p = ∞, the L∞ norm of x is ‖x‖∞ = max_{i=1,...,d} |xi|. We denote by PX(x) the projection of the vector x onto the set X.
3.2 PROBLEM FORMULATION
According to the attack purpose, attacks can be divided into two categories: untargeted attacks and targeted attacks. In particular, an untargeted attack aims to turn the prediction into any incorrect label, while a targeted attack, which is considerably harder, requires misleading the classifier to a specific target class. In this work, we follow the literature (Carlini & Wagner, 2017; Ilyas et al., 2018) and focus on the strictly harder targeted attack setting. It is worth noting that our proposed algorithm can be extended to untargeted attacks straightforwardly.
Let us define f(·) as the classification loss function of the targeted DNN. For targeted attacks, we aim to learn an adversarial example x that is close enough to the original input xori and can be misclassified to the target class ytar. The corresponding optimization problem 1 is defined as:
min_x f(x, ytar)   subject to   ‖x − xori‖p ≤ ε.    (3.1)
Evidently, the constraint set X := {x | ‖x − xori‖p ≤ ε} is a bounded convex set when p ≥ 1. Normally, p = 2 and p = ∞ are used to measure the distortion ‖x − xori‖p, resulting in the L2 attack model and the L∞ attack model respectively. In this work, we study both attack models. In the sequel, since we mainly focus on the targeted attack case, we use f(x) to denote f(x, ytar) for simplicity.
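For later reference, the projection PX used by the attack algorithms has a simple closed form for both norms. A small NumPy sketch (our own illustration, written for flattened inputs, with the extra clip to the valid pixel range mentioned in footnote 1):

import numpy as np

def project(x, x_ori, eps, p):
    # Project x onto {x : ||x - x_ori||_p <= eps}, then onto the [0, 1] pixel box.
    delta = x - x_ori
    if p == 2:
        norm = np.linalg.norm(delta)
        if norm > eps:
            delta = delta * (eps / norm)
    elif p == np.inf:
        delta = np.clip(delta, -eps, eps)
    return np.clip(x_ori + delta, 0.0, 1.0)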
3.3 FRANK-WOLFE WHITE-BOX ATTACKS
The Frank-Wolfe algorithm (Frank & Wolfe, 1956), also known as conditional gradient descent, is a popular optimization tool for constrained optimization. Different from PGD, which first performs gradient descent followed by a projection step at each iteration, the Frank-Wolfe algorithm calls a Linear Minimization Oracle (LMO) over the constraint set X at each iteration, i.e.,
LMO ∈ argmin_{v∈X} 〈v, ∇f(xt)〉.
The LMO can be seen as the minimization of the first-order Taylor expansion of f(·) at the point xt:
min_{v∈X} f(xt) + 〈v − xt, ∇f(xt)〉.
By calling the LMO, Frank-Wolfe solves the linear problem over X and then performs a weighted average with the previous iterate to obtain the final update.
We present our proposed Frank-Wolfe white-box attack algorithm in Algorithm 1, which is built upon the original Frank-Wolfe algorithm. The key difference between Algorithm 1 and the standard Frank-Wolfe algorithm is in Line 4, where the LMO is called over a slightly relaxed constraint set Xλ := {x | ‖x − xori‖p ≤ λε} with λ ≥ 1, instead of the original constraint set X. When λ = 1, the set Xλ reduces to X, and Algorithm 1 reduces to standard Frank-Wolfe. We argue that this modification makes our algorithm more general, and gives rise to better attack results.
1Note that there is usually an additional constraint on the input variable x, e.g., x ∈ [0, 1]n for normalized image inputs.
Algorithm 1 Frank-Wolfe White-box Attack Algorithm
1: input: number of iterations T, step sizes {γt}, λ > 0, original image xori;
2: x0 = xori
3: for t = 0, . . . , T − 1 do
4:   vt = argmin_{v∈Xλ} 〈v, ∇f(xt)〉  // LMO
5:   dt = vt − xt
6:   xt+1 = xt + γt dt
7:   if λ > 1 then
8:     xt+1 = PX(xt+1)
9:   end if
10: end for
11: output: xT
The LMO solution itself can be expensive to obtain in general. Fortunately, applying Frank-Wolfe to solve (3.1) actually gives us a closed-form LMO solution. We provide the solutions of LMO (Line 4 in Algorithm 1) for L2 norm and L∞ norm cases respectively:
vt = −λε · ∇f(xt)/‖∇f(xt)‖₂ + xori,   (L2 norm)
vt = −λε · sign(∇f(xt)) + xori.   (L∞ norm)
The derivation can be found in the supplemental materials.
Note that when T = 1 and λ = 1, substituting the above LMO solutions into Algorithm 1 yields the final update x1 = x0 − γtε · sign(∇f(x0)), which reduces to FGSM 2 when γt = 1. A similar derivation also applies to the L2 norm case. Therefore, just like PGD, our proposed Frank-Wolfe white-box attack also includes FGSM (FGM) as a one-step special instance.
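Putting the closed-form LMO and the update rule together, a minimal PyTorch sketch of Algorithm 1 for the L∞ case is given below. The loss function, the choices of λ, γt and T, and the final clipping to [0, 1] are our assumptions for illustration, not the authors' reference implementation.

import torch

def fw_whitebox_attack(model, loss_fn, x_ori, y_tar, eps, lam=20.0, T=20, gamma=0.8):
    # Frank-Wolfe white-box targeted attack (L_inf), following Algorithm 1.
    x = x_ori.clone()
    for t in range(T):
        x = x.detach().requires_grad_(True)
        loss = loss_fn(model(x), y_tar)
        grad = torch.autograd.grad(loss, x)[0]
        v = x_ori - lam * eps * torch.sign(grad)          # closed-form LMO over X_lambda
        x = x + gamma * (v - x)                           # convex combination step
        x = x_ori + torch.clamp(x - x_ori, -eps, eps)     # project back onto X when lam > 1
        x = torch.clamp(x, 0.0, 1.0)                      # keep a valid pixel range
    return x.detach()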
3.4 FRANK-WOLFE BLACK-BOX ATTACKS
Next we consider the black-box setting, where we cannot perform back-propagation to calculate the gradient of the loss function anymore. Instead, we can only query the DNN system’s outputs with specific inputs. To clarify, here the output refers to the logit layer’s output (confidence scores for classification), not the final prediction label. The label-only setting is doable under our framework, but will incur extra difficulty such as designing new loss functions. For simplicity, here we consider the confidence score output.
We propose a zeroth-order Frank-Wolfe based algorithm to solve this problem. Algorithm 2 shows our proposed Frank-Wolfe black-box attack algorithm. The key difference between our proposed black-box and white-box attacks is an extra gradient estimation step, which is presented in Line 4 of Algorithm 2. Also note that for the final output, we provide two options. While option II is the common choice in practice, option I is also provided for ease of theoretical analysis.
Like many other zeroth-order optimization algorithms (Shamir, 2017; Flaxman et al., 2005), Algorithm 3 uses symmetric finite differences to estimate the gradient and therefore gets rid of the dependence on back-propagation in the white-box setting. Different from Chen et al. (2017c), here we do not utilize the natural basis as our sensing vectors; instead, we provide two options: one is to use vectors uniformly sampled from the Euclidean unit sphere and the other is to use vectors sampled from the standard multivariate Gaussian distribution. This greatly improves the gradient estimation efficiency compared to sensing with the natural basis, since that option can only estimate one coordinate of the gradient vector per query. In practice, both options provide us with competitive experimental results. It is worth noting that the NES method (Wierstra et al., 2014) with antithetic sampling (Salimans et al., 2017) used in Ilyas et al. (2018) yields a similar formula to our Option II in Algorithm 3.
2The extra clipping operation in FGSM is to project to the additional box constraint for image classification task. We will also need this clipping operation at the end of each iteration for specific tasks such as image classification.
Algorithm 2 Frank-Wolfe Black-box Attack Algorithm
1: input: number of iterations T, step sizes {γt}, λ > 0, original image xori, target label ytar;
2: x0 = xori
3: for t = 0, . . . , T − 1 do
4:   qt = ZERO ORD GRAD EST(xt)  // Algorithm 3
5:   vt = argmin_{v∈Xλ} 〈v, qt〉
6:   dt = vt − xt
7:   xt+1 = xt + γt dt
8:   if λ > 1 then
9:     xt+1 = PX(xt+1)
10:  end if
11: end for
12: Option I: xa is uniformly randomly chosen from {xt}_{t=1}^{T}
13: Option II: xa = xT
14: output: xa
Algorithm 3 Zeroth-Order Gradient Estimation (ZERO ORD GRAD EST)
1: parameters: number of gradient estimation samples b, sampling parameter δt;
2: q = 0
3: for i = 1, . . . , b do
4:   Option I: Sample ui uniformly from the Euclidean unit sphere with ‖ui‖₂ = 1
       q = q + (d/(2δtb)) (f(xt + δtui) − f(xt − δtui)) ui
5:   Option II: Sample ui from the standard Gaussian distribution N(0, I)
       q = q + (1/(2δtb)) (f(xt + δtui) − f(xt − δtui)) ui
6: end for
7: output: q
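A NumPy sketch of the gradient estimator in Algorithm 3 (Option I, sensing vectors drawn uniformly from the Euclidean unit sphere) is given below; here f is assumed to be a scalar-valued query function returning the loss of the target model for a flattened input, and the default values of b and δt are placeholders.

import numpy as np

def zero_ord_grad_est(f, x, b=25, delta=0.001):
    # Option I: symmetric finite differences with unit-sphere sensing vectors.
    d = x.size
    q = np.zeros_like(x)
    for _ in range(b):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)                            # uniform on the Euclidean unit sphere
        q += d / (2.0 * delta * b) * (f(x + delta * u) - f(x - delta * u)) * u
    return q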
4 MAIN THEORY
In this section, we establish the convergence guarantees for our proposed Frank-Wolfe adversarial attack algorithms described in Section 3. First, we introduce the convergence criterion for our Frank-Wolfe adversarial attack framework.
4.1 CONVERGENCE CRITERION
The loss function of common DNN models is generally nonconvex. In addition, (3.1) is a constrained optimization problem. For such general nonconvex constrained optimization, we typically adopt the Frank-Wolfe gap as the convergence criterion (since the gradient norm of f is no longer a proper criterion for constrained optimization problems):
g(xt) = max_{x∈X} 〈x − xt, −∇f(xt)〉.
Note that for the Frank-Wolfe gap, we always have g(xt) ≥ 0, and xt is a stationary point of the constrained optimization problem if and only if g(xt) = 0. Also, the Frank-Wolfe gap is affine invariant and is not tied to any specific choice of norm, which makes it a perfect convergence criterion for Frank-Wolfe based algorithms.
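Since the Frank-Wolfe gap only involves a linear maximization over X, it can also be evaluated in closed form. For instance, for the L∞ ball of radius ε around xori, a small NumPy sketch (our own illustration) is:

import numpy as np

def fw_gap_linf(grad, x, x_ori, eps):
    # g(x) = max_{||x' - x_ori||_inf <= eps} <x' - x, -grad>, in closed form.
    x_star = x_ori - eps * np.sign(grad)      # maximizer of the linear objective
    return np.dot(x_star - x, -grad)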
4.2 CONVERGENCE GUARANTEE FOR FRANK-WOLFE WHITE-BOX ATTACK
Before providing the convergence guarantee of the Frank-Wolfe white-box attack (Algorithm 1), we introduce the following assumptions, which are essential to the convergence analysis.
Assumption 4.1. Function f(·) is L-smooth with respect to x, i.e., for any x,x′, it holds that
f(x′) ≤ f(x) + ∇f(x)ᵀ(x′ − x) + (L/2)‖x′ − x‖₂².
Assumption 4.1 is a standard assumption in nonconvex optimization, and is also adopted in other Frank-Wolfe literature such as Lacoste-Julien (2016); Reddi et al. (2016). Note that even though the smoothness assumption does not hold for general DNN models, a recent study (Santurkar et al., 2018) shows that batch normalization that is used in many modern DNNs such as Inception V3
model, actually makes the optimization landscape significantly smoother 3. This justifies the validity of Assumption 4.1.
Assumption 4.2. Set X is bounded with diameter D, i.e., ‖x− x′‖2 ≤ D for all x,x′ ∈ X .
Assumption 4.2 implies that the input space is bounded. For common tasks such as image classification, given the fact that images have a bounded pixel range and ε is a small constant, this assumption trivially holds.
Now we present the theorem, which characterizes the convergence rate of our proposed Frank-Wolfe white-box adversarial attack algorithm presented in Algorithm 1.
Theorem 4.3. Under Assumptions 4.1 and 4.2, let γt = γ = √(2(f(x0) − f(x∗))/(LD²T)) and denote g̃T = min_{1≤k≤T} g(xk), where {xk}_{k=1}^{T} are the iterates of Algorithm 1 with λ = 1. Then we have
g̃T ≤ √(LD²(f(x0) − f(x∗))/(2T)),
where x∗ is the optimal solution to (3.1).
Remark 4.4. Theorem 4.3 suggests that our proposed Frank-Wolfe white-box attack algorithm achieves an O(1/√T) rate of convergence. Note that a similar result has been proved in Lacoste-Julien (2016) under a different choice of step size.
4.3 CONVERGENCE GUARANTEE FOR FRANK-WOLFE BLACK-BOX ATTACK
Next we analyze the convergence of our proposed Frank-Wolfe black-box adversarial attack algorithm presented in Algorithm 2.
In order to prove the convergence of our proposed Frank-Wolfe black-box attack algorithm, we need the following additional assumption that ‖∇f(0)‖₂ is bounded. Assumption 4.5. The gradient of f(·) at the zero point satisfies max_y ‖∇f(0, y)‖₂ ≤ Cg.
Following the analysis in Shamir (2017), let fδ(x) = Eu[f(x+δu)], which is the smoothed version of f(x). This smoothed function value plays a central role in our theoretical analysis, since it bridges the finite difference gradient approximation with the actual gradient. The following lemma shows this relationship.
Lemma 4.6. For the gradient estimator qt in Algorithm 3, its expectation and variance satisfy
E[qt] = ∇fδ(xt),   E‖qt − E[qt]‖₂² ≤ (1/b)(2d(Cg + LD)² + (1/2)δt²L²d²).
Now we present the theorem which characterizes the convergence rate of Algorithm 2.
Theorem 4.7. Under Assumptions 4.1, 4.2 and 4.5, let γt = γ = √(2(f(x0) − f(x∗))/(LD²T)), b = Td and δt = √(2/(Td²)). Suppose we use Option I in Algorithm 2 and Option II for Algorithm 3; then the output xa of Algorithm 2 with λ = 1 satisfies
E[g(xa)] ≤ (D/√(2T)) · (√(L(f(x0) − f(x∗))) + 2(L + Cg + LD)),
where x∗ is the optimal solution to (3.1).
Remark 4.8. Theorem 4.7 suggests that Algorithm 2 also enjoys an O(1/√T) rate of convergence. In terms of query complexity, the total number of queries needed is Tb = T²d, which is linear in the data dimension d. In fact, in the experiments we observe that b can be substantially smaller, e.g., b = 25, which is much lower than the theorem suggests. Note that although we only prove the result for Option I in Algorithm 3, it can be readily extended to Option II (the Gaussian sensing vector case).
3The original argument in Santurkar et al. (2018) refers to the smoothness with respect to each layer’s parameters. Note that the first layer’s parameters are in the mirror position (in terms of backpropagation) as the network inputs. Therefore, the argument in Santurkar et al. (2018) can also be applied here with respect to the network inputs.
5 EXPERIMENTS
In this section, we present the experimental results for our proposed Frank-Wolfe attack framework against other state-of-the-art adversarial attack algorithms in both white-box and black-box settings. All of our experiments are conducted on Amazon AWS p3.2xlarge servers which come with Intel Xeon E5 CPU and one NVIDIA Tesla V100 GPU (16G RAM). All experiments are implemented in Tensorflow platform version 1.10.0 within Python 3.6.4.
5.1 EVALUATION SETUP AND METRICS
We test the attack effectiveness of all algorithms by evaluating on a pre-trained Inception V3 model (Szegedy et al., 2016) and a ResNet V2 50 (He et al., 2016b) model that are trained on ImageNet dataset (Deng et al., 2009). The pre-trained Inception V3 model is reported to have a 78.0% top-1 accuracy and a 93.9% top-5 accuracy. The pre-trained ResNet V2 model is reported to have a 75.6% top-1 and a 92.8% top-5 accuracy. We randomly choose 500 images from the ImageNet validation set that are verified to be correctly classified by the pre-trained model and also randomly choose a target class for each image. Each image has a dimension of 299 × 299 × 3 and we test all attack algorithms through the same randomly chosen data samples and target labels.
We test both L2 norm based and L∞ norm based attacks. In the white-box setting, we perform binary search / grid search for the best distortion parameter (ε in our formulation and c in CW's regularized formulation). In the black-box setting, for the L2 norm based attack we set ε = 5, and for the L∞ based attack we set ε = 0.05. For white-box attacks, we restrict a maximum of 1,000 iterations per attack for each method. For black-box attacks, we set a maximum query limit of 500,000 per attack per image for each method.
For all algorithms, we stop the algorithm when a successful attack is found. For our proposed blackbox attack, we use option II in Algorithm 2 and test both options in Algorithm 3. We set the number of gradient estimation samples b = 25 for Algorithm 2. More detailed description on parameter settings can be found in the supplemental materials.
We evaluate the final performance through the attack success rate, where success is defined as making the classifier output the exact target class label (not any incorrect label). We also measure the average attack time per image, the average distortion (only on successfully attacked samples) and the average number of queries needed (only for black-box attacks) per image. For a fair time comparison, even though some of the algorithms including ours can be written in batch form (attacking multiple images at one time), all algorithms are set to attack one image at a time.
Due to page limit, we leave all experimental results on ResNet V2 model in the supplemental materials.
5.2 BASELINE METHODS
We compare the proposed algorithms with several state-of-the-art baseline algorithms. Specifically, we compare the proposed white-box attack algorithm with 4 (i) PGD (Madry et al., 2018) (which is essentially I-FGM (Kurakin et al., 2016)), (ii) CW attack (Carlini & Wagner, 2017) and (iii) EAD attack (Chen et al., 2017b). We compare the proposed black-box attack algorithm with (i) ZOO attack (Chen et al., 2017c) and (ii) NES-PGD attack (Ilyas et al., 2018).
5.3 WHITE-BOX ATTACK EXPERIMENTS
In this subsection, we present the white-box attack experiments on the Inception V3 model. Tables 1 and 2 present our experimental results for L2 norm and L∞ norm based white-box attacks respectively. As we can observe from the tables, the attack success rate is 100% for every method. For the other baselines in the L2 norm case, the CW method achieves the smallest average distortion, yet it comes with an expensive time cost. The EAD method has neither a time advantage nor a distortion advantage in this experiment, probably due to its different motivation in attacking. PGD has moderate average distortion, yet it also costs quite some time to finish the attack. On the other hand, our proposed algorithm achieves the shortest attack time with moderate distortion. It significantly reduces the time complexity needed for attacking data with large dimensionality. For the L∞ norm case, the CW method takes significantly longer and does not perform very well on average distortion either.
4We did not compare with FGM (FGSM) (Goodfellow et al., 2015) since it basically has zero success rate for targeted attack on Inception V3 or ResNet V2 models.
This is largely because the original CW attack was designed for the L2 norm, and in order to apply it to the L∞ norm attack, a special design is needed, which sacrifices its runtime performance. Again, our proposed white-box attack algorithm achieves the shortest average attack time and a moderate average distortion.
In Figure 1, we also examine the effect of λ in our proposed Frank-Wolfe white-box attack algorithm. We plot the objective loss function value of attacking one example against the number of iterations for both L2 and L∞ based white-box attack on Inception V3 model. From the plot, we can see that larger λ indeed leads to faster convergence.
5.4 BLACK-BOX ATTACK EXPERIMENTS
In this subsection, we present the black-box attack experiments on the Inception V3 model. For black-box attacks, the attack success rate, time and number of queries needed are more meaningful evaluation metrics than distortion distances. Therefore, we omit all the grid search / binary search steps that are used in the white-box setting, since extra time / queries are needed for finding parameters that obtain better distortion distances.
Tables 3 and 4 present our experimental results for L2 norm and L∞ norm based black-box attacks respectively. For the ZOO method, note that it only has an L2 norm version and it follows CW's framework and thus uses a different loss function and problem formulation (it cannot exactly control the adversarial example to be within the distortion limit; we manage to keep the average distortion around ε for ZOO while other methods have average distortions very close to ε). Furthermore, we can observe that ZOO is quite slow in this task. Attacking a single image can take up to 2 hours for ZOO and it is only able to achieve a 74.8% success rate (compared with the 88.9% success rate in the original paper; we think the main reason is that the query limit here is only half of the query limit
in the original paper). The NES-PGD method, while greatly improving ZOO's performance, still cannot achieve a 100% success rate in either attack model and takes relatively more time and queries. In sharp contrast, our proposed Frank-Wolfe black-box attacks (both option I and option II) achieve the highest success rate in both L2 norm and L∞ norm based black-box attacks and further largely improve the attack efficiency.
Figure 2 illustrates the attack success rate against the number of queries plot for different algorithms in both L2 norm and L∞ norm based black-box attacks on Inception V3 model. As we can see from the plot, our proposed Frank-Wolfe black-box attack algorithm (both options) achieves the highest attack success rate and best efficiency (least queries needed for achieving the same success rate), especially in the L2 norm case.
6 CONCLUSIONS
In this work, we propose a Frank-Wolfe framework for efficient and effective adversarial attacks. Our proposed white-box and black-box attack algorithms enjoy an O(1/ √ T ) rate of convergence, and the query complexity of the proposed black-box attack algorithm is linear in data dimension d. Finally, our empirical study on attacking Inception V3 model with ImageNet dataset yields a 100% attack success rate for our proposed algorithms, even in the setting of black-box attack.
A LINEAR MINIMIZATION ORACLE (LMO) SOLUTIONS
Denote u = (v − xori)/(λε); the linear minimization problem can then be written as
min_{‖v−xori‖p ≤ λε} 〈v − xori, ∇f(xt)〉 = min_{‖u‖p ≤ 1} λε · 〈u, ∇f(xt)〉 = −max_{‖u‖p ≤ 1} λε · 〈u, −∇f(xt)〉 = −λε · ‖∇f(xt)‖p∗,
where ‖·‖p∗ denotes the dual norm of ‖·‖p. For the p = 2 case, we have 〈(v − xori)/(λε), −∇f(xt)〉 = ‖∇f(xt)‖₂ at the optimum, which immediately implies that
v = −λε · ∇f(xt)/‖∇f(xt)‖₂ + xori.
For the p = ∞ case, we have 〈(v − xori)/(λε), −∇f(xt)〉 = ‖∇f(xt)‖₁ at the optimum, which immediately implies that
v = −λε · sign(∇f(xt)) + xori.
For ease of comparison, we show the full update formula (before the final projection step) of our algorithm. In detail, for the p = ∞ case, our algorithm takes the following update formula:
xt+1 = (1 − γt)xt + γtvt = (1 − γt)xt − λεγt · sign(∇f(xt)) + γt · xori = xt − λεγt · sign(∇f(xt)) − γt(xt − xori),
and for the p = 2 case, it takes
xt+1 = xt − λεγt · ∇f(xt)/‖∇f(xt)‖₂ − γt(xt − xori).
Compared with PGD, the full update (before final projection step) of Frank-Wolfe white-box attack includes an extra parameter λ before the normalized gradient, as well as an extra term (xt − xori). This difference makes the behavior of Frank-Wolfe based attacks different from that of PGD based attacks.
B PROOF OF THE MAIN THEORY IN SECTION 4
B.1 PROOF OF THEOREM 4.3
Proof. For simplicity, we denote f(xt, ytar) by f(xt) for the rest of the proof. First, by Assumption 4.1, we have
f(xt+1) ≤ f(xt) + ∇f(xt)ᵀ(xt+1 − xt) + (L/2)‖xt+1 − xt‖₂²
= f(xt) + γ∇f(xt)ᵀ(vt − xt) + (Lγ²/2)‖vt − xt‖₂²
≤ f(xt) + γ∇f(xt)ᵀ(vt − xt) + LD²γ²/2,
where the last inequality uses the bounded domain condition in Assumption 4.2. Note that by the definition of the Frank-Wolfe gap, we have
f(xt+1) ≤ f(xt) − γg(xt) + LD²γ²/2.
Summing the above inequality over t, we obtain
f(xT) ≤ f(x0) − ∑_{k=0}^{T−1} γg(xk) + TLD²γ²/2
≤ f(x0) − γT · g̃T + TLD²γ²/2,
where the second inequality follows from the definition of g̃T. Note that by optimality we easily have f(xT) ≥ f(x∗). Rearranging the above inequality, we have
g̃T ≤ (f(x0) − f(x∗))/(Tγ) + LD²γ/2
≤ √(LD²(f(x0) − f(x∗))/(2T)),
where the second inequality is achieved when γ = √(2(f(x0) − f(x∗))/(LD²T)).
B.2 PROOF OF LEMMA 4.6
Proof. For simplicity we denote f(·, ytar) by f(·) for the rest of the proof. Let us denote ψi = (d/(2δtb))(f(xt + δtui) − f(xt − δtui))ui. For the first part, we have
E[ψi] = Eu[(d/(2δtb))(f(xt + δtui) − f(xt − δtui))ui]
= Eu[(d/(2δtb)) f(xt + δtui)ui] + Eu[(d/(2δtb)) f(xt − δtui)(−ui)]
= Eu[(d/(δtb)) f(xt + δtui)ui]
= (1/b)∇fδ(xt),
where the third equality holds due to the symmetric property of ui and the last equality follows from Lemma 4.1(a) in Gao et al. (2018). Therefore, we have
E[qt] = E[∑_{i=1}^{b} ψi] = ∇fδ(xt).
For the second part, note that the ψi's are independent from each other due to the independence of ui, so we have
E‖qt − E[qt]‖₂² = E‖∑_{i=1}^{b} (ψi − Eψi)‖₂² = ∑_{i=1}^{b} E‖ψi − Eψi‖² ≤ ∑_{i=1}^{b} E‖ψi‖².
Now take a look at E‖ψi‖²:
E‖ψi‖² = Eu‖(d/(2δtb))(f(xt + δtui) − f(xt) + f(xt) − f(xt − δtui))ui‖₂²
≤ (1/(2b²)) Eu‖(d/δt)(f(xt + δtui) − f(xt))ui‖₂² + (1/(2b²)) Eu‖(d/δt)(f(xt) − f(xt − δtui))ui‖₂²
= (1/b²) Eu‖(d/δt)(f(xt + δtui) − f(xt))ui‖₂²
≤ (1/b²)(2d‖∇f(xt)‖₂² + (1/2)δt²L²d²),
where the first inequality is due to the fact that (a + b)² ≤ 2a² + 2b², the second equality follows from the symmetric property of ui, and the last inequality is by Lemma 4.1(b) in Gao et al. (2018). Also note that by Assumptions 4.1 and 4.5 we have
‖∇f(xt)‖₂² ≤ (‖∇f(0)‖₂ + L‖xt‖₂)² ≤ (Cg + LD)².
Combining all the above results, we obtain
E‖qt − E[qt]‖₂² ≤ (1/b)(2d(Cg + LD)² + (1/2)δt²L²d²).
B.3 PROOF OF THEOREM 4.7
Proof. For simplicity we denote f(xt, ytar) by f(xt) for the rest of the proof. First, by Assumption 4.1, we have
f(xt+1) ≤ f(xt) + ∇f(xt)ᵀ(xt+1 − xt) + (L/2)‖xt+1 − xt‖₂²
= f(xt) + γ∇f(xt)ᵀ(vt − xt) + (Lγ²/2)‖vt − xt‖₂²
≤ f(xt) + γ∇f(xt)ᵀ(vt − xt) + LD²γ²/2
= f(xt) + γqtᵀ(vt − xt) + γ(∇f(xt) − qt)ᵀ(vt − xt) + LD²γ²/2,
where the second inequality uses the bounded domain condition in Assumption 4.2. Now define an auxiliary quantity:
v̂t = argmin_{v∈X} 〈v, ∇f(xt)〉.
According to the definition of g(xt), this immediately implies g(xt) = 〈v̂t − xt, −∇f(xt)〉. Then we further have
f(xt+1) ≤ f(xt) + γqtᵀ(v̂t − xt) + γ(∇f(xt) − qt)ᵀ(vt − xt) + LD²γ²/2
= f(xt) + γ∇f(xt)ᵀ(v̂t − xt) + γ(∇f(xt) − qt)ᵀ(vt − v̂t) + LD²γ²/2
= f(xt) − γg(xt) + γ(∇f(xt) − qt)ᵀ(vt − v̂t) + LD²γ²/2
≤ f(xt) − γg(xt) + γD · ‖∇f(xt) − qt‖₂ + LD²γ²/2,
where the first inequality follows from the optimality of vt in Algorithm 2 and the last inequality holds due to the Cauchy-Schwarz inequality. Taking expectations on both sides of the above inequality, we have
E[f(xt+1)] ≤ E[f(xt)] − γE[g(xt)] + γD · E‖∇f(xt) − qt‖₂ + LD²γ²/2
≤ E[f(xt)] − γE[g(xt)] + γD · (‖∇f(xt) − E[qt]‖₂ + E‖qt − E[qt]‖₂) + LD²γ²/2
≤ E[f(xt)] − γE[g(xt)] + γD · (‖∇f(xt) − E[qt]‖₂ + √(E‖qt − E[qt]‖₂²)) + LD²γ²/2
≤ E[f(xt)] − γE[g(xt)] + γD · (‖∇f(xt) − ∇fδ(xt)‖₂ + √((4d(Cg + LD)² + δt²L²d²)/(2b))) + LD²γ²/2
≤ E[f(xt)] − γE[g(xt)] + γD · (δtLd/2 + (2√d(Cg + LD) + δtLd)/√(2b)) + LD²γ²/2,
where the second inequality follows from the triangle inequality, the third inequality is due to Jensen's inequality and the last inequality holds due to Lemma 4.6.
Summing the above inequality over t, we obtain
E[f(xT)] ≤ f(x0) − ∑_{t=0}^{T−1} γE[g(xt)] + γDT · (δtLd/2 + (2√d(Cg + LD) + δtLd)/√(2b)) + TLD²γ²/2
≤ f(x0) − γT · E[g(xa)] + γDT · (δtLd/2 + (2√d(Cg + LD) + δtLd)/√(2b)) + TLD²γ²/2,
where the second inequality follows from the definition of xa in Option I of Algorithm 2. Note that by optimality, we have f(xT) ≥ f(x∗). Rearranging the above inequality, we obtain
E[g(xa)] ≤ (f(x0) − f(x∗))/(Tγ) + LD²γ/2 + D · (δtLd/2 + (2√d(Cg + LD) + δtLd)/√(2b))
≤ (D/√(2T)) · (√(L(f(x0) − f(x∗))) + 2(L + Cg + LD)),
where the second inequality is achieved by setting γ = √(2(f(x0) − f(x∗))/(LD²T)), b = Td and δt = √(2/(Td²)).
C PARAMETERS SETTINGS FOR SECTION 5
For the Frank-Wolfe white-box attack algorithm, we list the parameters used in Section 5 in Table 5.
Similarly, for the Frank-Wolfe black-box attack algorithm, we list the parameters used in Section 5 in Table 6.
We also list the hyperparameters used for the baseline algorithms. Specifically, for PGD, we set a step size of 0.05 for the L2 case and 0.01 for the L∞ case. For CW, we set a step size of 0.002 for the L2 case and a step size of 0.005 for the L∞ case. The confidence is set to 0 and we perform 10 rounds of binary search for the constant, starting from 0.01 (L2 case) and 0.001 (L∞ case). For EAD, we use a step size of 0.01 and the same binary search strategy as CW, and β is set to 0.001. For the black-box experiments, for ZOO, we set a step size of 0.01 and the initial constant is set to 1 without binary search to achieve better query complexity. For NES-PGD, we set a step size of 0.3 for the L2 case and 0.01 for the L∞ case.
D ADDITIONAL EXPERIMENTS
D.1 RESNET V2 WHITE-BOX ATTACK RESULTS
In this subsection, we present the white-box attack experiments on the ResNet V2 model. Tables 7 and 8 present our experimental results for L2 norm and L∞ norm based white-box attacks respectively. For the other baselines in the L2 norm case, surprisingly, the CW method cannot achieve the best L2 distortion as it does on the Inception V3 model. The EAD method is relatively faster than CW in terms of attack time, yet it has the largest distortion and a quite low success rate of 73.0%. PGD has the smallest average distortion in this setting, yet it also costs a lot of attack time. On the other hand, our proposed algorithm achieves the highest attack success rate within a very short attack time and with very small distortion. It significantly reduces the time complexity needed for effectively attacking data with large dimensionality. For the L∞ norm case, the CW method takes significantly longer and does not perform very well on average distortion either. Our proposed white-box attack algorithm, on the other hand, again achieves the shortest average attack time and a 100% success rate.
D.2 RESNET V2 BLACK-BOX ATTACK RESULTS
In this subsection, we present the black-box experiments on the ResNet V2 model. We again mainly focus on evaluating the attack success rate, time and number of queries needed. In the previous experiments on the Inception V3 model, we showed the performance of different black-box attack algorithms given a sufficient number of queries (i.e., 500,000 per attack per image), and basically all algorithms can achieve a very high attack success rate (almost 100%). Now we examine a much harder case, where we reduce the number of allowed queries per attack per image to only 50,000. Tables 9 and 10 present our experimental results for L2 norm and L∞ norm based black-box attacks respectively. We still set ε = 5 for the L2 case and ε = 0.05 for the L∞ case.
For the L2 norm case, the ZOO method barely succeeds due to the strict query limit of 50,000, while it typically requires over 10^6 queries to attack successfully. Our proposed Frank-Wolfe black-box attacks, on the other hand, achieve nearly 60% attack success rate under such a stringent query budget. For the L∞ norm case, both the NES-PGD method and ours achieve over 90% success rate. Even though they share similar average attack time and average number of queries needed, our Frank-Wolfe based methods still achieve the best results in terms of all three evaluation metrics.
Figure 3 illustrates the attack success rate versus the number of queries for different algorithms in both L2 norm and L∞ norm based black-box attacks on the ResNet V2 model. Note that here we have a query limit of 50,000, which is especially hard for the L2 norm case. As we can see from Figure 3, our proposed Frank-Wolfe black-box attack algorithm (both options) achieves the best performance (highest attack success rate and fewest queries needed to reach the same success rate).
D.3 VISUALIZATION EXAMPLES
For completeness, we also provide some visual illustrations of the adversarial examples generated by various algorithms. Figure 4 shows adversarial examples generated by different L2 norm based white-box attacks. Figure 5 shows adversarial examples generated by different L∞ norm based black-box attacks. | 1. What are the motivations behind using Frank-Wolfe inspired method?
2. Why did the authors choose not to compare their method with simple projected gradient method?
3. What is the difference between the result of Theorem 4.3 and the result from Lacoste-Julien 2016?
4. How were the hyperparameters of CW and EAD set in the experiment?
5. Is there any new result in the paper except for Theorem 4.7?
6. What is the purpose of working with $y$ instead of $y_{tar}$?
7. Is there any missing expectation on g(x_a) in Theorem 4.7?
8. Are there any minor comments or suggestions for improving the paper's clarity? | Review | Review
This paper provides a method for producing adversarial attacks using a Frank-Wolfe-inspired algorithm.
I have some concerns about the motivation of this method:
- What are the motivations to use Frank-Wolfe? Usually this algorithm is used when the constraints are too complicated to have a tractable projection (which is not the case for the L_2 and L_\infty balls) or when one wants to have sparse iterates, which does not seem to be the case here.
- Consequently, why did you not compare against the simple projected gradient method? (BIM) is not equivalent to the projected gradient method since the direction chosen is the sign of the gradient and not the gradient itself (the first iteration is actually equivalent because we start at the center of the box, but afterwards the two methods are no longer equivalent).
- There is no motivation for the use of $\lambda > 1$, neither practical nor theoretical, since the results are only proven for $\lambda = 1$ whereas the experiments are done with $\lambda = 5$, 20, or 30.
- What is the difference between the result of Theorem 4.3 and the result from (Lacoste-Julien 2016)?
Depending on the answer to these questions I'm planning to move up or down my grade.
In the experiments there are no details on how you set the hyperparameters of CW and EAD. They use a penalized formulation instead of a constrained one. Consequently, the regularization hyperparameters have to be set differently.
The only new result seems to be Theorem 4.7, which is a natural extension of Theorem 4.3 to zeroth-order methods.
Comment:
- in the whole paper there is $y$ which is not defined. I guess it is the $y_{tar}$ fixed in the problem formulation in Sec 3.2. I don't see why there is a need to work with any $y$. If that is the case, Assumption 4.5 does not make any sense since $y = y_{tar}$ (we just need to note $\|\nabla f(0,y_{tar})\| = C_g$) and some notation could be simplified by setting for instance $f(x,y_{tar}) = f(x)$.
- In Theorem 4.7 an expectation on g(x_a) is missing
Minor comments:
- Sec 3.1 theta_i -> x_i
- Sec 3.3 the argmin is a set, then it is LMO $\in$ argmin.
===== After rebuttal ======
The authors answered some of my questions but I still think it is a borderline submission. |
ICLR | Title
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks
Abstract
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates. However, they usually suffer from poor time and query complexities, thereby limiting their practical usefulness. In this work, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on the non-convex Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/√T) convergence rate. The empirical results of attacking the Inception V3 model and the ResNet V2 model on the ImageNet dataset also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the highest attack success rate in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state-of-the-art.
1 INTRODUCTION
Deep Neural Networks (DNNs) have made many breakthroughs in different areas of artificial intelligence such as image classification (Krizhevsky et al., 2012; He et al., 2016a), object detection (Ren et al., 2015; Girshick, 2015), and speech recognition (Mohamed et al., 2012; Bahdanau et al., 2016). However, recent studies show that deep neural networks can be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015) – a tiny perturbation on an image that is almost invisible to human eyes could mislead a well-trained image classifier towards misclassification. Soon later this is proved to be not a coincidence: similar phenomena have been observed in other problems such as speech recognition (Carlini et al., 2016), visual QA (Xu et al., 2017), image captioning (Chen et al., 2017a), machine translation (Cheng et al., 2018), reinforcement learning (Pattanaik et al., 2018), and even on systems that operate in the physical world (Kurakin et al., 2016).
Depending on how much information an adversary can access, adversarial attacks can be classified into two classes: white-box attacks (Szegedy et al., 2013; Goodfellow et al., 2015) and black-box attacks (Papernot et al., 2016a; Chen et al., 2017c). In the white-box setting, the adversary has full access to the target model, while in the black-box setting, the adversary can only access the input and output of the target model but not its internal configurations. Among the approaches proposed for white-box and black-box attacks, optimization-based methods (Carlini & Wagner, 2017; Chen et al., 2017b;c; Ilyas et al., 2018) are the most effective: they usually achieve relatively low distortions and high attack success rates. However, these methods are far from efficient. In the white-box setting, they need to solve constrained optimization problems (Carlini & Wagner, 2017), and are usually significantly slower than the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) or Iterative FGSM (I-FGM) (Kurakin et al., 2016). Applying those methods to one or two examples is fine, yet in the case of attacking hundreds of thousands of examples, e.g., in adversarial training (Kurakin et al., 2016; Madry et al., 2018), this is far from satisfactory.
In the black-box setting, the problem becomes even more severe since these methods need to make gradient estimations (Chen et al., 2017c). Therefore, a large number of queries are needed to perform a successful attack, especially when the data dimension is large. For example, attacking a 299×299×3 ImageNet image may take hundreds of thousands of queries. This significantly limits their practical usefulness since they can be easily defeated by limiting the number of queries that an adversary can make to the target model.
In this work, we aim to examine the following question:
Can we improve the efficiency of the optimization-based attack algorithms? In other words, can we use less time and queries to conduct adversarial attacks?
In this work, we provide an affirmative answer to this question by proposing an efficient Frank-Wolfe optimization framework for both white-box and black-box attacks. In summary, we make the following main contributions:
• We propose a novel Frank-Wolfe based adversarial attack framework. The white-box attack algorithm is an iterative first-order method which admits the fast gradient sign method (FGSM) as a one-step special case. The corresponding black-box attack algorithm adopts zeroth-order optimization with two sensing vector options (either from the Euclidean unit sphere or from the standard Gaussian distribution).
• We show that the proposed white-box and black-box attack algorithms enjoy an O(1/√T) convergence rate. We also show that the query complexity of the proposed black-box attack algorithm is linear in the data dimension d.
• Our empirical results on attacking the Inception V3 model with the ImageNet dataset show that (i) the proposed white-box attack algorithm is more efficient than all the baseline white-box algorithms evaluated here, and (ii) the proposed black-box attack algorithm is highly efficient and is also the only algorithm that achieves a 100% attack success rate.
2 RELATED WORK
There is a large body of work on adversarial attacks. In this section, we review the most relevant work in both white-box and black-box attack settings, as well as the non-convex Frank-Wolfe optimization.
White-box Attacks: Szegedy et al. (2013) proposed to use box-constrained L-BFGS algorithm for conducting white-box attacks. Goodfellow et al. (2015) proposed the Fast Gradient Sign Method (FGSM) based on linearization of the network as a simple alternative to L-BFGS. Kurakin et al. (2016) proposed to iteratively perform one-step FGSM (Goodfellow et al., 2015) algorithm and clips the adversarial point back to the distortion limit after every iteration. It is called Basic Iterative Method (BIM) or I-FGM in the literature. Madry et al. (2018) showed that for the L∞ norm case, BIM/I-FGM is equivalent to Projected Gradient Descent (PGD), which is a standard tool for constrained optimization. Papernot et al. (2016b) proposed JSMA to greedily attack the most significant pixel based on the Jacobian-based saliency map. Moosavi-Dezfooli et al. (2016) proposed attack methods by projecting the data to the closest separating hyperplane. Carlini & Wagner (2017) introduced the so-called CW attack by proposing multiple new loss functions for generating adversarial examples. Chen et al. (2017b) followed CW’s framework and use an Elastic Net term as the distortion penalty.
Black-box Attacks: One popular family of black-box attacks (Hu & Tan, 2017; Papernot et al., 2016a; 2017) is based on the transferability of adversarial examples (Liu et al., 2018; Bhagoji et al., 2017), where an adversarial example generated for one DNN may be reused to attack other neural networks. This allows the adversary to construct a substitute model that mimics the targeted DNN, and then attack the constructed substitute model using white-box attack methods. However, this type of attack algorithms usually suffer from large distortions and relatively low success rates (Chen et al., 2017c). To address this issue, Chen et al. (2017c) proposed the Zeroth-Order Optimization (ZOO) algorithm that extends the CW attack to the black-box setting and uses a zeroth-order optimization approach to conduct the attack. Although ZOO achieves much higher attack success rates than the substitute model-based black-box attacks, it suffers from a poor query complexity since its naive implementation requires to estimate the gradients of all the coordinates (pixels) of the image. To improve its query complexity, several approaches have been proposed. For example, Tu et al. (2018) introduces an adaptive random gradient estimation algorithm and a well-trained Autoencoder to speed up the attack process. Ilyas et al. (2018) and Liu et al. (2018) improved ZOO’s query complexity by using Natural Evolutionary Strategies (NES) (Wierstra et al., 2014; Salimans et al., 2017) and active learning, respectively.
Non-convex Frank-Wolfe Algorithms: The Frank-Wolfe algorithm (Frank & Wolfe, 1956), also known as the conditional gradient method, is an iterative optimization method for constrained optimization problems. Jaggi (2013) revisited the Frank-Wolfe algorithm in 2013 and provided a stronger and more general convergence analysis in the convex setting. Yu et al. (2017) proved the first convergence rate for a Frank-Wolfe type algorithm in the non-convex setting. Lacoste-Julien (2016) provided the convergence guarantee for the Frank-Wolfe algorithm in the non-convex setting with adaptive step sizes. Reddi et al. (2016) further studied the convergence rate of the non-convex stochastic Frank-Wolfe algorithm in the finite-sum optimization setting. Very recently, Staib & Jegelka (2017) proposed to use Frank-Wolfe for distributionally robust training (Sinha et al., 2018). Balasubramanian & Ghadimi (2018) proved the convergence rate for a zeroth-order nonconvex Frank-Wolfe algorithm using a one-sided finite difference gradient estimator with standard Gaussian sensing vectors.
3 METHODOLOGY
3.1 NOTATIONS
Throughout the paper, scalars are denoted by lower case letters, vectors by lower case bold face letters, and sets by calligraphic upper case letters. For a vector x ∈ R^d, we denote the L_p norm of x by ‖x‖_p = (Σ_{i=1}^d |x_i|^p)^{1/p}. In particular, for p = ∞, the L_∞ norm of x is ‖x‖_∞ = max_{i=1,...,d} |x_i|. We denote by P_X(x) the projection of the vector x onto the set X.
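To make P_X concrete for the ε-balls used in the attacks below, a minimal NumPy sketch of the two projections is given here; the function names and signatures are illustrative rather than part of the method.

```python
import numpy as np

def project_linf(x, x_ori, eps):
    """Project x onto the L-infinity ball of radius eps around x_ori."""
    return x_ori + np.clip(x - x_ori, -eps, eps)

def project_l2(x, x_ori, eps):
    """Project x onto the L2 ball of radius eps around x_ori."""
    delta = x - x_ori
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta = delta * (eps / norm)
    return x_ori + delta
```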
3.2 PROBLEM FORMULATION
According to the attack purposes, attacks can be divided into two categories: untargeted attack and targeted attack. In particular, untargeted attack aims to turn the prediction into any incorrect label, while the targeted attack, which is considerably harder, requires to mislead the classifier to a specific target class. In this work, we follow the literature (Carlini & Wagner, 2017; Ilyas et al., 2018) and focus on the strictly harder targeted attack setting. It is worth noting that our proposed algorithm can be extended to untargeted attack straightforwardly.
Let us define f(·) as the classification loss function of the targeted DNN. For targeted attacks, we aim to learn an adversarial example x that is close enough to the original input x_ori and can be misclassified to the target class y_tar. The corresponding optimization problem¹ is defined as

min_x f(x, y_tar)   subject to   ‖x − x_ori‖_p ≤ ε.   (3.1)

Evidently, the constraint set X := {x | ‖x − x_ori‖_p ≤ ε} is a bounded convex set when p ≥ 1. Normally, p = 2 and p = ∞ are used to measure the distortion ‖x − x_ori‖_p, resulting in the L2 attack model and the L∞ attack model respectively. In this work, we study both attack models. In the sequel, since we mainly focus on the targeted attack case, we use f(x) to denote f(x, y_tar) for simplicity.
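The loss f is left generic here; a common concrete choice consistent with (3.1), sketched below purely for illustration, is the cross-entropy of the model's logits with respect to the target class (the helper name and the assumption that logits for a single image are available are illustrative).

```python
import numpy as np

def targeted_loss(logits, y_tar):
    """Cross-entropy of the model logits with respect to the target class y_tar.

    Minimizing this pushes the prediction toward y_tar, matching the role of
    f(x, y_tar) in (3.1).
    """
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())   # log-softmax
    return -log_probs[y_tar]
```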
3.3 FRANK-WOLFE WHITE-BOX ATTACKS
Frank-Wolfe algorithm (Frank & Wolfe, 1956), also known as conditional gradient descent, is a popular optimization tool for constrained optimization. Different from PGD, which first performs gradient descent followed by a projection step at each iteration, the Frank-Wolfe algorithm calls a Linear Minimization Oracle (LMO) over the constraint set X at each iteration, i.e.,

LMO ∈ argmin_{v ∈ X} 〈v, ∇f(x_t)〉.
The LMO can be seen as the minimization of the first-order Taylor expansion of f(·) at the point x_t:

min_{v ∈ X} f(x_t) + 〈v − x_t, ∇f(x_t)〉.
By calling the LMO, Frank-Wolfe solves this linear problem over X and then performs a weighted average with the previous iterate to obtain the final update.
We present our proposed Frank-Wolfe white-box attack algorithm in Algorithm 1, which is built upon the original Frank-Wolfe algorithm. The key difference between Algorithm 1 and the standard Frank-Wolfe algorithm is in Line 4, where the LMO is called over a slightly relaxed constraint set
1Note that there is usually an additional constraint on the input variable x, e.g., x ∈ [0, 1]n for normalized image inputs.
Xλ := {x | ‖x − x_ori‖_p ≤ λε} with λ ≥ 1, instead of the original constraint set X. When λ = 1, the set Xλ reduces to X, and Algorithm 1 reduces to the standard Frank-Wolfe algorithm. We argue that this modification makes our algorithm more general, and gives rise to better attack results.

Algorithm 1 Frank-Wolfe White-box Attack Algorithm
1: input: number of iterations T, step sizes {γ_t}, λ > 0, original image x_ori;
2: x_0 = x_ori
3: for t = 0, . . . , T − 1 do
4:   v_t = argmin_{v ∈ Xλ} 〈v, ∇f(x_t)〉   // LMO
5:   d_t = v_t − x_t
6:   x_{t+1} = x_t + γ_t d_t
7:   if λ > 1 then
8:     x_{t+1} = P_X(x_{t+1})
9:   end if
10: end for
11: output: x_T
The LMO solution itself can be expensive to obtain in general. Fortunately, applying Frank-Wolfe to solve (3.1) actually gives us a closed-form LMO solution. We provide the solutions of LMO (Line 4 in Algorithm 1) for L2 norm and L∞ norm cases respectively:
v_t = −λε · ∇f(x_t) / ‖∇f(x_t)‖_2 + x_ori,   (L2 norm)

v_t = −λε · sign(∇f(x_t)) + x_ori.   (L∞ norm)
The derivation can be found in the supplemental materials.
Note that when T = 1 and λ = 1, substituting the above LMO solution into Algorithm 1 yields the final update x_1 = x_0 − γ_t ε · sign(∇f(x_0)), which reduces to FGSM² when γ_t = 1. A similar derivation also applies to the L2 norm case. Therefore, just like PGD, our proposed Frank-Wolfe white-box attack also includes FGSM (FGM) as a one-step special instance.
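Putting the closed-form LMO and Algorithm 1 together, a minimal NumPy sketch of the L∞ white-box attack loop is given below; grad_fn (returning ∇f(x)), the step size, and the other constants are illustrative assumptions rather than the exact settings used in the experiments.

```python
import numpy as np

def fw_whitebox_linf(x_ori, grad_fn, eps, T=20, lam=5.0, gamma=0.8):
    """Sketch of Algorithm 1 for the L-infinity constraint."""
    x = x_ori.copy()
    for _ in range(T):
        g = grad_fn(x)
        v = x_ori - lam * eps * np.sign(g)       # closed-form LMO over the relaxed set X_lambda
        x = x + gamma * (v - x)                  # Frank-Wolfe averaging step
        if lam > 1.0:                            # project back onto X when the LMO overshoots
            x = x_ori + np.clip(x - x_ori, -eps, eps)
        x = np.clip(x, 0.0, 1.0)                 # box constraint for normalized image inputs
    return x
```

With lam = 1, T = 1 and gamma = 1, this collapses to the FGSM special case discussed above.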
3.4 FRANK-WOLFE BLACK-BOX ATTACKS
Next we consider the black-box setting, where we cannot perform back-propagation to calculate the gradient of the loss function anymore. Instead, we can only query the DNN system’s outputs with specific inputs. To clarify, here the output refers to the logit layer’s output (confidence scores for classification), not the final prediction label. The label-only setting is doable under our framework, but will incur extra difficulty such as designing new loss functions. For simplicity, here we consider the confidence score output.
We propose a zeroth-order Frank-Wolfe based algorithm to solve this problem. Algorithm 2 shows our proposed Frank-Wolfe black-box attack algorithm. The key difference between our proposed black-box attack and the white-box attack is the extra gradient estimation step, which is presented in Line 4 of Algorithm 2. Also note that for the final output, we provide two options. While Option II is the common choice in practice, Option I is provided for the ease of theoretical analysis.
Like many other zeroth-order optimization algorithms (Shamir, 2017; Flaxman et al., 2005), Algorithm 3 uses symmetric finite differences to estimate the gradient and therefore gets rid of the dependence on back-propagation available in the white-box setting. Different from Chen et al. (2017c), here we do not use the natural basis as our sensing vectors; instead, we provide two options: one is to use vectors uniformly sampled from the Euclidean unit sphere and the other is to use vectors sampled from the standard multivariate Gaussian distribution. This greatly improves the gradient estimation efficiency compared to sensing with the natural basis, since the latter can only estimate one coordinate of the gradient vector per query. In practice, both options provide us with competitive experimental results. It is worth noting that the NES method (Wierstra et al., 2014) with antithetic sampling (Salimans et al., 2017) used in Ilyas et al. (2018) yields a similar formula to our Option II in Algorithm 3.
2The extra clipping operation in FGSM is to project to the additional box constraint for image classification task. We will also need this clipping operation at the end of each iteration for specific tasks such as image classification.
Algorithm 2 Frank-Wolfe Black-box Attack Algorithm
1: input: number of iterations T, step sizes {γ_t}, λ > 0, original image x_ori, target label y_tar;
2: x_0 = x_ori
3: for t = 0, . . . , T − 1 do
4:   q_t = ZERO_ORD_GRAD_EST(x_t)   // Algorithm 3
5:   v_t = argmin_{v ∈ Xλ} 〈v, q_t〉
6:   d_t = v_t − x_t
7:   x_{t+1} = x_t + γ_t d_t
8:   if λ > 1 then
9:     x_{t+1} = P_X(x_{t+1})
10:  end if
11: end for
12: Option I: x_a is chosen uniformly at random from {x_t}_{t=1}^T
13: Option II: x_a = x_T
14: output: x_a

Algorithm 3 Zeroth-Order Gradient Estimation (ZERO_ORD_GRAD_EST)
1: parameters: number of gradient estimation samples b, sampling parameter δ_t;
2: q = 0
3: for i = 1, . . . , b do
4:   Option I: Sample u_i uniformly from the Euclidean unit sphere with ‖u_i‖_2 = 1, and set
     q = q + d/(2δ_t b) · (f(x_t + δ_t u_i) − f(x_t − δ_t u_i)) u_i
5:   Option II: Sample u_i uniformly from the standard Gaussian distribution N(0, I), and set
     q = q + 1/(2δ_t b) · (f(x_t + δ_t u_i) − f(x_t − δ_t u_i)) u_i
6: end for
7: output: q
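For concreteness, a minimal NumPy sketch of Option I of Algorithm 3 is shown below; loss_fn stands for a query to the target model returning f(x), and the default values of b and delta are illustrative.

```python
import numpy as np

def zero_order_grad_est(x, loss_fn, b=25, delta=1e-3):
    """Option I of Algorithm 3: sphere sampling with symmetric finite differences."""
    d = x.size
    q = np.zeros_like(x)
    for _ in range(b):
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)                   # uniform direction on the Euclidean unit sphere
        diff = loss_fn(x + delta * u) - loss_fn(x - delta * u)
        q += (d / (2.0 * delta * b)) * diff * u
    return q
```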
4 MAIN THEORY
In this section, we establish the convergence guarantees for our proposed Frank-Wolfe adversarial attack algorithms described in Section 3. First, we introduce the convergence criterion for our Frank-Wolfe adversarial attack framework.
4.1 CONVERGENCE CRITERION
The loss functions of common DNN models are generally nonconvex. In addition, (3.1) is a constrained optimization problem. For such general nonconvex constrained optimization, we typically adopt the Frank-Wolfe gap as the convergence criterion (since the gradient norm of f is no longer a proper criterion for constrained optimization problems):

g(x_t) = max_{x ∈ X} 〈x − x_t, −∇f(x_t)〉.

Note that for the Frank-Wolfe gap, we always have g(x_t) ≥ 0, and x_t is a stationary point of the constrained optimization problem if and only if g(x_t) = 0. The Frank-Wolfe gap is also affine invariant and not tied to any specific choice of norm, which makes it a natural convergence criterion for Frank-Wolfe based algorithms.
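Because X is simply the ε-ball around x_ori, the inner maximization defining g(x_t) has a closed form, so the gap can be monitored cheaply during an attack; the following L∞ sketch is an illustration and not part of the evaluation protocol.

```python
import numpy as np

def fw_gap_linf(x, x_ori, grad, eps):
    """Frank-Wolfe gap g(x) = max_{x' in X} <x' - x, -grad> over the L-infinity ball."""
    # the maximizer is x' = x_ori - eps * sign(grad), giving eps*||grad||_1 plus a linear term
    return eps * np.abs(grad).sum() + np.dot((x_ori - x).ravel(), -grad.ravel())
```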
4.2 CONVERGENCE GUARANTEE FOR FRANK-WOLFE WHITE-BOX ATTACK
Before providing the convergence guarantee of the Frank-Wolfe white-box attack (Algorithm 1), we introduce the following assumptions, which are essential to the convergence analysis.
Assumption 4.1. Function f(·) is L-smooth with respect to x, i.e., for any x, x′, it holds that

f(x′) ≤ f(x) + ∇f(x)^T (x′ − x) + (L/2) ‖x′ − x‖_2^2.
Assumption 4.1 is a standard assumption in nonconvex optimization, and is also adopted in other Frank-Wolfe literature such as Lacoste-Julien (2016); Reddi et al. (2016). Note that even though the smoothness assumption does not hold for general DNN models, a recent study (Santurkar et al., 2018) shows that batch normalization that is used in many modern DNNs such as Inception V3
model, actually makes the optimization landscape significantly smoother 3. This justifies the validity of Assumption 4.1.
Assumption 4.2. Set X is bounded with diameter D, i.e., ‖x− x′‖2 ≤ D for all x,x′ ∈ X .
Assumption 4.2 implies that the input space is bounded. For common tasks such as image classification, given that images have a bounded pixel range and ε is a small constant, this assumption trivially holds.
Now we present the theorem, which characterizes the convergence rate of our proposed Frank-Wolfe white-box adversarial attack algorithm presented in Algorithm 1.
Theorem 4.3. Under Assumptions 4.1 and 4.2, let γ_t = γ = √( 2(f(x_0) − f(x*)) / (LD^2 T) ) and denote g̃_T = min_{1≤k≤T} g(x_k), where {x_k}_{k=1}^T are the iterates of Algorithm 1 with λ = 1. Then we have

g̃_T ≤ √( LD^2 (f(x_0) − f(x*)) / (2T) ),

where x* is the optimal solution to (3.1).
Remark 4.4. Theorem 4.3 suggests that our proposed Frank-Wolfe white-box attack algorithm achieves an O(1/√T) rate of convergence. Note that a similar result has been proven in Lacoste-Julien (2016) under a different choice of step size.
4.3 CONVERGENCE GUARANTEE FOR FRANK-WOLFE BLACK-BOX ATTACK
Next we analyze the convergence of our proposed Frank-Wolfe black-box adversarial attack algorithm presented in Algorithm 2.
In order to prove the convergence of our proposed Frank-Wolfe black-box attack algorithm, we need the following additional assumption that ‖∇f(0)‖2 is bounded. Assumption 4.5. Gradient of f(·) at zero point∇f(0) satisfies maxy ‖∇f(0)‖2 ≤ Cg .
Following the analysis in Shamir (2017), let fδ(x) = Eu[f(x+δu)], which is the smoothed version of f(x). This smoothed function value plays a central role in our theoretical analysis, since it bridges the finite difference gradient approximation with the actual gradient. The following lemma shows this relationship.
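For readers unfamiliar with this smoothing construction, the identity that connects f_δ to the estimator in Algorithm 3 (it is the content of Lemma 4.1(a) in Gao et al. (2018), invoked in the proof of Lemma 4.6) is, for u drawn uniformly from the Euclidean unit sphere in R^d,

∇f_δ(x) = (d/δ) · E_u[ f(x + δu) u ],

so the symmetric finite-difference estimator q_t is unbiased for ∇f_δ(x_t) rather than for ∇f(x_t) itself.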
Lemma 4.6. For the gradient estimator q_t in Algorithm 3, its expectation and variance satisfy

E[q_t] = ∇f_δ(x_t),   E‖q_t − E[q_t]‖_2^2 ≤ (1/b) · ( 2d (C_g + LD)^2 + (1/2) δ_t^2 L^2 d^2 ).
Now we present the theorem which characterizes the convergence rate of Algorithm 2.
Theorem 4.7. Under Assumptions 4.1, 4.2 and 4.5, let γ_t = γ = √( 2(f(x_0) − f(x*)) / (LD^2 T) ), b = Td and δ_t = √( 2 / (T d^2) ). Suppose we use Option I in Algorithm 2 and Option II in Algorithm 3. Then the output x_a of Algorithm 2 with λ = 1 satisfies

E[g(x_a)] ≤ D / √(2T) · ( √( L (f(x_0) − f(x*)) ) + 2 (L + C_g + LD) ),

where x* is the optimal solution to (3.1).
Remark 4.8. Theorem 4.7 suggests that Algorithm 2 also enjoys an O(1/√T) rate of convergence. In terms of query complexity, the total number of queries needed is Tb = T^2 d, which is linear in the data dimension d. In fact, in the experiments we observed that this number can be substantially smaller than d, e.g., b = 25, which is much lower than the theorem suggests. Note that although we only prove the result for Option I in Algorithm 3, it can be readily extended to Option II (the Gaussian sensing vector case).
3The original argument in Santurkar et al. (2018) refers to the smoothness with respect to each layer’s parameters. Note that the first layer’s parameters are in the mirror position (in terms of backpropagation) as the network inputs. Therefore, the argument in Santurkar et al. (2018) can also be applied here with respect to the network inputs.
5 EXPERIMENTS
In this section, we present the experimental results for our proposed Frank-Wolfe attack framework against other state-of-the-art adversarial attack algorithms in both white-box and black-box settings. All of our experiments are conducted on Amazon AWS p3.2xlarge servers which come with Intel Xeon E5 CPU and one NVIDIA Tesla V100 GPU (16G RAM). All experiments are implemented in Tensorflow platform version 1.10.0 within Python 3.6.4.
5.1 EVALUATION SETUP AND METRICS
We test the attack effectiveness of all algorithms by evaluating on a pre-trained Inception V3 model (Szegedy et al., 2016) and a ResNet V2 50 (He et al., 2016b) model that are trained on ImageNet dataset (Deng et al., 2009). The pre-trained Inception V3 model is reported to have a 78.0% top-1 accuracy and a 93.9% top-5 accuracy. The pre-trained ResNet V2 model is reported to have a 75.6% top-1 and a 92.8% top-5 accuracy. We randomly choose 500 images from the ImageNet validation set that are verified to be correctly classified by the pre-trained model and also randomly choose a target class for each image. Each image has a dimension of 299 × 299 × 3 and we test all attack algorithms through the same randomly chosen data samples and target labels.
We test both L2 norm based and L∞ norm based attacks. In the white-box setting, we perform binary search / grid search for the best distortion parameter (ε in our formulation and c in CW's regularized formulation). In the black-box setting, we set ε = 5 for the L2 norm based attack and ε = 0.05 for the L∞ based attack. For white-box attacks, we restrict each method to a maximum of 1,000 iterations per attack. For black-box attacks, we set a maximum query limit of 500,000 per attack per image for each method.
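The search procedure itself is not spelled out above; a typical binary search over the distortion bound, assumed here purely for illustration, proceeds as follows.

```python
def binary_search_eps(attack_fn, is_adversarial, eps_lo=0.0, eps_hi=1.0, steps=10):
    """Approximate the smallest eps for which attack_fn(eps) succeeds."""
    best = None
    for _ in range(steps):
        eps = 0.5 * (eps_lo + eps_hi)
        x_adv = attack_fn(eps)             # run the white-box attack with budget eps
        if is_adversarial(x_adv):
            best, eps_hi = x_adv, eps      # success: try a tighter budget
        else:
            eps_lo = eps                   # failure: loosen the budget
    return best
```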
For all algorithms, we stop the algorithm when a successful attack is found. For our proposed blackbox attack, we use option II in Algorithm 2 and test both options in Algorithm 3. We set the number of gradient estimation samples b = 25 for Algorithm 2. More detailed description on parameter settings can be found in the supplemental materials.
We evaluate the final performance through attack success rate where the success is defined as making the classifier output the exact target class label (not any incorrect labels). We also measure average attack time per image, average distortion (only on successful attacked samples) and average number of queries needed (only for black-box attack) per image. For a fair time comparison, even though some of the algorithms including ours can be written in batch form (attack multiple images at one time), all algorithms are set to attack one image at a time.
Due to page limit, we leave all experimental results on ResNet V2 model in the supplemental materials.
5.2 BASELINE METHODS
We compare the proposed algorithms with several state-of-the-art baseline algorithms. Specifically, we compare the proposed white-box attack algorithm with 4 (i) PGD (Madry et al., 2018) (which is essentially I-FGM (Kurakin et al., 2016)), (ii) CW attack (Carlini & Wagner, 2017) and (iii) EAD attack (Chen et al., 2017b). We compare the proposed black-box attack algorithm with (i) ZOO attack (Chen et al., 2017c) and (ii) NES-PGD attack (Ilyas et al., 2018).
5.3 WHITE-BOX ATTACK EXPERIMENTS
In this subsection, we present the white-box attack experiments on the Inception V3 model. Tables 1 and 2 present our experimental results for L2 norm and L∞ norm based white-box attacks respectively. As we can observe from the tables, the attack success rate is 100% for every method. Among the baselines in the L2 norm case, the CW method achieves the smallest average distortion, yet it comes with an expensive time cost. The EAD method has neither a time advantage nor a distortion advantage in this experiment, probably due to its different motivation in attacking. PGD has moderate average distortion, yet it also costs quite some time to finish the attack. On the other hand, our proposed algorithm achieves the shortest attack time with moderate distortion. It significantly reduces the time complexity needed for attacking data with large dimensionality. For the L∞ norm case, the CW method takes significantly longer and does not perform very well on average distortion either.
4We did not compare with FGM (FGSM) (Goodfellow et al., 2015) since it basically has zero success rate for targeted attack on Inception V3 or ResNet V2 models.
This is largely because the original CW attack was designed for the L2 norm; in order to apply it to L∞ norm attacks, a special design is needed, which sacrifices its performance in terms of runtime. Again, our proposed white-box attack algorithm achieves the shortest average attack time and a moderate average distortion.
In Figure 1, we also examine the effect of λ in our proposed Frank-Wolfe white-box attack algorithm. We plot the objective loss function value of attacking one example against the number of iterations for both L2 and L∞ based white-box attack on Inception V3 model. From the plot, we can see that larger λ indeed leads to faster convergence.
5.4 BLACK-BOX ATTACK EXPERIMENTS
In this subsection, we present the black-box attack experiments on Inception V3 model. For blackbox attacks, attack success rate, time and number of queries needed are more meaningful evaluation metrics than distortion distances. Therefore, we omit all the grid search / binary search steps that are used in the white-box setting since extra time / queries are needed for finding parameters that can obtain better distortion distances.
Tables 3 and 4 present our experimental results for L2 norm and L∞ norm based black-box attacks respectively. For the ZOO method, note that it only has an L2 norm version; it follows CW's framework and thus uses a different loss function and problem formulation (it cannot exactly control the adversarial example to be within the distortion limit, so we manage to keep the average distortion around ε for ZOO, while the other methods have average distortions very close to ε). Furthermore, we can observe that ZOO is quite slow in this task. Attacking a single image can take up to 2 hours for ZOO, and it is only able to achieve a 74.8% success rate (compared with the 88.9% success rate in the original paper; we think the main reason is that the query limit here is only half of the query limit in the original paper). The NES-PGD method, while greatly improving on ZOO's performance, still cannot achieve a 100% success rate in either attack model and takes relatively more time and queries. In sharp contrast, our proposed Frank-Wolfe black-box attacks (both Option I and Option II) achieve the highest success rate in both L2 norm and L∞ norm based black-box attacks and further largely improve the attack efficiency.
Figure 2 illustrates the attack success rate against the number of queries plot for different algorithms in both L2 norm and L∞ norm based black-box attacks on Inception V3 model. As we can see from the plot, our proposed Frank-Wolfe black-box attack algorithm (both options) achieves the highest attack success rate and best efficiency (least queries needed for achieving the same success rate), especially in the L2 norm case.
6 CONCLUSIONS
In this work, we propose a Frank-Wolfe framework for efficient and effective adversarial attacks. Our proposed white-box and black-box attack algorithms enjoy an O(1/ √ T ) rate of convergence, and the query complexity of the proposed black-box attack algorithm is linear in data dimension d. Finally, our empirical study on attacking Inception V3 model with ImageNet dataset yields a 100% attack success rate for our proposed algorithms, even in the setting of black-box attack.
A LINEAR MINIMIZATION ORACLE (LMO) SOLUTIONS
Denote u = (v − x_ori)/(λε). The linear minimization problem can be written as

min_{‖v − x_ori‖_p ≤ λε} 〈v, ∇f(x_t)〉 = 〈x_ori, ∇f(x_t)〉 + min_{‖u‖_p ≤ 1} λε · 〈u, ∇f(x_t)〉,

and the minimizing u satisfies

〈u, −∇f(x_t)〉 = max_{‖u‖_p ≤ 1} 〈u, −∇f(x_t)〉 = ‖∇f(x_t)‖_{p*},

where ‖·‖_{p*} denotes the dual norm of ‖·‖_p. For the p = 2 case, we have

〈(v − x_ori)/(λε), −∇f(x_t)〉 = ‖∇f(x_t)‖_2,

which immediately implies that

v = −λε · ∇f(x_t) / ‖∇f(x_t)‖_2 + x_ori.

For the p = ∞ case, we have

〈(v − x_ori)/(λε), −∇f(x_t)〉 = ‖∇f(x_t)‖_1,

which immediately implies that

v = −λε · sign(∇f(x_t)) + x_ori.

For ease of comparison, we also show the full update formula (before the final projection step) of our algorithm. In detail, for the p = ∞ case, our algorithm takes the following update:

x_{t+1} = (1 − γ_t) x_t + γ_t v_t = (1 − γ_t) x_t − λεγ_t · sign(∇f(x_t)) + γ_t x_ori = x_t − λεγ_t · sign(∇f(x_t)) − γ_t (x_t − x_ori),

and for the p = 2 case, it takes

x_{t+1} = x_t − λεγ_t · ∇f(x_t) / ‖∇f(x_t)‖_2 − γ_t (x_t − x_ori).
Compared with PGD, the full update (before final projection step) of Frank-Wolfe white-box attack includes an extra parameter λ before the normalized gradient, as well as an extra term (xt − xori). This difference makes the behavior of Frank-Wolfe based attacks different from that of PGD based attacks.
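To make the comparison concrete, a side-by-side NumPy sketch of the two L∞ updates (before any final projection or box clipping) is given below; grad_fn and the constants alpha, lam and gamma are illustrative assumptions.

```python
import numpy as np

def pgd_step_linf(x, x_ori, grad_fn, eps, alpha=0.01):
    """One PGD / I-FGM step: signed-gradient move, then clip back to the eps-ball."""
    x = x - alpha * np.sign(grad_fn(x))
    return x_ori + np.clip(x - x_ori, -eps, eps)

def fw_step_linf(x, x_ori, grad_fn, eps, lam=1.0, gamma=0.8):
    """One Frank-Wolfe step: signed-gradient move plus a pull back toward x_ori."""
    return x - lam * gamma * eps * np.sign(grad_fn(x)) - gamma * (x - x_ori)
```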
B PROOF OF THE MAIN THEORY IN SECTION 4
B.1 PROOF OF THEOREM 4.3
Proof. For simplicity, we denote f(x_t, y_tar) by f(x_t) for the rest of the proof. First, by Assumption 4.1, we have

f(x_{t+1}) ≤ f(x_t) + ∇f(x_t)^T (x_{t+1} − x_t) + (L/2) ‖x_{t+1} − x_t‖_2^2
         = f(x_t) + γ ∇f(x_t)^T (v_t − x_t) + (Lγ^2/2) ‖v_t − x_t‖_2^2
         ≤ f(x_t) + γ ∇f(x_t)^T (v_t − x_t) + LD^2 γ^2 / 2,

where the last inequality uses the bounded domain condition in Assumption 4.2. Note that by the definition of the Frank-Wolfe gap, we have

f(x_{t+1}) ≤ f(x_t) − γ g(x_t) + LD^2 γ^2 / 2.

Summing the above inequality over t, we obtain

f(x_T) ≤ f(x_0) − Σ_{k=0}^{T−1} γ g(x_k) + T LD^2 γ^2 / 2 ≤ f(x_0) − γ T g̃_T + T LD^2 γ^2 / 2,

where the second inequality follows from the definition of g̃_T. Note that by optimality we have f(x_T) ≥ f(x*). Rearranging the above inequality, we obtain

g̃_T ≤ (f(x_0) − f(x*)) / (Tγ) + LD^2 γ / 2 ≤ √( LD^2 (f(x_0) − f(x*)) / (2T) ),

where the second inequality is achieved when γ = √( 2(f(x_0) − f(x*)) / (LD^2 T) ).
B.2 PROOF OF LEMMA 4.6
Proof. For simplicity, we denote f(x_t, y_tar) by f(x_t) for the rest of the proof. Let us denote ψ_i = d/(2δ_t b) · (f(x_t + δ_t u_i) − f(x_t − δ_t u_i)) u_i. For the first part, we have

E[ψ_i] = E_u[ d/(2δ_t b) · (f(x_t + δ_t u_i) − f(x_t − δ_t u_i)) u_i ]
       = E_u[ d/(2δ_t b) · f(x_t + δ_t u_i) u_i ] + E_u[ d/(2δ_t b) · f(x_t − δ_t u_i) (−u_i) ]
       = E_u[ d/(δ_t b) · f(x_t + δ_t u_i) u_i ]
       = (1/b) ∇f_δ(x_t),

where the third equality holds due to the symmetric property of u_i and the last equality follows from Lemma 4.1(a) in Gao et al. (2018). Therefore, we have

E[q_t] = E[ Σ_{i=1}^b ψ_i ] = ∇f_δ(x_t).

For the second part, note that the ψ_i are independent from each other due to the independence of the u_i, so we have

E‖q_t − E[q_t]‖_2^2 = E‖ Σ_{i=1}^b (ψ_i − E[ψ_i]) ‖_2^2 = Σ_{i=1}^b E‖ψ_i − E[ψ_i]‖_2^2 ≤ Σ_{i=1}^b E‖ψ_i‖_2^2.

Now take a look at E‖ψ_i‖_2^2:

E‖ψ_i‖_2^2 = E_u‖ d/(2δ_t b) · (f(x_t + δ_t u_i) − f(x_t) + f(x_t) − f(x_t − δ_t u_i)) u_i ‖_2^2
           ≤ 1/(2b^2) · E_u‖ (d/δ_t)(f(x_t + δ_t u_i) − f(x_t)) u_i ‖_2^2 + 1/(2b^2) · E_u‖ (d/δ_t)(f(x_t) − f(x_t − δ_t u_i)) u_i ‖_2^2
           = 1/b^2 · E_u‖ (d/δ_t)(f(x_t + δ_t u_i) − f(x_t)) u_i ‖_2^2
           ≤ 1/b^2 · ( 2d ‖∇f(x_t)‖_2^2 + (1/2) δ_t^2 L^2 d^2 ),

where the first inequality is due to the fact that (a + b)^2 ≤ 2a^2 + 2b^2, the second equality follows from the symmetric property of u_i, and the last inequality is by Lemma 4.1(b) in Gao et al. (2018). Also note that by Assumptions 4.1 and 4.5 we have

‖∇f(x_t)‖_2^2 ≤ ( ‖∇f(0)‖_2 + L ‖x_t‖_2 )^2 ≤ (C_g + LD)^2.

Combining all the above results, we obtain

E‖q_t − E[q_t]‖_2^2 ≤ (1/b) · ( 2d (C_g + LD)^2 + (1/2) δ_t^2 L^2 d^2 ).
B.3 PROOF OF THEOREM 4.7
Proof. For simplicity, we denote f(x_t, y_tar) by f(x_t) for the rest of the proof. First, by Assumption 4.1, we have

f(x_{t+1}) ≤ f(x_t) + ∇f(x_t)^T (x_{t+1} − x_t) + (L/2) ‖x_{t+1} − x_t‖_2^2
         = f(x_t) + γ ∇f(x_t)^T (v_t − x_t) + (Lγ^2/2) ‖v_t − x_t‖_2^2
         ≤ f(x_t) + γ ∇f(x_t)^T (v_t − x_t) + LD^2 γ^2 / 2
         = f(x_t) + γ q_t^T (v_t − x_t) + γ (∇f(x_t) − q_t)^T (v_t − x_t) + LD^2 γ^2 / 2,

where the second inequality uses the bounded domain condition in Assumption 4.2. Now define an auxiliary quantity

v̂_t = argmin_{v ∈ X} 〈v, ∇f(x_t)〉.

According to the definition of g(x_t), this immediately implies g(x_t) = 〈v̂_t − x_t, −∇f(x_t)〉. Then we further have

f(x_{t+1}) ≤ f(x_t) + γ q_t^T (v̂_t − x_t) + γ (∇f(x_t) − q_t)^T (v_t − x_t) + LD^2 γ^2 / 2
          = f(x_t) + γ ∇f(x_t)^T (v̂_t − x_t) + γ (∇f(x_t) − q_t)^T (v_t − v̂_t) + LD^2 γ^2 / 2
          = f(x_t) − γ g(x_t) + γ (∇f(x_t) − q_t)^T (v_t − v̂_t) + LD^2 γ^2 / 2
          ≤ f(x_t) − γ g(x_t) + γ D ‖∇f(x_t) − q_t‖_2 + LD^2 γ^2 / 2,

where the first inequality follows from the optimality of v_t in Algorithm 2 and the last inequality holds due to the Cauchy-Schwarz inequality. Taking expectations on both sides of the above inequality, we have

E[f(x_{t+1})] ≤ E[f(x_t)] − γ E[g(x_t)] + γ D · E‖∇f(x_t) − q_t‖_2 + LD^2 γ^2 / 2
            ≤ E[f(x_t)] − γ E[g(x_t)] + γ D · ( ‖∇f(x_t) − E[q_t]‖_2 + E‖q_t − E[q_t]‖_2 ) + LD^2 γ^2 / 2
            ≤ E[f(x_t)] − γ E[g(x_t)] + γ D · ( ‖∇f(x_t) − E[q_t]‖_2 + √( E‖q_t − E[q_t]‖_2^2 ) ) + LD^2 γ^2 / 2
            ≤ E[f(x_t)] − γ E[g(x_t)] + γ D · ( ‖∇f(x_t) − ∇f_δ(x_t)‖_2 + √( (4d (C_g + LD)^2 + δ_t^2 L^2 d^2) / (2b) ) ) + LD^2 γ^2 / 2
            ≤ E[f(x_t)] − γ E[g(x_t)] + γ D · ( δ_t L d / 2 + (2 √d (C_g + LD) + δ_t L d) / √(2b) ) + LD^2 γ^2 / 2,

where the second inequality follows from the triangle inequality, the third inequality is due to Jensen's inequality, the fourth inequality holds due to Lemma 4.6, and the last inequality uses the bound ‖∇f(x_t) − ∇f_δ(x_t)‖_2 ≤ δ_t L d / 2 together with √(a + b) ≤ √a + √b. Summing the above inequality over t, we obtain

E[f(x_T)] ≤ f(x_0) − Σ_{t=0}^{T−1} γ E[g(x_t)] + γ D T ( δ_t L d / 2 + (2 √d (C_g + LD) + δ_t L d) / √(2b) ) + T LD^2 γ^2 / 2
         = f(x_0) − γ T E[g(x_a)] + γ D T ( δ_t L d / 2 + (2 √d (C_g + LD) + δ_t L d) / √(2b) ) + T LD^2 γ^2 / 2,

where the equality follows from the definition of x_a in Option I of Algorithm 2, which gives E[g(x_a)] = (1/T) Σ_{t=0}^{T−1} E[g(x_t)]. Note that by optimality we have f(x_T) ≥ f(x*). Rearranging the above inequality, we obtain

E[g(x_a)] ≤ (f(x_0) − f(x*)) / (Tγ) + LD^2 γ / 2 + D · ( δ_t L d / 2 + (2 √d (C_g + LD) + δ_t L d) / √(2b) )
         ≤ D / √(2T) · ( √( L (f(x_0) − f(x*)) ) + 2 (L + C_g + LD) ),

where the second inequality is achieved by setting γ = √( 2 (f(x_0) − f(x*)) / (LD^2 T) ), b = Td and δ_t = √( 2 / (T d^2) ).
C PARAMETERS SETTINGS FOR SECTION 5
For the Frank-Wolfe white-box attack algorithm, we list the parameters used in Section 5 in Table 5.
Similarly, for the Frank-Wolfe black-box attack algorithm, we list the parameters used in Section 5 in Table 6.
We also list the hyperparameters we use for the baseline algorithms. Specifically, for PGD, we set a step size of 0.05 for the L2 case and 0.01 for the L∞ case. For CW, we set a step size of 0.002 for the L2 case and a step size of 0.005 for the L∞ case. The confidence is set to 0 and we perform 10 rounds of binary search for the constant, starting from 0.01 (L2 case) and 0.001 (L∞ case). For EAD, we use a step size of 0.01 and the same binary search strategy as CW, and β is set to 0.001. In terms of black-box experiments, for ZOO, we set a step size of 0.01 and the initial constant is set to 1 without binary search, to achieve better query complexity. For NES-PGD, we set a step size of 0.3 for the L2 case and 0.01 for the L∞ case.
D ADDITIONAL EXPERIMENTS
D.1 RESNET V2 WHITE-BOX ATTACK RESULTS
In this subsection, we present the white-box attack experiments on the ResNet V2 model. Tables 7 and 8 present our experimental results for L2 norm and L∞ norm based white-box attacks respectively. Among the baselines in the L2 norm case, surprisingly, the CW method cannot achieve the best L2 distortion as it does on the Inception V3 model. The EAD method is relatively faster than CW in terms of attack time, yet it has the largest distortion and a quite low success rate of 73.0%. PGD has the smallest average distortion in this setting, yet it also costs a lot of attack time. On the other hand, our proposed algorithm achieves the highest attack success rate within a very short attack time and with very small distortion. It significantly reduces the time complexity needed for effectively attacking data with large dimensionality. For the L∞ norm case, the CW method takes significantly longer and does not perform very well on average distortion either. Our proposed white-box attack algorithm, on the other hand, again achieves the shortest average attack time and a 100% success rate.
D.2 RESNET V2 BLACK-BOX ATTACK RESULTS
In this subsection, we present the black-box experiments on the ResNet V2 model. We again mainly focus on evaluating attack success rate, time, and number of queries needed. In the previous experiments on the Inception V3 model, we showed the performance of different black-box attack algorithms given a sufficient number of queries (i.e., 500,000 per attack per image), and essentially all
algorithms achieved a very high attack success rate (almost 100%). Now we examine a much harder case, where we reduce the number of allowed queries per attack per image to only 50,000. Tables 9 and 10 present our experimental results for L2 norm and L∞ norm based black-box attacks respectively. We still set ε = 5 for the L2 case and ε = 0.05 for the L∞ case.
For the L2 norm case, the ZOO method barely succeeds under the strict query limit of 50,000, while it typically requires over 10^6 queries to attack successfully. Our proposed Frank-Wolfe black-box attacks, on the other hand, achieve nearly 60% attack success rate under such a stringent query budget. For the L∞ norm case, both the NES-PGD method and ours achieve over 90% success rate. Even though they share similar average attack time and average number of queries needed, our Frank-Wolfe based methods still achieve the best results in terms of all three evaluation metrics.
Figure 3 illustrates the attack success rate against the number of queries for different algorithms in both L2 norm and L∞ norm based black-box attacks on the ResNet V2 model. Note that here we have a query limit of 50,000, which is especially hard for the L2 norm case. As we can see from Figure 3, our proposed Frank-Wolfe black-box attack algorithm (both options) achieves the best performance (highest attack success rate and fewest queries needed to achieve the same success rate).
D.3 VISUALIZATION EXAMPLES
For completeness, we also provide some visual illustrations of the adversarial examples generated by various algorithms. Figure 4 shows some adversarial examples generated through different L2 norm based white-box attacks. Figure 5 shows some adversarial examples generated through different L∞ norm based black-box attacks. | 1. What is the main contribution of the paper regarding fast adversarial attacks using the Frank-Wolfe algorithm?
2. What are the strengths and weaknesses of the proposed method compared to other algorithms?
3. How does the reviewer assess the significance of the paper's novelty and its potential impact on the field?
4. What are the limitations of the white-box attack experiments, and how could they be improved to provide more meaningful comparisons?
5. Can the authors provide more intuition or empirical support for the effectiveness of their modified algorithm with lambda>1?
6. How does the reviewer evaluate the claim of the first zeroth-order non-convex FW convergence rate, and what is its significance in the field?
7. What are the similarities and differences between Alg. 1 for T>1 and I-FGM, and how do they intuitively work better?
8. Can the authors clarify the affine invariance of the Frank-Wolfe gap and its implications?
9. What minor errors or typos can be found in the paper, such as those related to grid search, binary search, and terminology usage? | Review | Review
The paper proposes using the Frank-Wolfe algorithm for fast adversarial attacks. They prove upper bounds on the Frank-Wolfe gap and show experimentally that they can attack successfully much faster than other algorithms. In general I find the paper novel (to the best of my somewhat limited knowledge), interesting and well written. However I find the white-box experiments lacking as almost every method has 100% success rate. Fixing this would significantly improve the paper.
Main remarks:
- Need more motivation for faster white-box attack. One good motivation for example is adversarial training, e.g. Kurakin et al 2017 ‘ADVERSARIAL MACHINE LEARNING AT SCALE’ that would benefit greatly from faster attacks
- White-box attack experiments don't really prove the strength of the method, even with ImageNet experiments, as almost all attacks get a 100% success rate, making it hard to compare. Need to compare in more challenging settings where the success rate is meaningful, e.g. smaller epsilon or a more robust NN using some defence. Also, stating the 100% success rate in the abstract is a bit misleading for this reason.
- Something is a bit weird with the FGM results. While it is a weaker attack, a 0%/100% disparity between it and every other attack seems odd.
- The average distortion metric (that's unfavourable to your method anyway) doesn't really mean anything, as the constrained optimization has no incentive to find a value smaller than the constraint.
- Regarding lambda>1, you write that “we argue this modification makes our algorithm more general, and gives rise to better attack results”. I did not see any theoretical or empirical support for this in the paper. Also, it seems quite strange to me that making the FW step overshoot and then projecting back would be beneficial. Some intuitive explanation of why this should help and/or an empirical comparison would be a great addition.
- The authors claim that this is the first zeroth-order non-convex FW convergence rate, I am not familiar enough with the field to evaluate this claim and its significance.
- Alg. 1 for T>1 is very similar to I-FGM, but also ‘pulls’ x_t towards x_orig. It would be very useful to write the update more explicitly and compare and contrast these two very similar updates. This would give nice insight into why this should intuitively work better.
- I am not sure what the authors mean by “the Frank-Wolfe gap is affine invariant”. If we scale the input space by a, the gap should be scaled by a^2 - how/why is it invariant?
- I am not sure what you mean in 5.4 “we omit all grid search/ binary search steps…”
Minor remarks:
- In Remark 4.8, at the end, Options I and II are inverted by mistake.
- In 5.1, ImageNet results are normally reported as top-5 error rate rather than top-1 accuracy; it would be better to report that more familiar number.
- In the proof you wrongly use the term “telescope sum” twice; there is nothing telescopic about the sum, it is just bounded by the max value times the length.
ICLR | Title
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks
Abstract
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates. However, they usually suffer from poor time and query complexities, thereby limiting their practical usefulness. In this work, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on the non-convex Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/√T) convergence rate. The empirical results of attacking the Inception V3 model and the ResNet V2 model on the ImageNet dataset also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the highest attack success rate in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state-of-the-art.
1 INTRODUCTION
Deep Neural Networks (DNNs) have made many breakthroughs in different areas of artificial intelligence such as image classification (Krizhevsky et al., 2012; He et al., 2016a), object detection (Ren et al., 2015; Girshick, 2015), and speech recognition (Mohamed et al., 2012; Bahdanau et al., 2016). However, recent studies show that deep neural networks can be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015) – a tiny perturbation on an image that is almost invisible to human eyes could mislead a well-trained image classifier towards misclassification. Soon later this is proved to be not a coincidence: similar phenomena have been observed in other problems such as speech recognition (Carlini et al., 2016), visual QA (Xu et al., 2017), image captioning (Chen et al., 2017a), machine translation (Cheng et al., 2018), reinforcement learning (Pattanaik et al., 2018), and even on systems that operate in the physical world (Kurakin et al., 2016).
Depending on how much information an adversary can access, adversarial attacks can be classified into two classes: white-box attacks (Szegedy et al., 2013; Goodfellow et al., 2015) and black-box attacks (Papernot et al., 2016a; Chen et al., 2017c). In the white-box setting, the adversary has full access to the target model, while in the black-box setting, the adversary can only access the input and output of the target model but not its internal configurations. Among the approaches proposed for white-box and black-box attacks, optimization-based methods (Carlini & Wagner, 2017; Chen et al., 2017b;c; Ilyas et al., 2018) are the most effective: they usually achieve relatively low distortions and high attack success rates. However, these methods are far from efficient. In the white-box setting, they need to solve constrained optimization problems (Carlini & Wagner, 2017), and are usually significantly slower than the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) or Iterative FGSM (I-FGM) (Kurakin et al., 2016). Applying those methods to one or two examples is fine, yet in the case of attacking hundreds of thousands of examples, e.g., in adversarial training (Kurakin et al., 2016; Madry et al., 2018), this is far from satisfactory.
In the black-box setting, the problem becomes even more severe since these methods need to make gradient estimations (Chen et al., 2017c). Therefore, a large number of queries are needed to perform a successful attack, especially when the data dimension is large. For example, attacking a 299×299×3 ImageNet image may take hundreds of thousands of queries. This significantly limits their practical usefulness since they can be easily defeated by limiting the number of queries that an adversary can make to the target model.
In this work, we aim to examine the following question:
Can we improve the efficiency of the optimization-based attack algorithms? In other words, can we use less time and queries to conduct adversarial attacks?
In this work, we provide an affirmative answer to this question by proposing an efficient Frank-Wolfe optimization framework for both white-box and black-box attacks. In summary, we make the following main contributions:
• We propose a novel Frank-Wolfe based adversarial attack framework. The white-box attack algorithm is an iterative first-order method which admits the fast gradient sign method (FGSM) as a one-step special case. The corresponding black-box attack algorithm adopts zeroth-order optimization with two sensing vector options (either from the Euclidean unit sphere or from the standard Gaussian distribution).
• We show that the proposed white-box and black-box attack algorithms enjoy an O(1/√T) convergence rate. We also show that the query complexity of the proposed black-box attack algorithm is linear in the data dimension d.
• Our empirical results on attacking the Inception V3 model with the ImageNet dataset show that (i) the proposed white-box attack algorithm is more efficient than all the baseline white-box algorithms evaluated here, and (ii) the proposed black-box attack algorithm is highly efficient and is also the only algorithm that achieves a 100% attack success rate.
2 RELATED WORK
There is a large body of work on adversarial attacks. In this section, we review the most relevant work in both white-box and black-box attack settings, as well as the non-convex Frank-Wolfe optimization.
White-box Attacks: Szegedy et al. (2013) proposed to use box-constrained L-BFGS algorithm for conducting white-box attacks. Goodfellow et al. (2015) proposed the Fast Gradient Sign Method (FGSM) based on linearization of the network as a simple alternative to L-BFGS. Kurakin et al. (2016) proposed to iteratively perform one-step FGSM (Goodfellow et al., 2015) algorithm and clips the adversarial point back to the distortion limit after every iteration. It is called Basic Iterative Method (BIM) or I-FGM in the literature. Madry et al. (2018) showed that for the L∞ norm case, BIM/I-FGM is equivalent to Projected Gradient Descent (PGD), which is a standard tool for constrained optimization. Papernot et al. (2016b) proposed JSMA to greedily attack the most significant pixel based on the Jacobian-based saliency map. Moosavi-Dezfooli et al. (2016) proposed attack methods by projecting the data to the closest separating hyperplane. Carlini & Wagner (2017) introduced the so-called CW attack by proposing multiple new loss functions for generating adversarial examples. Chen et al. (2017b) followed CW’s framework and use an Elastic Net term as the distortion penalty.
Black-box Attacks: One popular family of black-box attacks (Hu & Tan, 2017; Papernot et al., 2016a; 2017) is based on the transferability of adversarial examples (Liu et al., 2018; Bhagoji et al., 2017), where an adversarial example generated for one DNN may be reused to attack other neural networks. This allows the adversary to construct a substitute model that mimics the targeted DNN, and then attack the constructed substitute model using white-box attack methods. However, this type of attack algorithms usually suffer from large distortions and relatively low success rates (Chen et al., 2017c). To address this issue, Chen et al. (2017c) proposed the Zeroth-Order Optimization (ZOO) algorithm that extends the CW attack to the black-box setting and uses a zeroth-order optimization approach to conduct the attack. Although ZOO achieves much higher attack success rates than the substitute model-based black-box attacks, it suffers from a poor query complexity since its naive implementation requires to estimate the gradients of all the coordinates (pixels) of the image. To improve its query complexity, several approaches have been proposed. For example, Tu et al. (2018) introduces an adaptive random gradient estimation algorithm and a well-trained Autoencoder to speed up the attack process. Ilyas et al. (2018) and Liu et al. (2018) improved ZOO’s query complexity by using Natural Evolutionary Strategies (NES) (Wierstra et al., 2014; Salimans et al., 2017) and active learning, respectively.
Non-convex Frank-Wolfe Algorithms: The Frank-Wolfe algorithm (Frank & Wolfe, 1956), also known as the conditional gradient method, is an iterative optimization method for constrained optimization problems. Jaggi (2013) revisited the Frank-Wolfe algorithm in 2013 and provided a stronger and more general convergence analysis in the convex setting. Yu et al. (2017) proved the first convergence rate for a Frank-Wolfe type algorithm in the non-convex setting. Lacoste-Julien (2016) provided the convergence guarantee for the Frank-Wolfe algorithm in the non-convex setting with adaptive step sizes. Reddi et al. (2016) further studied the convergence rate of the non-convex stochastic Frank-Wolfe algorithm in the finite-sum optimization setting. Very recently, Staib & Jegelka (2017) proposed to use Frank-Wolfe for distributionally robust training (Sinha et al., 2018). Balasubramanian & Ghadimi (2018) proved the convergence rate for a zeroth-order nonconvex Frank-Wolfe algorithm using a one-sided finite difference gradient estimator with standard Gaussian sensing vectors.
3 METHODOLOGY
3.1 NOTATIONS
Throughout the paper, scalars are denoted by lower case letters, vectors by lower case bold face letters, and sets by calligraphic upper case letters. For a vector x ∈ R^d, we denote the L_p norm of x by ‖x‖_p = (Σ_{i=1}^d |x_i|^p)^{1/p}. In particular, for p = ∞, the L_∞ norm of x is ‖x‖_∞ = max_{i=1,...,d} |x_i|. We denote by P_X(x) the projection of the vector x onto the set X.
3.2 PROBLEM FORMULATION
According to the attack purposes, attacks can be divided into two categories: untargeted attack and targeted attack. In particular, untargeted attack aims to turn the prediction into any incorrect label, while the targeted attack, which is considerably harder, requires to mislead the classifier to a specific target class. In this work, we follow the literature (Carlini & Wagner, 2017; Ilyas et al., 2018) and focus on the strictly harder targeted attack setting. It is worth noting that our proposed algorithm can be extended to untargeted attack straightforwardly.
Let us define f(·) as the classification loss function of the targeted DNN. For targeted attacks, we aim to learn an adversarial example x that is close enough to the original input xori and can be misclassified to the target class ytar. The corresponding optimization problem1 is defined as:

min_x f(x, ytar)
subject to ‖x − xori‖p ≤ ε. (3.1)
Evidently, the constraint set X := {x | ‖x − xori‖p ≤ ε} is a bounded convex set when p ≥ 1. Normally, p = 2 and p = ∞ are used to measure the distortion ‖x − xori‖p, resulting in the L2 attack model and the L∞ attack model, respectively. In this work, we study both attack models. In the sequel, since we mainly focus on the targeted attack case, we use f(x) to denote f(x, ytar) for simplicity.
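For concreteness, the projection PX onto the ε-ball has a simple closed form for both norms. The following NumPy sketch is our own illustrative helper, not part of the authors' released code; any additional box constraint on image pixels is left to the caller.

import numpy as np

def project_onto_ball(x, x_ori, eps, p):
    # Project x onto the constraint set X = {x : ||x - x_ori||_p <= eps}, for p = 2 or p = inf.
    delta = x - x_ori
    if p == 2:
        norm = np.linalg.norm(delta)
        if norm > eps:
            delta = delta * (eps / norm)
    elif p == np.inf:
        delta = np.clip(delta, -eps, eps)
    else:
        raise ValueError("only p = 2 and p = inf are handled in this sketch")
    return x_ori + delta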
3.3 FRANK-WOLFE WHITE-BOX ATTACKS
The Frank-Wolfe algorithm (Frank & Wolfe, 1956), also known as conditional gradient descent, is a popular optimization tool for constrained optimization. Different from PGD, which first performs a gradient descent step followed by a projection step at each iteration, the Frank-Wolfe algorithm calls a Linear Minimization Oracle (LMO) over the constraint set X at each iteration, i.e.,

vt ∈ argmin_{v∈X} ⟨v, ∇f(xt)⟩. (LMO)

The LMO can be seen as the minimization of the first-order Taylor expansion of f(·) at the point xt:

min_{v∈X} f(xt) + ⟨v − xt, ∇f(xt)⟩.

By calling the LMO, Frank-Wolfe solves this linear problem over X and then takes a weighted average of the LMO solution and the previous iterate to obtain the final update.
We present our proposed Frank-Wolfe white-box attack algorithm in Algorithm 1, which is built upon the original Frank-Wolfe algorithm. The key difference between Algorithm 1 and the standard Frank-Wolfe algorithm is in Line 4, where the LMO is called over a slightly relaxed constraint set Xλ := {x | ‖x − xori‖p ≤ λε} with λ ≥ 1, instead of the original constraint set X. When λ = 1, the set Xλ reduces to X, and Algorithm 1 reduces to standard Frank-Wolfe. We argue that this modification makes our algorithm more general, and gives rise to better attack results.

1Note that there is usually an additional constraint on the input variable x, e.g., x ∈ [0, 1]n for normalized image inputs.

Algorithm 1 Frank-Wolfe White-box Attack Algorithm
1: input: number of iterations T, step sizes {γt}, λ > 0, original image xori;
2: x0 = xori
3: for t = 0, . . . , T − 1 do
4:   vt = argmin_{v∈Xλ} ⟨v, ∇f(xt)⟩ // LMO
5:   dt = vt − xt
6:   xt+1 = xt + γt dt
7:   if λ > 1 then
8:     xt+1 = PX(xt+1)
9:   end if
10: end for
11: output: xT
The LMO solution itself can be expensive to obtain in general. Fortunately, applying Frank-Wolfe to solve (3.1) actually gives us a closed-form LMO solution. We provide the solutions of the LMO (Line 4 in Algorithm 1) for the L2 norm and L∞ norm cases respectively:

vt = −λε · ∇f(xt)/‖∇f(xt)‖2 + xori, (L2 norm)
vt = −λε · sign(∇f(xt)) + xori. (L∞ norm)
The derivation can be found in the supplemental materials.
Note that when T = 1 and λ = 1, substituting the above L∞ LMO solution into Algorithm 1 yields the final update x1 = x0 − γ0ε · sign(∇f(x0)), which reduces to FGSM2 when γ0 = 1. A similar derivation also applies to the L2 norm case. Therefore, just like PGD, our proposed Frank-Wolfe white-box attack also includes FGSM (FGM) as a one-step special instance.
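As an illustration of how the pieces fit together, the following NumPy sketch implements Algorithm 1 for the L∞ case using the closed-form LMO above. Here grad_f stands for a user-supplied gradient oracle (e.g., back-propagation through the target network), and the constant step size and default values are illustrative assumptions rather than the authors' released settings.

import numpy as np

def frank_wolfe_whitebox_linf(x_ori, grad_f, eps, T=20, lam=5.0, gamma=0.8):
    # Frank-Wolfe L_inf white-box attack (Algorithm 1), minimal sketch.
    x = x_ori.copy()
    for _ in range(T):
        g = grad_f(x)
        v = x_ori - lam * eps * np.sign(g)               # closed-form LMO over the relaxed set X_lambda
        x = x + gamma * (v - x)                          # convex-combination update
        if lam > 1.0:
            x = x_ori + np.clip(x - x_ori, -eps, eps)    # project back onto the original eps-ball
        x = np.clip(x, 0.0, 1.0)                         # box constraint for normalized image inputs
    return x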
3.4 FRANK-WOLFE BLACK-BOX ATTACKS
Next we consider the black-box setting, where we cannot perform back-propagation to calculate the gradient of the loss function anymore. Instead, we can only query the DNN system’s outputs with specific inputs. To clarify, here the output refers to the logit layer’s output (confidence scores for classification), not the final prediction label. The label-only setting is doable under our framework, but will incur extra difficulty such as designing new loss functions. For simplicity, here we consider the confidence score output.
We propose a zeroth-order Frank-Wolfe based algorithm to solve this problem. Algorithm 2 shows our proposed Frank-Wolfe black-box attack algorithm. The key difference between our proposed black-box attack and the white-box attack is one extra gradient estimation step, presented in Line 4 of Algorithm 2. Also note that for the final output, we provide two options. While Option II is the common choice in practice, Option I is also provided for the ease of theoretical analysis.
Like many other zeroth-order optimization algorithms (Shamir, 2017; Flaxman et al., 2005), Algorithm 3 uses symmetric finite differences to estimate the gradient and therefore removes the dependence on the back-propagation used in the white-box setting. Different from Chen et al. (2017c), we do not use the natural basis as our sensing vectors; instead, we provide two options: one samples vectors uniformly from the Euclidean unit sphere, and the other samples vectors from the standard multivariate Gaussian distribution. This greatly improves the gradient estimation efficiency compared with sensing along the natural basis, which can only estimate one coordinate of the gradient per query. In practice, both options provide competitive experimental results. It is worth noting that the NES method (Wierstra et al., 2014) with antithetic sampling (Salimans et al., 2017), as used in Ilyas et al. (2018), yields a formula similar to our Option II in Algorithm 3.
2The extra clipping operation in FGSM is to project to the additional box constraint for image classification task. We will also need this clipping operation at the end of each iteration for specific tasks such as image classification.
Algorithm 2 Frank-Wolfe Black-box Attack Algorithm
1: input: number of iterations T, step sizes {γt}, λ > 0, original image xori, target label ytar;
2: x0 = xori
3: for t = 0, . . . , T − 1 do
4:   qt = ZERO_ORD_GRAD_EST(xt) // Algorithm 3
5:   vt = argmin_{v∈Xλ} ⟨v, qt⟩
6:   dt = vt − xt
7:   xt+1 = xt + γt dt
8:   if λ > 1 then
9:     xt+1 = PX(xt+1)
10:  end if
11: end for
12: Option I: xa is chosen uniformly at random from {xt}_{t=1}^{T}
13: Option II: xa = xT
14: output: xa
Algorithm 3 Zeroth-Order Gradient Estimation (ZERO_ORD_GRAD_EST)
1: parameters: number of gradient estimation samples b, sampling parameter δt;
2: q = 0
3: for i = 1, . . . , b do
4:   Option I: sample ui uniformly from the Euclidean unit sphere with ‖ui‖2 = 1, and set
       q = q + (d / (2δt b)) · (f(xt + δt ui) − f(xt − δt ui)) · ui
5:   Option II: sample ui from the standard Gaussian distribution N(0, I), and set
       q = q + (1 / (2δt b)) · (f(xt + δt ui) − f(xt − δt ui)) · ui
6: end for
7: output: q
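The gradient estimation step is equally simple to sketch. The helper below implements Option I of Algorithm 3 with unit-sphere sensing vectors, where loss_f is a black-box oracle returning f(x, ytar); dropping it into the white-box sketch above in place of the gradient oracle yields the black-box attack of Algorithm 2. The default values are illustrative, not the authors' exact settings.

import numpy as np

def zeroth_order_grad_est(loss_f, x, b=25, delta=1e-3):
    # Two-point finite-difference estimator with unit-sphere sensing vectors (Algorithm 3, Option I).
    d = x.size
    q = np.zeros_like(x)
    for _ in range(b):
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)          # uniform direction on the Euclidean unit sphere
        q += d / (2.0 * delta * b) * (loss_f(x + delta * u) - loss_f(x - delta * u)) * u
    return q

Replacing the normalization by raw Gaussian vectors and the factor d/(2δb) by 1/(2δb) gives Option II.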
4 MAIN THEORY
In this section, we establish the convergence guarantees for our proposed Frank-Wolfe adversarial attack algorithms described in Section 3. First, we introduce the convergence criterion for our Frank-Wolfe adversarial attack framework.
4.1 CONVERGENCE CRITERION
The loss functions of common DNN models are generally nonconvex. In addition, (3.1) is a constrained optimization problem. For such general nonconvex constrained optimization, we adopt the Frank-Wolfe gap as the convergence criterion (since the gradient norm of f is no longer a proper criterion for constrained optimization problems):

g(xt) = max_{x∈X} ⟨x − xt, −∇f(xt)⟩.

Note that we always have g(xt) ≥ 0, and xt is a stationary point of the constrained optimization problem if and only if g(xt) = 0. Moreover, the Frank-Wolfe gap is affine invariant and is not tied to any specific choice of norm, which makes it a natural convergence criterion for Frank-Wolfe based algorithms.
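Since the maximization defining the gap is itself a linear problem over the ε-ball, the gap can be evaluated in closed form and monitored cheaply during an attack. The sketch below is our own helper for the L∞ constraint set, included only for illustration.

import numpy as np

def frank_wolfe_gap_linf(x, x_ori, grad, eps):
    # g(x) = max_{x' in X} <x' - x, -grad>, with X the L_inf ball of radius eps around x_ori.
    x_star = x_ori - eps * np.sign(grad)   # maximizer of <x', -grad> over X
    return float(np.sum((x_star - x) * (-grad)))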
4.2 CONVERGENCE GUARANTEE FOR FRANK-WOLFE WHITE-BOX ATTACK
Before providing the convergence guarantee for the Frank-Wolfe white-box attack (Algorithm 1), we introduce the following assumptions, which are essential to the convergence analysis.
Assumption 4.1. Function f(·) is L-smooth with respect to x, i.e., for any x, x′, it holds that

f(x′) ≤ f(x) + ∇f(x)⊤(x′ − x) + (L/2) · ‖x′ − x‖2².
Assumption 4.1 is a standard assumption in nonconvex optimization, and is also adopted in other Frank-Wolfe literature such as Lacoste-Julien (2016); Reddi et al. (2016). Note that even though the smoothness assumption does not hold for general DNN models, a recent study (Santurkar et al., 2018) shows that batch normalization that is used in many modern DNNs such as Inception V3
model, actually makes the optimization landscape significantly smoother 3. This justifies the validity of Assumption 4.1.
Assumption 4.2. Set X is bounded with diameter D, i.e., ‖x− x′‖2 ≤ D for all x,x′ ∈ X .
Assumption 4.2 implies that the input space is bounded. For common tasks such as image classification, given that images have a bounded pixel range and ε is a small constant, this assumption trivially holds.
Now we present the theorem, which characterizes the convergence rate of our proposed Frank-Wolfe white-box adversarial attack algorithm presented in Algorithm 1.
Theorem 4.3. Under Assumptions 4.1 and 4.2, let γt = γ = √( 2(f(x0) − f(x∗)) / (LD²T) ) and denote g̃T = min_{1≤k≤T} g(xk), where {xk}_{k=1}^{T} are the iterates of Algorithm 1 with λ = 1. Then we have

g̃T ≤ √( LD²(f(x0) − f(x∗)) / (2T) ),

where x∗ is the optimal solution to (3.1).
Remark 4.4. Theorem 4.3 suggests that our proposed Frank-Wolfe white-box attack algorithm achieves an O(1/√T) rate of convergence. Note that a similar result has been proved in Lacoste-Julien (2016) under a different choice of step size.
4.3 CONVERGENCE GUARANTEE FOR FRANK-WOLFE BLACK-BOX ATTACK
Next we analyze the convergence of our proposed Frank-Wolfe black-box adversarial attack algorithm presented in Algorithm 2.
In order to prove the convergence of our proposed Frank-Wolfe black-box attack algorithm, we need the following additional assumption that ‖∇f(0)‖2 is bounded.
Assumption 4.5. The gradient of f(·, y) at the zero point satisfies max_y ‖∇f(0, y)‖2 ≤ Cg.
Following the analysis in Shamir (2017), let fδ(x) = Eu[f(x+δu)], which is the smoothed version of f(x). This smoothed function value plays a central role in our theoretical analysis, since it bridges the finite difference gradient approximation with the actual gradient. The following lemma shows this relationship.
Lemma 4.6. For the gradient estimator qt in Algorithm 3, its expectation and variance satisfy

E[qt] = ∇fδ(xt),  E‖qt − E[qt]‖2² ≤ (1/b) · ( 2d(Cg + LD)² + (1/2)δt²L²d² ).
Now we present the theorem that characterizes the convergence rate of Algorithm 2.
Theorem 4.7. Under Assumptions 4.1, 4.2 and 4.5, let γt = γ = √( 2(f(x0) − f(x∗)) / (LD²T) ), b = Td and δt = √( 2/(Td²) ), and suppose we use Option I in Algorithm 2 and Option I in Algorithm 3. Then the output xa of Algorithm 2 with λ = 1 satisfies

E[g(xa)] ≤ (D/√(2T)) · ( √( L(f(x0) − f(x∗)) ) + 2(L + Cg + LD) ),

where x∗ is the optimal solution to (3.1).
Remark 4.8. Theorem 4.7 suggests that Algorithm 2 also enjoys an O(1/√T) rate of convergence. In terms of query complexity, the total number of queries needed is Tb = T²d, which is linear in the data dimension d. In fact, in the experiments we observe that b can be substantially smaller than the theory suggests, e.g., b = 25. Note that although we only prove the result for Option I in Algorithm 3, it can be readily extended to Option II (the Gaussian sensing vector case).
3The original argument in Santurkar et al. (2018) refers to smoothness with respect to each layer's parameters. Note that the first layer's parameters play a mirrored role (in terms of backpropagation) to the network inputs. Therefore, the argument in Santurkar et al. (2018) can also be applied here with respect to the network inputs.
5 EXPERIMENTS
In this section, we present the experimental results for our proposed Frank-Wolfe attack framework against other state-of-the-art adversarial attack algorithms in both white-box and black-box settings. All of our experiments are conducted on Amazon AWS p3.2xlarge servers, which come with an Intel Xeon E5 CPU and one NVIDIA Tesla V100 GPU (16GB RAM). All experiments are implemented on the TensorFlow platform (version 1.10.0) with Python 3.6.4.
5.1 EVALUATION SETUP AND METRICS
We test the attack effectiveness of all algorithms by evaluating on a pre-trained Inception V3 model (Szegedy et al., 2016) and a ResNet V2 50 (He et al., 2016b) model that are trained on ImageNet dataset (Deng et al., 2009). The pre-trained Inception V3 model is reported to have a 78.0% top-1 accuracy and a 93.9% top-5 accuracy. The pre-trained ResNet V2 model is reported to have a 75.6% top-1 and a 92.8% top-5 accuracy. We randomly choose 500 images from the ImageNet validation set that are verified to be correctly classified by the pre-trained model and also randomly choose a target class for each image. Each image has a dimension of 299 × 299 × 3 and we test all attack algorithms through the same randomly chosen data samples and target labels.
We test both L2 norm based and L∞ norm based attacks. In the white-box setting, we perform binary search / grid search for the best distortion parameter (ε in our formulation and c in CW's regularized formulation). In the black-box setting, we set ε = 5 for the L2 norm based attack and ε = 0.05 for the L∞ based attack. For white-box attacks, we restrict each method to a maximum of 1,000 iterations per attack. For black-box attacks, we set a maximum query limit of 500,000 per attack per image for each method.
For all algorithms, we stop the attack when a successful adversarial example is found. For our proposed black-box attack, we use Option II in Algorithm 2 and test both options in Algorithm 3. We set the number of gradient estimation samples to b = 25. More detailed descriptions of the parameter settings can be found in the supplemental materials.
We evaluate the final performance through attack success rate where the success is defined as making the classifier output the exact target class label (not any incorrect labels). We also measure average attack time per image, average distortion (only on successful attacked samples) and average number of queries needed (only for black-box attack) per image. For a fair time comparison, even though some of the algorithms including ours can be written in batch form (attack multiple images at one time), all algorithms are set to attack one image at a time.
Due to page limit, we leave all experimental results on ResNet V2 model in the supplemental materials.
5.2 BASELINE METHODS
We compare the proposed algorithms with several state-of-the-art baseline algorithms. Specifically, we compare the proposed white-box attack algorithm with 4 (i) PGD (Madry et al., 2018) (which is essentially I-FGM (Kurakin et al., 2016)), (ii) CW attack (Carlini & Wagner, 2017) and (iii) EAD attack (Chen et al., 2017b). We compare the proposed black-box attack algorithm with (i) ZOO attack (Chen et al., 2017c) and (ii) NES-PGD attack (Ilyas et al., 2018).
5.3 WHITE-BOX ATTACK EXPERIMENTS
In this subsection, we present the white-box attack experiments on the Inception V3 model. Tables 1 and 2 present our experimental results for L2 norm and L∞ norm based white-box attacks respectively. As we can observe from the tables, the attack success rate is 100% for every method. For the other baselines in the L2 norm case, the CW method achieves the smallest average distortion, yet it comes with an expensive time cost. The EAD method has neither a time advantage nor a distortion advantage in this experiment, probably due to its different motivation in attacking. PGD has moderate average distortion, yet it also costs quite some time to finish the attack. On the other hand, our proposed algorithm achieves the shortest attack time with moderate distortion. It significantly reduces the time complexity needed for attacking data with large dimensionality. For the L∞ norm case, the CW method takes significantly longer time and does not perform very well on average distortion either.
4We did not compare with FGM (FGSM) (Goodfellow et al., 2015) since it basically has zero success rate for targeted attack on Inception V3 or ResNet V2 models.
This is largely because the original CW attack was designed for the L2 norm; applying it to the L∞ norm requires a special design, which sacrifices its runtime performance. Again, our proposed white-box attack algorithm achieves the shortest average attack time and a moderate average distortion.
In Figure 1, we also examine the effect of λ in our proposed Frank-Wolfe white-box attack algorithm. We plot the objective loss function value of attacking one example against the number of iterations for both L2 and L∞ based white-box attack on Inception V3 model. From the plot, we can see that larger λ indeed leads to faster convergence.
5.4 BLACK-BOX ATTACK EXPERIMENTS
In this subsection, we present the black-box attack experiments on Inception V3 model. For blackbox attacks, attack success rate, time and number of queries needed are more meaningful evaluation metrics than distortion distances. Therefore, we omit all the grid search / binary search steps that are used in the white-box setting since extra time / queries are needed for finding parameters that can obtain better distortion distances.
Tables 3 and 4 present our experimental results for L2 norm and L∞ norm based black-box attacks respectively. For the ZOO method, note that it only has an L2 norm version and it follows CW's framework, and thus uses a different loss function and problem formulation (it cannot exactly control the adversarial example to stay within the distortion limit; we manage to keep the average distortion around ε for ZOO, while the other methods have average distortions very close to ε). Furthermore, we can observe that ZOO is quite slow in this task. Attacking a single image can take up to 2 hours for ZOO, and it is only able to achieve a 74.8% success rate (compared with the 88.9% success rate reported in the original paper; we think the main reason is that the query limit here is only half of that in the original paper). The NES-PGD method, while greatly improving on ZOO's performance, still cannot achieve a 100% success rate in either attack model and takes relatively more time and queries. In sharp contrast, our proposed Frank-Wolfe black-box attacks (both Option I and Option II) achieve the highest success rate in both L2 norm and L∞ norm based black-box attacks and further largely improve the attack efficiency.
Figure 2 shows the attack success rate against the number of queries for the different algorithms in both L2 norm and L∞ norm based black-box attacks on the Inception V3 model. As we can see from the plot, our proposed Frank-Wolfe black-box attack algorithm (both options) achieves the highest attack success rate and the best efficiency (fewest queries needed to achieve the same success rate), especially in the L2 norm case.
6 CONCLUSIONS
In this work, we propose a Frank-Wolfe framework for efficient and effective adversarial attacks. Our proposed white-box and black-box attack algorithms enjoy an O(1/ √ T ) rate of convergence, and the query complexity of the proposed black-box attack algorithm is linear in data dimension d. Finally, our empirical study on attacking Inception V3 model with ImageNet dataset yields a 100% attack success rate for our proposed algorithms, even in the setting of black-box attack.
A LINEAR MINIMIZATION ORACLE (LMO) SOLUTIONS
Denote u = (v − xori)/(λε). Up to an additive constant that does not depend on v, the linear minimization problem

min_{‖v−xori‖p ≤ λε} ⟨v, ∇f(xt)⟩

can be rewritten as min_{‖u‖p ≤ 1} λε · ⟨u, ∇f(xt)⟩, whose minimizer is exactly the maximizer of

max_{‖u‖p ≤ 1} λε · ⟨u, −∇f(xt)⟩ = λε · ‖∇f(xt)‖p∗,

where ‖ · ‖p∗ denotes the dual norm of ‖ · ‖p. For the p = 2 case, the maximum is attained when

⟨(v − xori)/(λε), −∇f(xt)⟩ = ‖∇f(xt)‖2,

which immediately implies that

v = −λε · ∇f(xt)/‖∇f(xt)‖2 + xori.

For the p = ∞ case, the maximum is attained when

⟨(v − xori)/(λε), −∇f(xt)⟩ = ‖∇f(xt)‖1,

which immediately implies that

v = −λε · sign(∇f(xt)) + xori.
For ease of comparison, we show the full update formula (before the final projection step) of our algorithm. In detail, for the p = ∞ case, our algorithm takes the following update:

xt+1 = (1 − γt)xt + γt vt = (1 − γt)xt − λγtε · sign(∇f(xt)) + γt · xori = xt − λγtε · sign(∇f(xt)) − γt(xt − xori),

and for the p = 2 case, it takes

xt+1 = xt − λγtε · ∇f(xt)/‖∇f(xt)‖2 − γt(xt − xori).
Compared with PGD, the full update (before final projection step) of Frank-Wolfe white-box attack includes an extra parameter λ before the normalized gradient, as well as an extra term (xt − xori). This difference makes the behavior of Frank-Wolfe based attacks different from that of PGD based attacks.
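For a side-by-side view of the two update rules, the following small NumPy sketch (names and step-size conventions are ours) computes one L∞ step of each, before any projection or clipping.

import numpy as np

def pgd_step_linf(x, grad, alpha):
    # PGD: signed-gradient descent step anchored at the current iterate.
    return x - alpha * np.sign(grad)

def fw_step_linf(x, x_ori, grad, eps, lam, gamma):
    # Frank-Wolfe: signed-gradient move scaled by lam*gamma*eps plus a pull-back toward x_ori.
    return x - lam * gamma * eps * np.sign(grad) - gamma * (x - x_ori)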
B PROOF OF THE MAIN THEORY IN SECTION 4
B.1 PROOF OF THEOREM 4.3
Proof. For simplicity, we write f(xt) for f(xt, ytar) in the rest of the proof. First, by Assumption 4.1, we have

f(xt+1) ≤ f(xt) + ∇f(xt)⊤(xt+1 − xt) + (L/2)‖xt+1 − xt‖2²
        = f(xt) + γ∇f(xt)⊤(vt − xt) + (Lγ²/2)‖vt − xt‖2²
        ≤ f(xt) + γ∇f(xt)⊤(vt − xt) + LD²γ²/2,

where the last inequality uses the bounded domain condition in Assumption 4.2. Note that by the definition of the Frank-Wolfe gap, we have

f(xt+1) ≤ f(xt) − γ g(xt) + LD²γ²/2.

Summing the above inequality over t, we obtain

f(xT) ≤ f(x0) − Σ_{k=0}^{T−1} γ g(xk) + TLD²γ²/2 ≤ f(x0) − γT g̃T + TLD²γ²/2,

where the second inequality follows from the definition of g̃T. Note that by optimality we have f(xT) ≥ f(x∗). Rearranging the above inequality, we obtain

g̃T ≤ (f(x0) − f(x∗))/(Tγ) + LD²γ/2 ≤ √( LD²(f(x0) − f(x∗)) / (2T) ),

where the second inequality is achieved when γ = √( 2(f(x0) − f(x∗)) / (LD²T) ).
B.2 PROOF OF LEMMA 4.6
Proof. For simplicity, we write f(·) for f(·, ytar) in the rest of the proof. Let ψi = (d/(2δt b)) (f(xt + δt ui) − f(xt − δt ui)) ui. For the first part, we have

E[ψi] = Eu[ (d/(2δt b)) (f(xt + δt ui) − f(xt − δt ui)) ui ]
      = Eu[ (d/(2δt b)) f(xt + δt ui) ui ] + Eu[ (d/(2δt b)) f(xt − δt ui) (−ui) ]
      = Eu[ (d/(δt b)) f(xt + δt ui) ui ]
      = (1/b) ∇fδ(xt),

where the third equality holds due to the symmetry of ui and the last equality follows from Lemma 4.1(a) in Gao et al. (2018). Therefore, we have

E[qt] = E[ Σ_{i=1}^{b} ψi ] = ∇fδ(xt).

For the second part, note that the ψi are independent of each other due to the independence of the ui, so we have

E‖qt − E[qt]‖2² = E‖ Σ_{i=1}^{b} (ψi − E[ψi]) ‖2² = Σ_{i=1}^{b} E‖ψi − E[ψi]‖2² ≤ Σ_{i=1}^{b} E‖ψi‖2².

Now consider E‖ψi‖2²:

E‖ψi‖2² = Eu‖ (d/(2δt b)) (f(xt + δt ui) − f(xt) + f(xt) − f(xt − δt ui)) ui ‖2²
        ≤ (1/(2b²)) Eu‖ (d/δt)(f(xt + δt ui) − f(xt)) ui ‖2² + (1/(2b²)) Eu‖ (d/δt)(f(xt) − f(xt − δt ui)) ui ‖2²
        = (1/b²) Eu‖ (d/δt)(f(xt + δt ui) − f(xt)) ui ‖2²
        ≤ (1/b²) ( 2d‖∇f(xt)‖2² + (1/2)δt²L²d² ),

where the first inequality is due to the fact that (a + b)² ≤ 2a² + 2b², the second equality follows from the symmetry of ui, and the last inequality is by Lemma 4.1(b) in Gao et al. (2018). Also note that by Assumptions 4.1 and 4.5 we have

‖∇f(xt)‖2² ≤ ( ‖∇f(0)‖2 + L‖xt‖2 )² ≤ (Cg + LD)².

Combining all of the above results, we obtain

E‖qt − E[qt]‖2² ≤ (1/b) ( 2d(Cg + LD)² + (1/2)δt²L²d² ).
B.3 PROOF OF THEOREM 4.7
Proof. For simplicity, we write f(xt) for f(xt, ytar) in the rest of the proof. First, by Assumption 4.1, we have

f(xt+1) ≤ f(xt) + ∇f(xt)⊤(xt+1 − xt) + (L/2)‖xt+1 − xt‖2²
        = f(xt) + γ∇f(xt)⊤(vt − xt) + (Lγ²/2)‖vt − xt‖2²
        ≤ f(xt) + γ∇f(xt)⊤(vt − xt) + LD²γ²/2
        = f(xt) + γ qt⊤(vt − xt) + γ(∇f(xt) − qt)⊤(vt − xt) + LD²γ²/2,

where the second inequality uses the bounded domain condition in Assumption 4.2. Now define the auxiliary quantity

v̂t = argmin_{v∈X} ⟨v, ∇f(xt)⟩.

According to the definition of g(xt), this immediately implies g(xt) = ⟨v̂t − xt, −∇f(xt)⟩. Then we further have

f(xt+1) ≤ f(xt) + γ qt⊤(v̂t − xt) + γ(∇f(xt) − qt)⊤(vt − xt) + LD²γ²/2
        = f(xt) + γ∇f(xt)⊤(v̂t − xt) + γ(∇f(xt) − qt)⊤(vt − v̂t) + LD²γ²/2
        = f(xt) − γ g(xt) + γ(∇f(xt) − qt)⊤(vt − v̂t) + LD²γ²/2
        ≤ f(xt) − γ g(xt) + γD · ‖∇f(xt) − qt‖2 + LD²γ²/2,

where the first inequality follows from the optimality of vt in Algorithm 2 and the last inequality holds due to the Cauchy-Schwarz inequality. Taking expectations on both sides of the above inequality, we have

E[f(xt+1)] ≤ E[f(xt)] − γE[g(xt)] + γD · E‖∇f(xt) − qt‖2 + LD²γ²/2
           ≤ E[f(xt)] − γE[g(xt)] + γD · ( ‖∇f(xt) − E[qt]‖2 + E‖qt − E[qt]‖2 ) + LD²γ²/2
           ≤ E[f(xt)] − γE[g(xt)] + γD · ( ‖∇f(xt) − E[qt]‖2 + √( E‖qt − E[qt]‖2² ) ) + LD²γ²/2
           ≤ E[f(xt)] − γE[g(xt)] + γD · ( ‖∇f(xt) − ∇fδ(xt)‖2 + √( (4d(Cg + LD)² + δt²L²d²) / (2b) ) ) + LD²γ²/2
           ≤ E[f(xt)] − γE[g(xt)] + γD · ( δtLd/2 + (2√d (Cg + LD) + δtLd) / √(2b) ) + LD²γ²/2,

where the second inequality follows from the triangle inequality, the third inequality is due to Jensen's inequality, the fourth inequality holds due to Lemma 4.6, and the last inequality uses the bound ‖∇f(xt) − ∇fδ(xt)‖2 ≤ δtLd/2 together with √(a + b) ≤ √a + √b.

Summing the above inequality over t, we obtain

E[f(xT)] ≤ f(x0) − Σ_{t=0}^{T−1} γE[g(xt)] + γDT · ( δtLd/2 + (2√d (Cg + LD) + δtLd) / √(2b) ) + TLD²γ²/2
         ≤ f(x0) − γT E[g(xa)] + γDT · ( δtLd/2 + (2√d (Cg + LD) + δtLd) / √(2b) ) + TLD²γ²/2,

where the second inequality follows from the definition of xa in Option I of Algorithm 2. Note that by optimality we have f(xT) ≥ f(x∗). Rearranging the above inequality, we obtain

E[g(xa)] ≤ (f(x0) − f(x∗))/(Tγ) + LD²γ/2 + D · ( δtLd/2 + (2√d (Cg + LD) + δtLd) / √(2b) )
         ≤ (D/√(2T)) · ( √( L(f(x0) − f(x∗)) ) + 2(L + Cg + LD) ),

where the second inequality is achieved by setting γ = √( 2(f(x0) − f(x∗)) / (LD²T) ), b = Td, and δt = √( 2/(Td²) ).
C PARAMETERS SETTINGS FOR SECTION 5
For the Frank-Wolfe white-box attack algorithm, we list the parameters used in Section 5 in Table 5.
Similarly, for the Frank-Wolfe black-box attack algorithm, we list the parameters used in Section 5 in Table 6.
We also list the hyperparameters used for the baseline algorithms. Specifically, for PGD, we set a step size of 0.05 for the L2 case and 0.01 for the L∞ case. For CW, we set a step size of 0.002 for the L2 case and 0.005 for the L∞ case. The confidence is set to 0 and we perform 10 rounds of binary search for the constant, starting from 0.01 (L2 case) and 0.001 (L∞ case). For EAD, we use a step size of 0.01 and the same binary search strategy as CW, with β set to 0.001. In terms of the black-box experiments, for ZOO, we set a step size of 0.01 and the initial constant is set to 1 without binary search, to achieve better query complexity. For NES-PGD, we set a step size of 0.3 for the L2 case and 0.01 for the L∞ case.
D ADDITIONAL EXPERIMENTS
D.1 RESNET V2 WHITE-BOX ATTACK RESULTS
In this subsection, we present the white-box attack experiments on the ResNet V2 model. Tables 7 and 8 present our experimental results for L2 norm and L∞ norm based white-box attacks respectively. For the baselines in the L2 norm case, surprisingly, the CW method does not achieve the best L2 distortion as it does on the Inception V3 model. The EAD method is relatively faster than CW in terms of attack time, yet it has the largest distortion and a quite low success rate of 73.0%. PGD has the smallest average distortion in this setting, yet it also costs a lot of attack time. On the other hand, our proposed algorithm achieves the highest attack success rate within a very short attack time and with very small distortion. It significantly reduces the time complexity needed for effectively attacking data with large dimensionality. For the L∞ norm case, the CW method takes significantly longer time and does not perform very well on average distortion either. Our proposed white-box attack algorithm, on the other hand, again achieves the shortest average attack time and a 100% success rate.
D.2 RESNET V2 BLACK-BOX ATTACK RESULTS
In this subsection, we present the black-box experiments on the ResNet V2 model. We again mainly focus on evaluating the attack success rate, time, and number of queries needed. In the previous experiments on the Inception V3 model, we showed the performance of the different black-box attack algorithms given a sufficient number of queries (i.e., 500,000 per attack per image), where basically all algorithms achieve a very high attack success rate (almost 100%). Now we examine a much harder case, where we reduce the number of allowed queries per attack per image to only 50,000. Tables 9 and 10 present our experimental results for L2 norm and L∞ norm based black-box attacks respectively. We still set ε = 5 for the L2 case and ε = 0.05 for the L∞ case.
For the L2 norm case, the ZOO method barely succeeds under the strict query limit of 50,000, since it typically requires over 10^6 queries to attack successfully. Our proposed Frank-Wolfe black-box attacks, on the other hand, achieve nearly 60% attack success rate under such a stringent query budget. For the L∞ norm case, both the NES-PGD method and ours achieve over 90% success rate. Even though they share similar average attack times and average numbers of queries, our Frank-Wolfe based methods still achieve the best results on all three evaluation metrics.
Figure 3 illustrates the attack success rate against the number of queries for the different algorithms in both L2 norm and L∞ norm based black-box attacks on the ResNet V2 model. Note that here we have a query limit of 50,000, which is especially hard for the L2 norm case. As we can see from Figure 3, our proposed Frank-Wolfe black-box attack algorithm (both options) achieves the best performance (highest attack success rate and fewest queries needed to achieve the same success rate).
D.3 VISUALIZATION EXAMPLES
For completeness, we also provide some visual illustrations of the adversarial examples generated by the various algorithms. Figure 4 shows adversarial examples generated by different L2 norm based white-box attacks. Figure 5 shows adversarial examples generated by different L∞ norm based black-box attacks. | 1. What is the focus of the paper regarding Frank-Wolfe algorithm?
2. What are the strengths and weaknesses of the proposed approach compared to other baseline methods?
3. Do you have any concerns about the experiments conducted in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for additional experiments or improvements to strengthen the results? | Review | Review
The paper investigates the Frank-Wolfe (FW) algorithm for constructing adversarial examples both in a white-box and black-box setting. The authors provide both a theoretical analysis (convergence to a stationary point) and experiments for an InceptionV3 network on ImageNet. The main claim is that the proposed algorithm can construct adversarial examples faster than various baselines (PGD, I-FGSM, CW, etc.), and from fewer queries in a black-box setting.
The FW algorithm is a classical method in optimization, but (to the best of my knowledge) has not yet been evaluated for constructing adversarial examples. Hence it is a natural question to understand whether FW performs significantly better than current algorithms in this context. Indeed, the authors find that FW is 6x - 20x faster for constructing white-box adversarial examples than a range of relevant baselines, which is a significant speed-up. However, there are several points about the experiments that are unclear to me:
- It is well known that the running times of optimization algorithms are highly dependent on various hyperparameters such as the step size. But the authors do not seem to describe how they chose the hyperparameters for the baseline algorithms. Hence it is unclear how large the running time improvement is compared to a well-tuned baseline.
- Other algorithms in the comparison achieve a better distortion (smaller perturbation). Since finding an adversarial with smaller perturbation is a harder problem, it is unclear how the algorithms compare for finding adversarial examples with similar distortion. Instead of reporting a single time-vs-distortion data point, the authors could show the full trade-off curve.
- The authors only provide running times, not the number of iterations. In principle all the algorithms should have a similar bottleneck in each iteration (computing a gradient for the input image), but it would be good to verify this with an iteration count vs success rate (or distortion) plot. This would also allow the authors to compare their theoretical iteration bound with experimental data.
In addition to these three main points, the authors could strengthen their results by providing experiments on another dataset (e.g., CIFAR-10) or model architecture (e.g., a ResNet), and by averaging over a larger number of test data points (currently 200).
Overall, I find the paper a promising contribution. But until the authors provide a more thorough experimental evaluation, I hesitate to recommend acceptance.
Additional comments:
The introduction contains a few statements that may paint an incomplete or confusing picture of the current literature in adversarial attacks on neural networks:
* The abstract claims that the poor time complexity of adversarial attacks limits their practical usefulness. However, the running time of attacks is typically measured in seconds and should not be the limiting element in real-world attacks on deep learning systems. I am not aware of a setting where the running time of an attack is the main computational bottleneck (outside adversarial training).
* The introduction distinguishes between "gradient-based methods" and "optimization-based methods". This distinction is potentially confusing to a reader since the gradient-based methods can be seen as optimization algorithms, and the optimization-based methods rely on gradients.
* The introduction claims that black-box attacks need to estimate gradients coordinate-wise. However, this is not the case already in some of the prior work that uses random directions for estimating gradients (e.g., the cited paper by Ilyas et al.)
I encourage the authors to clarify these points in an updated version of their paper. |
ICLR | Title
Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks
Abstract
We propose a new, multi-component energy function for energy-based Generative Adversarial Networks (GANs) based on methods from the image quality assessment literature. Our approach expands on the Boundary Equilibrium Generative Adversarial Network (BEGAN) by outlining some of the short-comings of the original energy and loss functions. We address these short-comings by incorporating an l1 score, the Gradient Magnitude Similarity score, and a chrominance score into the new energy function. We then provide a set of systematic experiments that explore its hyper-parameters. We show that each of the energy function’s components is able to represent a slightly different set of features, which require their own evaluation criteria to assess whether they have been adequately learned. We show that models using the new energy function are able to produce better image representations than the BEGAN model in predicted ways.
1 INTRODUCTION
1.1 IMPROVING LEARNED REPRESENTATIONS FOR GENERATIVE MODELING
Radford et al. (2015) demonstrated that Generative Adversarial Networks (GANs) are a good unsupervised technique for learning representations of images for the generative modeling of 2D images. Since then, a number of improvements have been made. First, Zhao et al. (2016) modified the error signal of the deep neural network from the original, single parameter criterion to a multi-parameter criterion using auto-encoder reconstruction loss. Berthelot et al. (2017) then further modified the loss function from a hinge loss to the Wasserstein distance between loss distributions. For each modification, the proposed changes improved the resulting output to visual inspection (see Appendix A Figure 4, Row 1 for the output of the most recent, BEGAN model). We propose a new loss function (used in a model we call the scaled BEGAN GMSM), building on the changes of the BEGAN model, that further modifies the loss function to handle a broader range of image features within its internal representation.
1.2 GENERATIVE ADVERSARIAL NETWORKS
Generative Adversarial Networks are a form of two-sample or hypothesis testing that uses a classifier, called a discriminator, to distinguish between observed (training) data and data generated by the model or generator. Training is then simplified to a competing (i.e., adversarial) objective between the discriminator and generator, where the discriminator is trained to better differentiate training from generated data, and the generator is trained to better trick the discriminator into thinking its generated data is real. The convergence of a GAN is achieved when the generator and discriminator reach a Nash equilibrium, from a game theory point of view (Zhao et al., 2016).
In the original GAN specification, the task is to learn the generator’s distribution pG over data x (Goodfellow et al., 2014). To accomplish this, one defines a generator function G(z; θG), which produces an image using a noise vector z as input, and G is a differentiable function with parameters θG. The discriminator is then specified as a second function D(x; θD) that outputs a scalar representing the probability that x came from the data rather than pG. D is then trained to maximize the probability of assigning the correct labels to the data and the image output of G while G
is trained to minimize the probability that D assigns its output to the fake class, or 1 − D(G(z)). Although G and D can be any differentiable functions, we will only consider deep convolutional neural networks in what follows.
Zhao et al. (2016) initially proposed a shift from the original single-dimensional criterion—the scalar class probability—to a multidimensional criterion by constructing D as an autoencoder. The image output by the autoencoder can then be directly compared to the output of G using one of the many standard distance functions (e.g., l1 norm, mean square error). However, Zhao et al. (2016) also proposed a new interpretation of the underlying GAN architecture in terms of an energy-based model (LeCun et al., 2006).
1.3 ENERGY-BASED GENERATIVE ADVERSARIAL NETWORKS
The basic idea of energy-based models (EBMs) is to map an input space to a single scalar or set of scalars (called its “energy”) via the construction of a function (LeCun et al., 2006). Learning in this framework modifies the energy surface such that desirable pairings get low energies while undesirable pairings get high energies. This framework allows for the interpretation of the discriminator (D) as an energy function that lacks any explicit probabilistic interpretation (Zhao et al., 2016). In this view, the discriminator is a trainable cost function for the generator that assigns low energy values to regions of high data density and high energy to the opposite. The generator is then interpreted as a trainable parameterized function that produces samples in regions assigned low energy by the discriminator. To accomplish this setup, Zhao et al. (2016) first define the discriminator’s energy function as the mean square error of the reconstruction loss of the autoencoder, or:
ED(x) = ||Decoder(Encoder(x))− x|| (1)
Zhao et al. (2016) then define the loss function for their discriminator using a form of margin loss.
LD(x, z) = ED(x) + [m− ED(G(z))]+ (2)
where m is a constant and [·]+ = max(0, ·). They define the loss function for their generator:
LG(z) = ED(G(z)) (3)
The authors then prove that, if the system reaches a Nash equilibrium, then the generator will produce samples that cannot be distinguished from the dataset. Problematically, simple visual inspection can easily distinguish the generated images from the dataset.
1.4 DEFINING THE PROBLEM
It is clear that, despite the mathematical proof of Zhao et al. (2016), humans can distinguish the images generated by energy-based models from real images. There are two direct approaches that could provide insight into this problem, both of which are outlined in the original paper. The first approach that is discussed by Zhao et al. (2016) changes Equation 2 to allow for better approximations than m. The BEGAN model takes this approach. The second approach addresses Equation 1, but was only implicitly addressed when (Zhao et al., 2016) chose to change the original GAN to use the reconstruction error of an autoencoder instead of a binary logistic energy function. We chose to take the latter approach while building on the work of BEGAN.
Our main contributions are as follows:
• An energy-based formulation of BEGAN’s solution to the visual problem. • An energy-based formulation of the problems with Equation 1. • Experiments that explore the different hyper-parameters of the new energy function. • Evaluations that provide greater detail into the learned representations of the model. • A demonstration that scaled BEGAN+GMSM can be used to generate better quality images
from the CelebA dataset at 128x128 pixel resolution than the original BEGAN model in quantifiable ways.
2 IMPROVING THE ENERGY-BASED MODEL OF GANS
2.1 BOUNDARY EQUILIBRIUM GENERATIVE ADVERSARIAL NETWORKS
The Boundary Equilibrium Generative Adversarial Network (BEGAN) makes a number of modifications to the original energy-based approach. However, the most important contribution can be summarized in its changes to Equation 2. In place of the hinge loss, Berthelot et al. (2017) use the Wasserstein distance between the autoencoder reconstruction loss distributions of G and D. They also add three new hyper-parameters in place of m: kt, λk, and γ. Using an energy-based approach, we get the following new equation:
LD(x, z) = ED(x)− kt · ED(G(z)) (4)
The value of kt is then defined as:
kt+1 = kt + λk(γED(x)− ED(G(z))) for each t (5)
where kt ∈ [0, 1] is the emphasis put on ED(G(z)) at training step t for the gradient of ED, λk is the learning rate for k, and γ ∈ [0, 1]. Both Equations 2 and 4 describe the same phenomenon: the discriminator is doing well if either 1) it is properly reconstructing the real images or 2) it is detecting errors in the reconstruction of the generated images. Equation 4 just changes how the model achieves that goal. In the original equation (Equation 2), we punish the discriminator (LD → ∞) when the generated input is doing well (ED(G(z)) → 0). In Equation 4, we reward the discriminator (LD → 0) when the generated input is doing poorly (ED(G(z)) → ∞). What is also different between Equations 2 and 4 is the way their boundaries function. In Equation 2, m only acts as a one-directional boundary that removes the impact of the generated input on the discriminator if ED(G(z)) > m. In Equation 5, γED(x) functions in a similar but more complex way by adding a dependency on ED(x). Instead of 2 conditions on either side of the boundary m, there are now four:
1. If γED(x) > ED(G(z)) and ED(G(z)) → ∞, then LD → 0 and it is accelerating as kt → 1. 2. If γED(x) > ED(G(z)) and ED(G(z)) → 0, then LD → ED(x) and it is accelerating as kt → 1. 3. If γED(x) < ED(G(z)) and ED(G(z)) → ∞, then LD → 0 and it is decelerating as kt → 0. 4. If γED(x) < ED(G(z)) and ED(G(z)) → 0, then LD → ∞ and it is decelerating as kt → 0.
The optimal condition is condition 1 (Berthelot et al., 2017). Thus, the BEGAN model tries to keep the energy of the generated output approaching the limit of the energy of the real images. As the latter will change over the course of learning, the resulting boundary dynamically establishes an equilibrium between the energy state of the real and generated input.1
It is not particularly surprising that these modifications to Equation 2 show improvements. Zhao et al. (2016) devote an appendix section to the correct selection of m and explicitly mention that the “balance between... real and fake samples[s]” (italics theirs) is crucial to the correct selection of m. Unsurprisingly, a dynamically updated parameter that accounts for this balance is likely to be the best instantiation of the authors’ intuitions and visual inspection of the resulting output supports this (see Berthelot et al., 2017). We chose a slightly different approach to improving the proposed loss function by changing the original energy function (Equation 1).
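To make the boundary-equilibrium mechanism concrete, the following NumPy-style sketch (with variable names of our choosing) computes the discriminator and generator losses of Equations 3 and 4 from the scalar batch energies and performs the kt update of Equation 5; it is an illustration of the mechanism, not the authors' released code.

def began_step(E_real, E_fake, k, gamma=0.7, lambda_k=0.001):
    # One BEGAN objective step on scalar batch energies E_D(x) and E_D(G(z)).
    loss_D = E_real - k * E_fake                        # Equation 4
    loss_G = E_fake                                     # Equation 3
    k_next = k + lambda_k * (gamma * E_real - E_fake)   # Equation 5
    k_next = min(max(k_next, 0.0), 1.0)                 # keep k_t in [0, 1]
    return loss_D, loss_G, k_next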
2.2 FINDING A NEW ENERGY FUNCTION VIA IMAGE QUALITY ASSESSMENT
In the original description of the energy-based approach to GANs, the energy function was defined as the mean square error (MSE) of the reconstruction loss of the autoencoder (Equation 1). Our first
1For a much more detailed and formal account that is beyond the scope of the current paper, see (Berthelot et al., 2017).
insight was a trivial generalization of Equation 1:
E(x) = δ(D(x), x) (6)

where δ is some distance function. This more general equation suggests that there are many possible distance functions that could be used to describe the reconstruction error and that the selection of δ is itself a design decision for the resulting energy and loss functions. Not surprisingly, an entire field of study exists that focuses on the construction of similar δ functions in the image domain: the field of image quality assessment (IQA).
The field of IQA focuses on evaluating the quality of digital images (Wang & Bovik, 2006). IQA is a rich and diverse field that merits substantial further study. However, for the sake of this paper, we want to emphasize three important findings from this field. First, distance functions like δ are called full-reference IQA (or FR-IQA) functions because the reconstruction (D(x)) has a ‘true’ or undistorted reference image (x) which it can be evaluated from Wang et al. (2004). Second, IQA researchers have known for a long time that MSE is a poor indicator of image quality (Wang & Bovik, 2006). And third, there are numerous other functions that are better able to indicate image quality. We explain each of these points below.
One way to view the FR-IQA approach is in terms of a reference and distortion vector. In this view, an image is represented as a vector whose dimensions correspond with the pixels of the image. The reference image sets up the initial vector from the origin, which defines the original, perfect image. The distorted image is then defined as another vector defined from the origin. The vector that maps the reference image to the distorted image is called the distortion vector and FR-IQA studies how to evaluate different types of distortion vectors. In terms of our energy-based approach and Equation 6, the distortion vector is measured by δ and it defines the surface of the energy function.
MSE is one of the ways to measure distortion vectors. It is based in a paradigm that views the loss of quality in an image in terms of the visibility of an error signal, which MSE quantifies. Problematically, it has been shown that MSE actually only defines the length of a distortion vector not its type (Wang & Bovik, 2006). For any given reference image vector, there are an entire hypersphere of other image vectors that can be reached by a distortion vector of a given size (i.e., that all have the same MSE from the reference image; see Figure 1).
A number of different measurement techniques have been created that improve upon MSE (for a review, see Chandler, 2013). Often these techniques are defined in terms of the similarity (S) between the reference and distorted image, where δ = 1−S. One of the most notable improvements is the Structural Similarity Index (SSIM), which measures the similarity of the luminance, contrast, and structure of the reference and distorted image using the following similarity function:2
S(vd, vr) = (2 vd vr + C) / (vd² + vr² + C) (7)
where vd is the distorted image vector, vr is the reference image vector, C is a constant, and all multiplications occur element-wise Wang & Bovik (2006).3 This function has a number of desirable
2The SSIM similarity function is reminiscent of the Dice-Sorensen distance function. It is worth noting that the Dice-Sorensen distance function does not satisfy the triangle inequality for sets Gragera & Suppakitpaisarn (2016). Since sets are a restricted case for Equation 7, where all the values are either 0 or 1, we can conclude that the corresponding distance of Equation 7 also fails to satisfy the triangle inequality. Consequently, it is not a true distance metric.
3We use C = 0.0026 following the work on cQS described below Gupta et al. (2017).
features. It is symmetric (i.e., S(vd, vr) = S(vr, vd)), bounded above by 1 (and below by 0 for non-negative inputs), and it has a unique maximum of 1 only when vd = vr. Although we chose not to use SSIM as our energy function (δ), as it can only handle black-and-white images, its similarity function (Equation 7) informs our chosen technique.
The above discussion provides some insights into why visual inspection fails to show this correspondence between real and generated output of the resulting models, even though Zhao et al. (2016) proved that the generator should produce samples that cannot be distinguished from the dataset. The original proof by Zhao et al. (2016) did not account for Equation 1. Thus, when Zhao et al. (2016) show that their generated output should be indistinguishable from real images, what they are actually showing is that it should be indistinguishable from the real images plus some residual distortion vector described by δ. Yet, we have just shown that MSE (the author’s chosen δ) can only constrain the length of the distortion vector, not its type. Consequently, it is entirely possible for two systems using MSE for δ to have both reached a Nash equilibrium, have the same energy distribution, and yet have radically different internal representations of the learned images. The energy function is as important as the loss function for defining the data distribution.
2.3 A NEW ENERGY FUNCTION
Rather than assume that any one distance function would suffice to represent all of the various features of real images, we chose to use a multi-component approach for defining δ. In place of the luminance, contrast, and structural similarity of SSIM, we chose to evaluate the l1 norm, the gradient magnitude similarity score (GMS), and a chrominance similarity score (Chrom). We outline the latter two in more detail below.
The GMS score and chrom scores derive from an FR-IQA model called the color Quality Score (cQS; Gupta et al., 2017). The cQS uses GMS and chrom as its two components. First, it converts images to the YIQ color space model. In this model, the three channels correspond to the luminance information (Y) and the chrominance information (I and Q). Second, GMS is used to evaluate the local gradients across the reference and distorted images on the luminance dimension in order to compare their edges. This is performed by convolving a 3 × 3 Sobel filter in both the horizontal and vertical directions of each image to get the corresponding gradients. The horizontal and vertical gradients are then collapsed to the gradient magnitude of each image using the Euclidean distance.4 The similarity between the gradient magnitudes of the reference and distorted image are then compared using Equation 7. Third, Equation 7 is used to directly compute the similarity between the I and Q color dimensions of each image. The mean is then taken of the GMS score (resulting in the GMSM score) and the combined I and Q scores (resulting in the Chrom score).
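A minimal sketch of these two components is given below; it assumes the images have already been converted to the YIQ color space and uses SciPy's 2-D convolution for the Sobel gradients, so it is an illustrative approximation of the cQS components rather than the authors' exact implementation.

import numpy as np
from scipy.signal import convolve2d

C = 0.0026  # similarity constant, following Gupta et al. (2017)

def similarity(a, b, c=C):
    # Element-wise similarity of Equation 7.
    return (2.0 * a * b + c) / (a ** 2 + b ** 2 + c)

def gradient_magnitude(y):
    # Gradient magnitude of a luminance channel via 3x3 Sobel filters.
    sobel_x = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    gx = convolve2d(y, sobel_x, mode="same", boundary="symm")
    gy = convolve2d(y, sobel_x.T, mode="same", boundary="symm")
    return np.sqrt(gx ** 2 + gy ** 2)

def gmsm_and_chrom(ref_yiq, dist_yiq):
    # Mean gradient magnitude similarity (GMSM) and mean chrominance similarity (Chrom).
    gms = similarity(gradient_magnitude(ref_yiq[..., 0]),
                     gradient_magnitude(dist_yiq[..., 0]))
    chrom_i = similarity(ref_yiq[..., 1], dist_yiq[..., 1])
    chrom_q = similarity(ref_yiq[..., 2], dist_yiq[..., 2])
    return gms.mean(), 0.5 * (chrom_i.mean() + chrom_q.mean())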
In order to experimentally evaluate how each of the different components contribute to the underlying image representations, we defined the following, multi-component energy function:
ED(x) = ( Σδ∈D βδ · δ(D(x), x) ) / ( Σδ∈D βδ ) (8)

where βδ is the weight that determines the proportion of each δ to include for a given model, and D includes the l1 norm, GMSM, and the chrominance part of cQS as individual δs. In what follows, we experimentally evaluate each of the energy function components (β) and some of their combinations.
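Given the per-image component distances (each computed as one minus the corresponding similarity), Equation 8 reduces to a weighted average; a short sketch with weight names matching the β columns of Table 1 is shown below. Setting βl1 = 2, βGMSM = 1, βchrom = 0, for example, recovers the scaled BEGAN+GMSM energy used by models 6, 10, and 12.

import numpy as np

def multi_component_energy(l1_dist, gms_dist, chrom_dist,
                           beta_l1=1.0, beta_gms=0.0, beta_chrom=0.0):
    # Equation 8: weighted combination of the component distances.
    betas = np.array([beta_l1, beta_gms, beta_chrom])
    dists = np.array([l1_dist, gms_dist, chrom_dist])
    return float(np.dot(betas, dists) / betas.sum())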
3 EXPERIMENTS
3.1 METHOD
We conducted extensive quantitative and qualitative evaluation on the CelebA dataset of face images Liu et al. (2015). This dataset has been used frequently in the past for evaluating GANs Radford et al. (2015); Zhao et al. (2016); Chen et al. (2016); Liu & Tuzel (2016). We evaluated 12 different models in a number of combinations (see Table 1). They are as follows. Models 1, 7, and 11 are the original BEGAN model. Models 2 and 3 only use the GMSM and chrominance distance functions, respectively. Models 4 and 8 are the BEGAN model plus GMSM. Models 5 and 9 use all three
4For a detailed outline of the original GMS function, see Xue et al. (2014).
Table 1 (excerpt): configurations of models 10-12.
Model | Image size | γ | βl1 | βGMSM | βchrom
10 | 64 | 0.7 | 2 | 1 | 0
11 | 128 | 0.7 | 1 | 0 | 0
12 | 128 | 0.7 | 2 | 1 | 0
distance functions (BEGAN+GMSM+Chrom). Models 6, 10, and 12 use a ’scaled’ BEGAN model (βl1 = 2) with GMSM. All models with different model numbers but the same βd values differ in their γ values or the output image size.
3.2 SETUP
All of the models we evaluate in this paper are based on the architecture of the BEGAN model Berthelot et al. (2017).5 We trained the models using Adam with a batch size of 16, β1 of 0.9, β2 of 0.999, and an initial learning rate of 0.00008, which decayed by a factor of 2 every 100,000 epochs.
Parameters λk and k0 were set to 0.001 and 0, respectively (see Equation 5). The γ parameter was set relative to the model (see Table 1).
Most of our experiments were performed on 64 × 64 pixel images with a single set of tests run on 128 × 128 images. The number of convolution layers were 3 and 4, respectively, with a constant down-sampled size of 8 × 8. We found that the original size of 64 for the input vector (Nz) and hidden state (Nh) resulted in modal collapse for the models using GMSM. However, we found that this was fixed by increasing the input size to 128 and 256 for the 64 and 128 pixel images, respectively. We used Nz = 128 for all models except 12 (scaled BEGAN+GMSM), which used 256. Nz always equaled Nh in all experiments.
Models 2-3 were run for 18,000 epochs, 1 and 4-10 were run for 100,000 epochs, and 11-12 were run for 300,000 epochs. Models 2-4 suffered from modal collapse immediately and 5 (BEGAN+GMSM+Chrom) collapsed around epoch 65,000 (see Appendix A Figure 4 rows 2-5).
3.3 EVALUATIONS
We performed two evaluations. First, to evaluate whether and to what extent the models were able to capture the relevant properties of each associated distance function, we compared the mean and standard deviation of the error scores. We calculated them for each distance function over all epochs of all models. We chose to use the mean rather than the minimum score as we were interested in how each model performs as a whole, rather than at some specific epoch. All calculations use the distance, or one minus the corresponding similarity score, for both the gradient magnitude and chrominance values.
Reduced pixelation is an artifact of the intensive scaling for image presentation (up to 4×). All images in the qualitative evaluations were upscaled from their original sizes using cubic image sampling so that they can be viewed at larger sizes. Consequently, the apparent smoothness of the scaled images is not a property of the model.
5The code for the model and all related experiments are currently available on Github. Links will be included post-review.
3.4 RESULTS
GANs are used to generate different types of images. Which image components are important depends on the domain of these images. Our results suggest that models used in any particular GAN application should be customized to emphasize the relevant components—there is not a one-sizefits-all component choice. We discuss the results of our four evaluations below.
3.4.1 MEANS AND STANDARD DEVIATIONS OF ERROR SCORES
Results were as expected: the three different distance functions captured different features of the underlying image representations. We compared all of the models in terms of their means and standard deviations of the error score of the associated distance functions (see Table 2). In particular, each of models 1-3 only used one of the distance functions and had the lowest error for the associated function (e.g., model 2 was trained with GMSM and has the lowest GMSM error score). Models 4-6 expanded on the first three models by examining the distance functions in different combinations. Model 5 (BEGAN+GMSM+Chrom) had the lowest chrominance error score and Model 6 (scaled BEGAN+GMSM) had the lowest scores for l1 and GMSM of any model using a γ of 0.5.
For the models with γ set at 0.7, models 7-9 showed similar results to the previous scores. Model 8 (BEGAN+GMSM) scored the lowest GMSM score overall and model 9 (BEGAN+GMSM+Chrom) scored the lowest chrominance score of the models that did not suffer from modal collapse. For the two models that were trained to generate 128 × 128 pixel images, model 12 (scaled BEGAN+GMSM) had the lowest error scores for l1 and GMSM, and model 11 (BEGAN) had the lowest score for chrominance. Model 12 had the lowest l1 score, overall.
3.4.2 VISUAL COMPARISON OF SIMILARITY SCORES
Subjective visual comparison of the gradient magnitudes in column S of Figure 2 shows there are more black pixels for model 11 (row 11D) when comparing real images before and after autoencoding. This indicates a lower similarity in the autoencoder. Model 12 (row 12D) has a higher similarity between the original and autoencoded real images as indicated by fewer black pixels. This pattern continues for the generator output (rows 11G and 12G), but with greater similarity between the gradients of the original and autoencoded images than the real images (i.e., fewer black pixels overall).
The visual comparison of chrominance and related similarity score also weakly supported our hypotheses (see Figure 3). All of the models show a strong ability to capture the I dimension (blue-red)
of the YIQ color space, but only model 9 (BEGAN+GMSM+Chrom) is able to accurately capture the relevant information in the Q dimension (green-purple).
4 OUTLOOK
We presented an energy-based formulation of the BEGAN model and highlighted some of the problems of the energy function originally proposed in Zhao et al. (2016). We proposed a new, multi-component energy function on the basis of research from the Image Quality Assessment literature. The scaled BEGAN+GMSM model produces better image representations than its competitors in ways that can be measured using subjective evaluations of the associated features (e.g., luminance gradient similarity, chrominance similarity). For future work, we would like to extend this research to encompass other datasets and FR-IQA energy functions.
B FURTHER EVALUATIONS
B.1 DIVERSITY OF LATENT SPACE
Further evidence that the models can generalize, and not merely memorize the input, can be seen in the linear interpolations in the latent space of z. In Figure 5 models 11 (BEGAN) and 12 (scaled BEGAN+GMSM) show smooth interpolation in gender, rotation, facial expression, hairstyle, and angle of the face.
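Such an interpolation is simple to reproduce; a minimal sketch (Python/NumPy) is given below, in which the generator call is a placeholder for whichever trained model is being probed and the latent dimensionality of 128 matches the Nz used for most models.

import numpy as np

def interpolate_latents(z_start, z_end, num_steps=10):
    """Return linearly interpolated latent vectors between two endpoints."""
    alphas = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - a) * z_start + a * z_end for a in alphas]

# Example: interpolate between two random 128-dimensional latent vectors
# and feed each point to a generator (placeholder call commented out).
rng = np.random.default_rng(0)
z0, z1 = rng.uniform(-1, 1, 128), rng.uniform(-1, 1, 128)
for z in interpolate_latents(z0, z1):
    pass  # image = generator(z)  -- placeholder for the trained generator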
B.2 THE BEGAN CONVERGENCE MEASURE
We compared the convergence measure scores for models 11 and 12 across all 300,000 epochs (see Figure 6; Berthelot et al. 2017). The convergence measure is defined as follows
Mglobal = ED(x) + |γED(x)− ED(G(z))| (9)
where the energy function is defined as per Equation 8. Due to the variance in this measure, we applied substantial Gaussian smoothing (σ = 0.9) to enhance the main trends. The output of a single generated image is also included for every 40,000 epochs, starting with epoch 20,000 and ending on epoch 300,000. Model 11 showed better (greater) convergence over the 300,000 epochs (as indicated by a lower convergence measure score). Both models continue to show that the convergence measure correlates with better images as the models converge. | 1. What is the focus of the paper, particularly regarding the proposed energy-based formulation?
2. How does the modified BEGAN model incorporate an image quality assessment term?
3. Can you elaborate on the training process and parameter settings used for the model?
4. What are the strengths and weaknesses of the paper regarding its clarity and presentation?
5. How does the reviewer assess the novelty and originality of the proposed approach?
6. What are the advantages and disadvantages of the modified BEGAN model compared to the baseline models? | Review | Review
Quick summary:
This paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term. The model is then trained with CelebA under different parameters settings and results are analyzed.
Quality and significance:
This is quite a technical paper, written in a very compressed form, and it is a bit hard to follow. In particular, it is hard to estimate what the contribution of the model is and how the results differ from those of the baseline models.
Clarity:
I would say this is one of the weak points of the paper - the paper is not well motivated and the results are not clearly presented.
Originality:
Seems original.
Pros:
* Interesting energy formulation and variation on BEGAN
Cons:
* Not a clear paper
* results are only partially motivated and analyzed |
ICLR | Title
Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks
Abstract
We propose a new, multi-component energy function for energy-based Generative Adversarial Networks (GANs) based on methods from the image quality assessment literature. Our approach expands on the Boundary Equilibrium Generative Adversarial Network (BEGAN) by outlining some of the short-comings of the original energy and loss functions. We address these short-comings by incorporating an l1 score, the Gradient Magnitude Similarity score, and a chrominance score into the new energy function. We then provide a set of systematic experiments that explore its hyper-parameters. We show that each of the energy function’s components is able to represent a slightly different set of features, which require their own evaluation criteria to assess whether they have been adequately learned. We show that models using the new energy function are able to produce better image representations than the BEGAN model in predicted ways.
1 INTRODUCTION
1.1 IMPROVING LEARNED REPRESENTATIONS FOR GENERATIVE MODELING
Radford et al. (2015) demonstrated that Generative Adversarial Networks (GANs) are a good unsupervised technique for learning representations of images for the generative modeling of 2D images. Since then, a number of improvements have been made. First, Zhao et al. (2016) modified the error signal of the deep neural network from the original, single-parameter criterion to a multi-parameter criterion using auto-encoder reconstruction loss. Berthelot et al. (2017) then further modified the loss function from a hinge loss to the Wasserstein distance between loss distributions. For each modification, the proposed changes improved the resulting output under visual inspection (see Appendix A Figure 4, Row 1 for the output of the most recent BEGAN model). We propose a new loss function, building on the changes of the BEGAN model, that further modifies the loss function to handle a broader range of image features within its internal representation; we call the resulting model the scaled BEGAN GMSM.
1.2 GENERATIVE ADVERSARIAL NETWORKS
Generative Adversarial Networks are a form of two-sample or hypothesis testing that uses a classifier, called a discriminator, to distinguish between observed (training) data and data generated by the model or generator. Training is then simplified to a competing (i.e., adversarial) objective between the discriminator and generator, where the discriminator is trained to better differentiate training from generated data, and the generator is trained to better trick the discriminator into thinking its generated data is real. The convergence of a GAN is achieved when the generator and discriminator reach a Nash equilibrium, from a game theory point of view (Zhao et al., 2016).
In the original GAN specification, the task is to learn the generator’s distribution pG over data x (Goodfellow et al., 2014). To accomplish this, one defines a generator function G(z; θG), which produces an image using a noise vector z as input, and G is a differentiable function with parameters θG. The discriminator is then specified as a second function D(x; θD) that outputs a scalar representing the probability that x came from the data rather than pG. D is then trained to maximize the probability of assigning the correct labels to the data and the image output of G while G
is trained to minimize the probability that D assigns its output to the fake class, or 1 − D(G(z)). Although G and D can be any differentiable functions, we will only consider deep convolutional neural networks in what follows.
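As a reference point for the modifications discussed next, the original adversarial objective can be written in a few lines. The sketch below (Python/NumPy) is illustrative only: D_real and D_fake stand for the discriminator outputs D(x) and D(G(z)) on a batch, given here as plain arrays rather than network outputs.

import numpy as np

def gan_losses(D_real, D_fake, eps=1e-8):
    """Original minimax GAN losses for one batch.

    D_real: discriminator outputs on real data, i.e. D(x), in (0, 1)
    D_fake: discriminator outputs on generated data, i.e. D(G(z)), in (0, 1)
    """
    # The discriminator maximizes log D(x) + log(1 - D(G(z))); we minimize the negative.
    loss_D = -np.mean(np.log(D_real + eps) + np.log(1.0 - D_fake + eps))
    # The generator minimizes log(1 - D(G(z))), i.e. tries to make D call its output real.
    loss_G = np.mean(np.log(1.0 - D_fake + eps))
    return loss_D, loss_G

print(gan_losses(np.array([0.9, 0.8]), np.array([0.2, 0.3])))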
Zhao et al. (2016) initially proposed a shift from the original single-dimensional criterion—the scalar class probability—to a multidimensional criterion by constructing D as an autoencoder. The image output by the autoencoder can then be directly compared to the output of G using one of the many standard distance functions (e.g., l1 norm, mean square error). However, Zhao et al. (2016) also proposed a new interpretation of the underlying GAN architecture in terms of an energy-based model (LeCun et al., 2006).
1.3 ENERGY-BASED GENERATIVE ADVERSARIAL NETWORKS
The basic idea of energy-based models (EBMs) is to map an input space to a single scalar or set of scalars (called its “energy”) via the construction of a function (LeCun et al., 2006). Learning in this framework modifies the energy surface such that desirable pairings get low energies while undesirable pairings get high energies. This framework allows for the interpretation of the discriminator (D) as an energy function that lacks any explicit probabilistic interpretation (Zhao et al., 2016). In this view, the discriminator is a trainable cost function for the generator that assigns low energy values to regions of high data density and high energy to the opposite. The generator is then interpreted as a trainable parameterized function that produces samples in regions assigned low energy by the discriminator. To accomplish this setup, Zhao et al. (2016) first define the discriminator’s energy function as the mean square error of the reconstruction loss of the autoencoder, or:
ED(x) = ||Decoder(Encoder(x))− x|| (1)
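A minimal sketch of this reconstruction energy (Python/NumPy) is given below; the encoder and decoder are placeholder functions, and the choice between a squared and an absolute error is made explicit because the paper later swaps in other distance functions.

import numpy as np

def reconstruction_energy(x, encoder, decoder, squared=True):
    """E_D(x): reconstruction error of the discriminator's autoencoder (Equation 1)."""
    recon = decoder(encoder(x))
    err = recon - x
    if squared:
        return np.mean(err ** 2)     # mean square error, as described in the text
    return np.mean(np.abs(err))      # l1 alternative used later in the paper

# Example with identity encoder/decoder, which yields zero energy.
x = np.random.rand(16, 64, 64, 3)
print(reconstruction_energy(x, lambda v: v, lambda v: v))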
Zhao et al. (2016) then define the loss function for their discriminator using a form of margin loss.
LD(x, z) = ED(x) + [m− ED(G(z))]+ (2)
where m is a constant and [·]+ = max(0, ·). They define the loss function for their generator:
LG(z) = ED(G(z)) (3)
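Given scalar energies for a real and a generated batch, Equations 2 and 3 reduce to a few lines; the sketch below (Python) treats m as the margin hyper-parameter and the energy values as already computed.

def ebgan_losses(E_real, E_fake, m):
    """Margin losses of Equations 2 and 3.

    E_real: E_D(x), energy of a real batch
    E_fake: E_D(G(z)), energy of a generated batch
    m:      margin constant
    """
    hinge = max(0.0, m - E_fake)   # [m - E_D(G(z))]_+
    loss_D = E_real + hinge        # Equation 2
    loss_G = E_fake                # Equation 3
    return loss_D, loss_G

print(ebgan_losses(E_real=0.05, E_fake=0.02, m=0.1))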
The authors then prove that, if the system reaches a Nash equilibrium, then the generator will produce samples that cannot be distinguished from the dataset. Problematically, simple visual inspection can easily distinguish the generated images from the dataset.
1.4 DEFINING THE PROBLEM
It is clear that, despite the mathematical proof of Zhao et al. (2016), humans can distinguish the images generated by energy-based models from real images. There are two direct approaches that could provide insight into this problem, both of which are outlined in the original paper. The first approach that is discussed by Zhao et al. (2016) changes Equation 2 to allow for better approximations than m. The BEGAN model takes this approach. The second approach addresses Equation 1, but was only implicitly addressed when (Zhao et al., 2016) chose to change the original GAN to use the reconstruction error of an autoencoder instead of a binary logistic energy function. We chose to take the latter approach while building on the work of BEGAN.
Our main contributions are as follows:
• An energy-based formulation of BEGAN's solution to the visual problem.
• An energy-based formulation of the problems with Equation 1.
• Experiments that explore the different hyper-parameters of the new energy function.
• Evaluations that provide greater detail into the learned representations of the model.
• A demonstration that scaled BEGAN+GMSM can be used to generate better quality images from the CelebA dataset at 128x128 pixel resolution than the original BEGAN model in quantifiable ways.
2 IMPROVING THE ENERGY-BASED MODEL OF GANS
2.1 BOUNDARY EQUILIBRIUM GENERATIVE ADVERSARIAL NETWORKS
The Boundary Equilibrium Generative Adversarial Network (BEGAN) makes a number of modifications to the original energy-based approach. However, the most important contribution can be summarized in its changes to Equation 2. In place of the hinge loss, Berthelot et al. (2017) use the Wasserstein distance between the autoencoder reconstruction loss distributions of G and D. They also add three new hyper-parameters in place of m: kt, λk, and γ. Using an energy-based approach, we get the following new equation:
LD(x, z) = ED(x)− kt · ED(G(z)) (4)
The value of kt is then defined as:
kt+1 = kt + λk(γED(x)− ED(G(z))) for each t (5)
where kt ∈ [0, 1] is the emphasis put on E(G(z)) at training step t for the gradient of ED, λk is the learning rate for k, and γ ∈ [0, 1]. Both Equations 2 and 4 are describing the same phenomenon: the discriminator is doing well if either 1) it is properly reconstructing the real images or 2) it is detecting errors in the reconstruction of the generated images. Equation 4 just changes how the model achieves that goal. In the original equation (Equation 2), we punish the discriminator (LD → ∞) when the generated input is doing well (ED(G(z)) → 0). In Equation 4, we reward the discriminator (LD → 0) when the generated input is doing poorly (ED(G(z))→∞). What is also different between Equations 2 and 4 is the way their boundaries function. In Equation 2, m only acts as a one directional boundary that removes the impact of the generated input on the discriminator if ED(G(z)) > m. In Equation 5, γED(x) functions in a similar but more complex way by adding a dependency to ED(x). Instead of 2 conditions on either side of the boundary m, there are now four:
1. If γED(x) > ED(G(z)) and ED(G(z)) → ∞, then LD → 0 and it is accelerating as kt → 1.
2. If γED(x) > ED(G(z)) and ED(G(z)) → 0, then LD → ED(x) and it is accelerating as kt → 1.
3. If γED(x) < ED(G(z)) and ED(G(z)) → ∞, then LD → 0 and it is decelerating as kt → 0.
4. If γED(x) < ED(G(z)) and ED(G(z)) → 0, then LD → ∞ and it is decelerating as kt → 0.
The optimal condition is condition 1 Berthelot et al. (2017). Thus, the BEGAN model tries to keep the energy of the generated output approaching the limit of the energy of the real images. As the latter will change over the course of learning, the resulting boundary dynamically establishes an equilibrium between the energy state of the real and generated input.1
It is not particularly surprising that these modifications to Equation 2 show improvements. Zhao et al. (2016) devote an appendix section to the correct selection of m and explicitly mention that the “balance between... real and fake samples[s]” (italics theirs) is crucial to the correct selection of m. Unsurprisingly, a dynamically updated parameter that accounts for this balance is likely to be the best instantiation of the authors’ intuitions and visual inspection of the resulting output supports this (see Berthelot et al., 2017). We chose a slightly different approach to improving the proposed loss function by changing the original energy function (Equation 1).
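Putting Equations 4 and 5 together, the boundary-equilibrium bookkeeping for one training step can be sketched as follows (Python). The energies are assumed to be scalars already computed from the autoencoder, the generator loss is taken to be ED(G(z)) as in Equation 3, and the default hyper-parameter values (γ = 0.5, λk = 0.001, k0 = 0) mirror those reported in Section 3.2 and Table 1 rather than anything specific to this sketch.

def began_step(E_real, E_fake, k, gamma=0.5, lambda_k=0.001):
    """One update of the BEGAN discriminator/generator losses and of k_t.

    E_real: E_D(x) for the current batch
    E_fake: E_D(G(z)) for the current batch
    k:      current value of k_t in [0, 1]
    """
    loss_D = E_real - k * E_fake                          # Equation 4
    loss_G = E_fake                                       # generator loss, as in Equation 3
    balance = gamma * E_real - E_fake                     # term driving the equilibrium
    k_next = min(max(k + lambda_k * balance, 0.0), 1.0)   # Equation 5, clipped to [0, 1]
    return loss_D, loss_G, k_next

# Example: start from k_0 = 0 and apply one step.
print(began_step(E_real=0.08, E_fake=0.05, k=0.0))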
2.2 FINDING A NEW ENERGY FUNCTION VIA IMAGE QUALITY ASSESSMENT
In the original description of the energy-based approach to GANs, the energy function was defined as the mean square error (MSE) of the reconstruction loss of the autoencoder (Equation 1). Our first
1For a much more detailed and formal account that is beyond the scope of the current paper, see (Berthelot et al., 2017).
insight was a trivial generalization of Equation 1:
E(x) = δ(D(x), x) (6)
where δ is some distance function. This more general equation suggests that there are many possible distance functions that could be used to describe the reconstruction error and that the selection of δ is itself a design decision for the resulting energy and loss functions. Not surprisingly, an entire field of study exists that focuses on the construction of similar δ functions in the image domain: the field of image quality assessment (IQA).
The field of IQA focuses on evaluating the quality of digital images (Wang & Bovik, 2006). IQA is a rich and diverse field that merits substantial further study. However, for the sake of this paper, we want to emphasize three important findings from this field. First, distance functions like δ are called full-reference IQA (or FR-IQA) functions because the reconstruction (D(x)) has a ‘true’ or undistorted reference image (x) against which it can be evaluated (Wang et al., 2004). Second, IQA researchers have known for a long time that MSE is a poor indicator of image quality (Wang & Bovik, 2006). And third, there are numerous other functions that are better able to indicate image quality. We explain each of these points below.
One way to view the FR-IQA approach is in terms of a reference and distortion vector. In this view, an image is represented as a vector whose dimensions correspond with the pixels of the image. The reference image sets up the initial vector from the origin, which defines the original, perfect image. The distorted image is then defined as another vector defined from the origin. The vector that maps the reference image to the distorted image is called the distortion vector and FR-IQA studies how to evaluate different types of distortion vectors. In terms of our energy-based approach and Equation 6, the distortion vector is measured by δ and it defines the surface of the energy function.
MSE is one of the ways to measure distortion vectors. It is based in a paradigm that views the loss of quality in an image in terms of the visibility of an error signal, which MSE quantifies. Problematically, it has been shown that MSE actually only defines the length of a distortion vector not its type (Wang & Bovik, 2006). For any given reference image vector, there are an entire hypersphere of other image vectors that can be reached by a distortion vector of a given size (i.e., that all have the same MSE from the reference image; see Figure 1).
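A quick numerical illustration of this point (Python/NumPy): two very different distortions of the same reference vector, one concentrated in a single coordinate and one spread evenly, have identical MSE because MSE only fixes the length of the distortion vector.

import numpy as np

reference = np.zeros(4)

# Two distortion vectors with the same Euclidean length (2.0):
# one concentrates all the error in a single "pixel", the other spreads it evenly.
distortion_a = np.array([2.0, 0.0, 0.0, 0.0])
distortion_b = np.array([1.0, 1.0, 1.0, 1.0])

def mse(v):
    return np.mean((v - reference) ** 2)

print(mse(reference + distortion_a), mse(reference + distortion_b))  # both 1.0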
A number of different measurement techniques have been created that improve upon MSE (for a review, see Chandler, 2013). Often these techniques are defined in terms of the similarity (S) between the reference and distorted image, where δ = 1−S. One of the most notable improvements is the Structural Similarity Index (SSIM), which measures the similarity of the luminance, contrast, and structure of the reference and distorted image using the following similarity function:2
S(vd, vr) = (2 · vd · vr + C) / (vd² + vr² + C) (7)
where vd is the distorted image vector, vr is the reference image vector, C is a constant, and all multiplications occur element-wise Wang & Bovik (2006).3 This function has a number of desirable
2The SSIM similarity function is reminiscent of the Dice-Sorensen distance function. It is worth noting that the Dice-Sorensen distance function does not satisfy the triangle inequality for sets Gragera & Suppakitpaisarn (2016). Since sets are a restricted case for Equation 7, where all the values are either 0 or 1, we can conclude that the corresponding distance of Equation 7 also fails to satisfy the triangle inequality. Consequently, it is not a true distance metric.
3We use C = 0.0026 following the work on cQS described below Gupta et al. (2017).
features. It is symmetric (i.e., S(vd, vr) = S(vr, vd)), bounded above by 1 (and below by 0 for positive inputs), and it has a unique maximum of 1 only when vd = vr. Although we chose not to use SSIM as our energy function (δ), as it can only handle black-and-white images, its similarity function (Equation 7) informs our chosen technique.
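The similarity function of Equation 7 is straightforward to evaluate element-wise; a sketch (Python/NumPy) follows, using the C value given in footnote 3.

import numpy as np

def similarity_map(v_d, v_r, C=0.0026):
    """Element-wise similarity of Equation 7 between distorted and reference values."""
    return (2.0 * v_d * v_r + C) / (v_d ** 2 + v_r ** 2 + C)

# Identical inputs give the maximum similarity of 1 everywhere.
v = np.random.rand(8, 8)
print(np.allclose(similarity_map(v, v), 1.0))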
The above discussion provides some insights into why visual inspection fails to show this correspondence between real and generated output of the resulting models, even though Zhao et al. (2016) proved that the generator should produce samples that cannot be distinguished from the dataset. The original proof by Zhao et al. (2016) did not account for Equation 1. Thus, when Zhao et al. (2016) show that their generated output should be indistinguishable from real images, what they are actually showing is that it should be indistinguishable from the real images plus some residual distortion vector described by δ. Yet, we have just shown that MSE (the author’s chosen δ) can only constrain the length of the distortion vector, not its type. Consequently, it is entirely possible for two systems using MSE for δ to have both reached a Nash equilibrium, have the same energy distribution, and yet have radically different internal representations of the learned images. The energy function is as important as the loss function for defining the data distribution.
2.3 A NEW ENERGY FUNCTION
Rather than assume that any one distance function would suffice to represent all of the various features of real images, we chose to use a multi-component approach for defining δ. In place of the luminance, contrast, and structural similarity of SSIM, we chose to evaluate the l1 norm, the gradient magnitude similarity score (GMS), and a chrominance similarity score (Chrom). We outline the latter two in more detail below.
The GMS score and chrom scores derive from an FR-IQA model called the color Quality Score (cQS; Gupta et al., 2017). The cQS uses GMS and chrom as its two components. First, it converts images to the YIQ color space model. In this model, the three channels correspond to the luminance information (Y) and the chrominance information (I and Q). Second, GMS is used to evaluate the local gradients across the reference and distorted images on the luminance dimension in order to compare their edges. This is performed by convolving a 3 × 3 Sobel filter in both the horizontal and vertical directions of each image to get the corresponding gradients. The horizontal and vertical gradients are then collapsed to the gradient magnitude of each image using the Euclidean distance.4 The similarity between the gradient magnitudes of the reference and distorted image are then compared using Equation 7. Third, Equation 7 is used to directly compute the similarity between the I and Q color dimensions of each image. The mean is then taken of the GMS score (resulting in the GMSM score) and the combined I and Q scores (resulting in the Chrom score).
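The following sketch (Python, using NumPy and SciPy) walks through the steps just described: an approximate RGB-to-YIQ conversion, 3 × 3 Sobel gradients on the luminance channel, gradient-magnitude similarity via Equation 7, and chrominance similarity on the I and Q channels. The conversion matrix is a standard NTSC approximation and the boundary handling of the convolution is an implementation choice, neither taken from the authors' code.

import numpy as np
from scipy.signal import convolve2d

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def rgb_to_yiq(img):
    """Approximate NTSC RGB -> YIQ conversion for an HxWx3 image in [0, 1]."""
    m = np.array([[0.299, 0.587, 0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523, 0.312]])
    return img @ m.T

def similarity(a, b, C=0.0026):
    return (2 * a * b + C) / (a ** 2 + b ** 2 + C)   # Equation 7, element-wise

def gradient_magnitude(lum):
    gx = convolve2d(lum, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(lum, SOBEL_X.T, mode="same", boundary="symm")
    return np.sqrt(gx ** 2 + gy ** 2)                # Euclidean collapse of the gradients

def gmsm_and_chrom(reference_rgb, distorted_rgb):
    ref, dis = rgb_to_yiq(reference_rgb), rgb_to_yiq(distorted_rgb)
    gms_map = similarity(gradient_magnitude(dis[..., 0]),
                         gradient_magnitude(ref[..., 0]))
    gmsm = gms_map.mean()                            # mean of the GMS map
    chrom = np.mean([similarity(dis[..., c], ref[..., c]).mean() for c in (1, 2)])
    return gmsm, chrom

img = np.random.rand(32, 32, 3)
print(gmsm_and_chrom(img, img))                      # ~ (1.0, 1.0) for identical images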
In order to experimentally evaluate how each of the different components contribute to the underlying image representations, we defined the following, multi-component energy function:
ED = [ ∑δ∈D δ(D(x), x) · βd ] / [ ∑δ∈D βd ] (8)
where βd is the weight that determines the proportion of each δ to include for a given model, and D includes the l1 norm, GMSM, and the chrominance part of cQS as individual δs. In what follows, we experimentally evaluate each of the energy function components (β) and some of their combinations.
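Equation 8 then amounts to a weighted average over the chosen distance components; a sketch (Python) is given below, with β weights chosen to mimic a scaled BEGAN+GMSM configuration (βl1 = 2, βGMSM = 1, βChrom = 0) purely as an example.

def combined_energy(distances, betas):
    """Multi-component energy of Equation 8.

    distances: dict mapping component name -> distance value for this batch
    betas:     dict mapping component name -> beta weight (0 disables a component)
    """
    weighted = sum(betas[name] * distances[name] for name in distances)
    total_weight = sum(betas[name] for name in distances)
    return weighted / total_weight

# Example: a scaled BEGAN+GMSM configuration.
distances = {"l1": 0.04, "gmsm": 0.10, "chrom": 0.02}
betas = {"l1": 2.0, "gmsm": 1.0, "chrom": 0.0}
print(combined_energy(distances, betas))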
3 EXPERIMENTS
3.1 METHOD
We conducted extensive quantitative and qualitative evaluation on the CelebA dataset of face images Liu et al. (2015). This dataset has been used frequently in the past for evaluating GANs Radford et al. (2015); Zhao et al. (2016); Chen et al. (2016); Liu & Tuzel (2016). We evaluated 12 different models in a number of combinations (see Table 1). They are as follows. Models 1, 7, and 11 are the original BEGAN model. Models 2 and 3 only use the GMSM and chrominance distance functions, respectively. Models 4 and 8 are the BEGAN model plus GMSM. Models 5 and 9 use all three
4For a detailed outline of the original GMS function, see Xue et al. (2014).
[Table 1, last rows; columns: model, image size, γ, βl1, βGMSM, βChrom]
10, 64, 0.7, 2, 1, 0
11, 128, 0.7, 1, 0, 0
12, 128, 0.7, 2, 1, 0
distance functions (BEGAN+GMSM+Chrom). Models 6, 10, and 12 use a ’scaled’ BEGAN model (βl1 = 2) with GMSM. All models with different model numbers but the same βd values differ in their γ values or the output image size.
3.2 SETUP
All of the models we evaluate in this paper are based on the architecture of the BEGAN model Berthelot et al. (2017).5 We trained the models using Adam with a batch size of 16, β1 of 0.9, β2 of 0.999, and an initial learning rate of 0.00008, which decayed by a factor of 2 every 100,000 epochs.
Parameters λk and k0 were set at 0.001 and 0, respectively (see Equation 5). The γ parameter was set relative to the model (see Table 1).
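For reference, the optimizer settings above can be reproduced with a few lines of PyTorch; this is a sketch of the configuration only, not the authors' training loop, and the small linear module stands in for whichever network (generator or discriminator) is being updated.

import torch

model = torch.nn.Linear(8, 8)  # stand-in for the generator or discriminator network

optimizer = torch.optim.Adam(model.parameters(),
                             lr=0.00008,          # initial learning rate from Section 3.2
                             betas=(0.9, 0.999))  # Adam beta_1 and beta_2
# Halve the learning rate every 100,000 epochs, as described above.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100000, gamma=0.5)

k, lambda_k = 0.0, 0.001  # k_0 and the learning rate for k (Equation 5)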
Most of our experiments were performed on 64 × 64 pixel images with a single set of tests run on 128 × 128 images. The number of convolution layers were 3 and 4, respectively, with a constant down-sampled size of 8 × 8. We found that the original size of 64 for the input vector (Nz) and hidden state (Nh) resulted in modal collapse for the models using GMSM. However, we found that this was fixed by increasing the input size to 128 and 256 for the 64 and 128 pixel images, respectively. We used Nz = 128 for all models except 12 (scaled BEGAN+GMSM), which used 256. Nz always equaled Nh in all experiments.
Models 2-3 were run for 18,000 epochs; models 1 and 4-10 were run for 100,000 epochs; and models 11-12 were run for 300,000 epochs. Models 2-4 suffered from modal collapse immediately, and model 5 (BEGAN+GMSM+Chrom) collapsed around epoch 65,000 (see Appendix A, Figure 4, rows 2-5).
3.3 EVALUATIONS
We performed two evaluations. First, to evaluate whether and to what extent the models were able to capture the relevant properties of each associated distance function, we compared the mean and standard deviation of the error scores. We calculated them for each distance function over all epochs of all models. We chose to use the mean rather than the minimum score as we were interested in how each model performs as a whole, rather than at some specific epoch. All calculations use the distance, or one minus the corresponding similarity score, for both the gradient magnitude and chrominance values.
Reduced pixelation is an artifact of the intensive scaling for image presentation (up to 4×). All images in the qualitative evaluations were upscaled from their original sizes using cubic image sampling so that they can be viewed at larger sizes. Consequently, the apparent smoothness of the scaled images is not a property of the model.
5The code for the model and all related experiments are currently available on Github. Links will be included post-review.
3.4 RESULTS
GANs are used to generate different types of images. Which image components are important depends on the domain of these images. Our results suggest that models used in any particular GAN application should be customized to emphasize the relevant components; there is not a one-size-fits-all component choice. We discuss the results of our evaluations below.
3.4.1 MEANS AND STANDARD DEVIATIONS OF ERROR SCORES
Results were as expected: the three different distance functions captured different features of the underlying image representations. We compared all of the models in terms of their means and standard deviations of the error score of the associated distance functions (see Table 2). In particular, each of models 1-3 only used one of the distance functions and had the lowest error for the associated function (e.g., model 2 was trained with GMSM and has the lowest GMSM error score). Models 4-6 expanded on the first three models by examining the distance functions in different combinations. Model 5 (BEGAN+GMSM+Chrom) had the lowest chrominance error score and Model 6 (scaled BEGAN+GMSM) had the lowest scores for l1 and GMSM of any model using a γ of 0.5.
For the models with γ set at 0.7, models 7-9 showed similar results to the previous scores. Model 8 (BEGAN+GMSM) scored the lowest GMSM score overall and model 9 (BEGAN+GMSM+Chrom) scored the lowest chrominance score of the models that did not suffer from modal collapse. For the two models that were trained to generate 128 × 128 pixel images, model 12 (scaled BEGAN+GMSM) had the lowest error scores for l1 and GMSM, and model 11 (BEGAN) had the lowest score for chrominance. Model 12 had the lowest l1 score, overall.
3.4.2 VISUAL COMPARISON OF SIMILARITY SCORES
Subjective visual comparison of the gradient magnitudes in column S of Figure 2 shows there are more black pixels for model 11 (row 11D) when comparing real images before and after autoencoding. This indicates a lower similarity in the autoencoder. Model 12 (row 12D) has a higher similarity between the original and autoencoded real images as indicated by fewer black pixels. This pattern continues for the generator output (rows 11G and 12G), but with greater similarity between the gradients of the original and autoencoded images than the real images (i.e., fewer black pixels overall).
The visual comparison of chrominance and related similarity score also weakly supported our hypotheses (see Figure 3). All of the models show a strong ability to capture the I dimension (blue-red)
of the YIQ color space, but only model 9 (BEGAN+GMSM+Chrom) is able to accurately capture the relevant information in the Q dimension (green-purple).
4 OUTLOOK
We presented an energy-based formulation of the BEGAN model and highlighted some of the problems of the energy function originally proposed in Zhao et al. (2016). We proposed a new, multi-component energy function on the basis of research from the Image Quality Assessment literature. The scaled BEGAN+GMSM model produces better image representations than its competitors in ways that can be measured using subjective evaluations of the associated features (e.g., luminance gradient similarity, chrominance similarity). For future work, we would like to extend this research to encompass other datasets and FR-IQA energy functions.
B FURTHER EVALUATIONS
B.1 DIVERSITY OF LATENT SPACE
Further evidence that the models can generalize, and not merely memorize the input, can be seen in the linear interpolations in the latent space of z. In Figure 5 models 11 (BEGAN) and 12 (scaled BEGAN+GMSM) show smooth interpolation in gender, rotation, facial expression, hairstyle, and angle of the face.
B.2 THE BEGAN CONVERGENCE MEASURE
We compared the convergence measure scores for models 11 and 12 across all 300,000 epochs (see Figure 6; Berthelot et al. 2017). The convergence measure is defined as follows
Mglobal = ED(x) + |γED(x)− ED(G(z))| (9)
where the energy function is defined as per Equation 8. Due to the variance in this measure, we applied substantial Gaussian smoothing (σ = 0.9) to enhance the main trends. The output of a single generated image is also included for every 40,000 epochs, starting with epoch 20,000 and ending on epoch 300,000. Model 11 showed better (greater) convergence over the 300,000 epochs (as indicated by a lower convergence measure score). Both models continue to show that the convergence measure correlates with better images as the models converge. | 1. What is the focus of the paper, and what are the proposed contributions?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and relevance to previous works?
3. Do you have any concerns about the experiments and their validity?
4. How do the proposed energy functions differ from existing approaches, and what are the implications of these differences?
5. Are there any limitations or potential drawbacks to the methodology or results presented in the paper? | Review | Review
This paper proposes a new energy function within the BEGAN (Boundary Equilibrium GAN) framework, comprising an l_1 score, a gradient magnitude similarity score, and a chrominance score, which are motivated by and borrowed from image quality assessment techniques. These energy components in the objective function allow different sets of features to be learned and make it possible to determine whether the features are adequately represented. Experiments using different hyper-parameters of the energy function, as well as visual inspections of the quality of the learned images, are presented.
It appears to me that the novelty of the paper is limited, in that the main approach is built on the existing BEGAN framework with certain modifications. For example, the new energy function in equation (4) largely achieves a similar goal to the original energy (1) proposed by Zhao et al. (2016), except that the margin loss is changed to a re-weighted linear loss, where the dynamic weighting scheme of k_t is borrowed from the work of Berthelot et al. (2017). It is not very clear why making such changes to the energy would supposedly make the results better, and no further discussion is provided. On the other hand, the several energy components introduced are simply choices of similarity measures motivated by image quality assessment, and there are probably a lot more in the literature; their application cannot be deemed a significant contribution to either the theory or the algorithm design of GANs.
Many results in the experimental section rely on visual evaluations, such as those in Figures 4 and 5; from these figures, it is difficult to clearly pick out the winning images. In Figure 5, for a fair evaluation of the performance of model interpolations, the same human model should be used for the competing methods, instead of applying different human models and different interpolation tasks to different methods.
ICLR | Title
Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks
Abstract
We propose a new, multi-component energy function for energy-based Generative Adversarial Networks (GANs) based on methods from the image quality assessment literature. Our approach expands on the Boundary Equilibrium Generative Adversarial Network (BEGAN) by outlining some of the short-comings of the original energy and loss functions. We address these short-comings by incorporating an l1 score, the Gradient Magnitude Similarity score, and a chrominance score into the new energy function. We then provide a set of systematic experiments that explore its hyper-parameters. We show that each of the energy function’s components is able to represent a slightly different set of features, which require their own evaluation criteria to assess whether they have been adequately learned. We show that models using the new energy function are able to produce better image representations than the BEGAN model in predicted ways.
1 INTRODUCTION
1.1 IMPROVING LEARNED REPRESENTATIONS FOR GENERATIVE MODELING
Radford et al. (2015) demonstrated that Generative Adversarial Networks (GANs) are a good unsupervised technique for learning representations of images for the generative modeling of 2D images. Since then, a number of improvements have been made. First, Zhao et al. (2016) modified the error signal of the deep neural network from the original, single-parameter criterion to a multi-parameter criterion using auto-encoder reconstruction loss. Berthelot et al. (2017) then further modified the loss function from a hinge loss to the Wasserstein distance between loss distributions. For each modification, the proposed changes improved the resulting output under visual inspection (see Appendix A Figure 4, Row 1 for the output of the most recent BEGAN model). We propose a new loss function, building on the changes of the BEGAN model, that further modifies the loss function to handle a broader range of image features within its internal representation; we call the resulting model the scaled BEGAN GMSM.
1.2 GENERATIVE ADVERSARIAL NETWORKS
Generative Adversarial Networks are a form of two-sample or hypothesis testing that uses a classifier, called a discriminator, to distinguish between observed (training) data and data generated by the model or generator. Training is then simplified to a competing (i.e., adversarial) objective between the discriminator and generator, where the discriminator is trained to better differentiate training from generated data, and the generator is trained to better trick the discriminator into thinking its generated data is real. The convergence of a GAN is achieved when the generator and discriminator reach a Nash equilibrium, from a game theory point of view (Zhao et al., 2016).
In the original GAN specification, the task is to learn the generator’s distribution pG over data x (Goodfellow et al., 2014). To accomplish this, one defines a generator function G(z; θG), which produces an image using a noise vector z as input, and G is a differentiable function with parameters θG. The discriminator is then specified as a second function D(x; θD) that outputs a scalar representing the probability that x came from the data rather than pG. D is then trained to maximize the probability of assigning the correct labels to the data and the image output of G while G
is trained to minimize the probability that D assigns its output to the fake class, or 1 − D(G(z)). Although G and D can be any differentiable functions, we will only consider deep convolutional neural networks in what follows.
Zhao et al. (2016) initially proposed a shift from the original single-dimensional criterion—the scalar class probability—to a multidimensional criterion by constructing D as an autoencoder. The image output by the autoencoder can then be directly compared to the output of G using one of the many standard distance functions (e.g., l1 norm, mean square error). However, Zhao et al. (2016) also proposed a new interpretation of the underlying GAN architecture in terms of an energy-based model (LeCun et al., 2006).
1.3 ENERGY-BASED GENERATIVE ADVERSARIAL NETWORKS
The basic idea of energy-based models (EBMs) is to map an input space to a single scalar or set of scalars (called its “energy”) via the construction of a function (LeCun et al., 2006). Learning in this framework modifies the energy surface such that desirable pairings get low energies while undesirable pairings get high energies. This framework allows for the interpretation of the discriminator (D) as an energy function that lacks any explicit probabilistic interpretation (Zhao et al., 2016). In this view, the discriminator is a trainable cost function for the generator that assigns low energy values to regions of high data density and high energy to the opposite. The generator is then interpreted as a trainable parameterized function that produces samples in regions assigned low energy by the discriminator. To accomplish this setup, Zhao et al. (2016) first define the discriminator’s energy function as the mean square error of the reconstruction loss of the autoencoder, or:
ED(x) = ||Decoder(Encoder(x))− x|| (1)
Zhao et al. (2016) then define the loss function for their discriminator using a form of margin loss.
LD(x, z) = ED(x) + [m− ED(G(z))]+ (2)
where m is a constant and [·]+ = max(0, ·). They define the loss function for their generator:
LG(z) = ED(G(z)) (3)
The authors then prove that, if the system reaches a Nash equilibrium, then the generator will produce samples that cannot be distinguished from the dataset. Problematically, simple visual inspection can easily distinguish the generated images from the dataset.
1.4 DEFINING THE PROBLEM
It is clear that, despite the mathematical proof of Zhao et al. (2016), humans can distinguish the images generated by energy-based models from real images. There are two direct approaches that could provide insight into this problem, both of which are outlined in the original paper. The first approach that is discussed by Zhao et al. (2016) changes Equation 2 to allow for better approximations than m. The BEGAN model takes this approach. The second approach addresses Equation 1, but was only implicitly addressed when (Zhao et al., 2016) chose to change the original GAN to use the reconstruction error of an autoencoder instead of a binary logistic energy function. We chose to take the latter approach while building on the work of BEGAN.
Our main contributions are as follows:
• An energy-based formulation of BEGAN's solution to the visual problem.
• An energy-based formulation of the problems with Equation 1.
• Experiments that explore the different hyper-parameters of the new energy function.
• Evaluations that provide greater detail into the learned representations of the model.
• A demonstration that scaled BEGAN+GMSM can be used to generate better quality images from the CelebA dataset at 128x128 pixel resolution than the original BEGAN model in quantifiable ways.
2 IMPROVING THE ENERGY-BASED MODEL OF GANS
2.1 BOUNDARY EQUILIBRIUM GENERATIVE ADVERSARIAL NETWORKS
The Boundary Equilibrium Generative Adversarial Network (BEGAN) makes a number of modifications to the original energy-based approach. However, the most important contribution can be summarized in its changes to Equation 2. In place of the hinge loss, Berthelot et al. (2017) use the Wasserstein distance between the autoencoder reconstruction loss distributions of G and D. They also add three new hyper-parameters in place of m: kt, λk, and γ. Using an energy-based approach, we get the following new equation:
LD(x, z) = ED(x)− kt · ED(G(z)) (4)
The value of kt is then defined as:
kt+1 = kt + λk(γED(x)− ED(G(z))) for each t (5)
where kt ∈ [0, 1] is the emphasis put on E(G(z)) at training step t for the gradient of ED, λk is the learning rate for k, and γ ∈ [0, 1]. Both Equations 2 and 4 are describing the same phenomenon: the discriminator is doing well if either 1) it is properly reconstructing the real images or 2) it is detecting errors in the reconstruction of the generated images. Equation 4 just changes how the model achieves that goal. In the original equation (Equation 2), we punish the discriminator (LD → ∞) when the generated input is doing well (ED(G(z)) → 0). In Equation 4, we reward the discriminator (LD → 0) when the generated input is doing poorly (ED(G(z))→∞). What is also different between Equations 2 and 4 is the way their boundaries function. In Equation 2, m only acts as a one directional boundary that removes the impact of the generated input on the discriminator if ED(G(z)) > m. In Equation 5, γED(x) functions in a similar but more complex way by adding a dependency to ED(x). Instead of 2 conditions on either side of the boundary m, there are now four:
1. If γED(x) > ED(G(z)) and ED(G(z)) → ∞, then LD → 0 and it is accelerating as kt → 1.
2. If γED(x) > ED(G(z)) and ED(G(z)) → 0, then LD → ED(x) and it is accelerating as kt → 1.
3. If γED(x) < ED(G(z)) and ED(G(z)) → ∞, then LD → 0 and it is decelerating as kt → 0.
4. If γED(x) < ED(G(z)) and ED(G(z)) → 0, then LD → ∞ and it is decelerating as kt → 0.
The optimal condition is condition 1 Berthelot et al. (2017). Thus, the BEGAN model tries to keep the energy of the generated output approaching the limit of the energy of the real images. As the latter will change over the course of learning, the resulting boundary dynamically establishes an equilibrium between the energy state of the real and generated input.1
It is not particularly surprising that these modifications to Equation 2 show improvements. Zhao et al. (2016) devote an appendix section to the correct selection of m and explicitly mention that the “balance between... real and fake samples[s]” (italics theirs) is crucial to the correct selection of m. Unsurprisingly, a dynamically updated parameter that accounts for this balance is likely to be the best instantiation of the authors’ intuitions and visual inspection of the resulting output supports this (see Berthelot et al., 2017). We chose a slightly different approach to improving the proposed loss function by changing the original energy function (Equation 1).
2.2 FINDING A NEW ENERGY FUNCTION VIA IMAGE QUALITY ASSESSMENT
In the original description of the energy-based approach to GANs, the energy function was defined as the mean square error (MSE) of the reconstruction loss of the autoencoder (Equation 1). Our first
1For a much more detailed and formal account that is beyond the scope of the current paper, see (Berthelot et al., 2017).
insight was a trivial generalization of Equation 1:
E(x) = δ(D(x), x) (6)
where δ is some distance function. This more general equation suggests that there are many possible distance functions that could be used to describe the reconstruction error and that the selection of δ is itself a design decision for the resulting energy and loss functions. Not surprisingly, an entire field of study exists that focuses on the construction of similar δ functions in the image domain: the field of image quality assessment (IQA).
The field of IQA focuses on evaluating the quality of digital images (Wang & Bovik, 2006). IQA is a rich and diverse field that merits substantial further study. However, for the sake of this paper, we want to emphasize three important findings from this field. First, distance functions like δ are called full-reference IQA (or FR-IQA) functions because the reconstruction (D(x)) has a ‘true’ or undistorted reference image (x) against which it can be evaluated (Wang et al., 2004). Second, IQA researchers have known for a long time that MSE is a poor indicator of image quality (Wang & Bovik, 2006). And third, there are numerous other functions that are better able to indicate image quality. We explain each of these points below.
One way to view the FR-IQA approach is in terms of a reference and distortion vector. In this view, an image is represented as a vector whose dimensions correspond with the pixels of the image. The reference image sets up the initial vector from the origin, which defines the original, perfect image. The distorted image is then defined as another vector defined from the origin. The vector that maps the reference image to the distorted image is called the distortion vector and FR-IQA studies how to evaluate different types of distortion vectors. In terms of our energy-based approach and Equation 6, the distortion vector is measured by δ and it defines the surface of the energy function.
MSE is one of the ways to measure distortion vectors. It is based in a paradigm that views the loss of quality in an image in terms of the visibility of an error signal, which MSE quantifies. Problematically, it has been shown that MSE actually only defines the length of a distortion vector not its type (Wang & Bovik, 2006). For any given reference image vector, there are an entire hypersphere of other image vectors that can be reached by a distortion vector of a given size (i.e., that all have the same MSE from the reference image; see Figure 1).
A number of different measurement techniques have been created that improve upon MSE (for a review, see Chandler, 2013). Often these techniques are defined in terms of the similarity (S) between the reference and distorted image, where δ = 1−S. One of the most notable improvements is the Structural Similarity Index (SSIM), which measures the similarity of the luminance, contrast, and structure of the reference and distorted image using the following similarity function:2
S(vd, vr) = (2 · vd · vr + C) / (vd² + vr² + C) (7)
where vd is the distorted image vector, vr is the reference image vector, C is a constant, and all multiplications occur element-wise Wang & Bovik (2006).3 This function has a number of desirable
2The SSIM similarity function is reminiscent of the Dice-Sorensen distance function. It is worth noting that the Dice-Sorensen distance function does not satisfy the triangle inequality for sets Gragera & Suppakitpaisarn (2016). Since sets are a restricted case for Equation 7, where all the values are either 0 or 1, we can conclude that the corresponding distance of Equation 7 also fails to satisfy the triangle inequality. Consequently, it is not a true distance metric.
3We use C = 0.0026 following the work on cQS described below Gupta et al. (2017).
features. It is symmetric (i.e., S(vd, vr) = S(vr, vd)), bounded above by 1 (and below by 0 for positive inputs), and it has a unique maximum of 1 only when vd = vr. Although we chose not to use SSIM as our energy function (δ), as it can only handle black-and-white images, its similarity function (Equation 7) informs our chosen technique.
The above discussion provides some insights into why visual inspection fails to show this correspondence between real and generated output of the resulting models, even though Zhao et al. (2016) proved that the generator should produce samples that cannot be distinguished from the dataset. The original proof by Zhao et al. (2016) did not account for Equation 1. Thus, when Zhao et al. (2016) show that their generated output should be indistinguishable from real images, what they are actually showing is that it should be indistinguishable from the real images plus some residual distortion vector described by δ. Yet, we have just shown that MSE (the author’s chosen δ) can only constrain the length of the distortion vector, not its type. Consequently, it is entirely possible for two systems using MSE for δ to have both reached a Nash equilibrium, have the same energy distribution, and yet have radically different internal representations of the learned images. The energy function is as important as the loss function for defining the data distribution.
2.3 A NEW ENERGY FUNCTION
Rather than assume that any one distance function would suffice to represent all of the various features of real images, we chose to use a multi-component approach for defining δ. In place of the luminance, contrast, and structural similarity of SSIM, we chose to evaluate the l1 norm, the gradient magnitude similarity score (GMS), and a chrominance similarity score (Chrom). We outline the latter two in more detail below.
The GMS score and chrom scores derive from an FR-IQA model called the color Quality Score (cQS; Gupta et al., 2017). The cQS uses GMS and chrom as its two components. First, it converts images to the YIQ color space model. In this model, the three channels correspond to the luminance information (Y) and the chrominance information (I and Q). Second, GMS is used to evaluate the local gradients across the reference and distorted images on the luminance dimension in order to compare their edges. This is performed by convolving a 3 × 3 Sobel filter in both the horizontal and vertical directions of each image to get the corresponding gradients. The horizontal and vertical gradients are then collapsed to the gradient magnitude of each image using the Euclidean distance.4 The similarity between the gradient magnitudes of the reference and distorted image are then compared using Equation 7. Third, Equation 7 is used to directly compute the similarity between the I and Q color dimensions of each image. The mean is then taken of the GMS score (resulting in the GMSM score) and the combined I and Q scores (resulting in the Chrom score).
In order to experimentally evaluate how each of the different components contribute to the underlying image representations, we defined the following, multi-component energy function:
ED = [ ∑δ∈D δ(D(x), x) · βd ] / [ ∑δ∈D βd ] (8)
where βd is the weight that determines the proportion of each δ to include for a given model, and D includes the l1 norm, GMSM, and the chrominance part of cQS as individual δs. In what follows, we experimentally evaluate each of the energy function components (β) and some of their combinations.
3 EXPERIMENTS
3.1 METHOD
We conducted extensive quantitative and qualitative evaluation on the CelebA dataset of face images Liu et al. (2015). This dataset has been used frequently in the past for evaluating GANs Radford et al. (2015); Zhao et al. (2016); Chen et al. (2016); Liu & Tuzel (2016). We evaluated 12 different models in a number of combinations (see Table 1). They are as follows. Models 1, 7, and 11 are the original BEGAN model. Models 2 and 3 only use the GMSM and chrominance distance functions, respectively. Models 4 and 8 are the BEGAN model plus GMSM. Models 5 and 9 use all three
4For a detailed outline of the original GMS function, see Xue et al. (2014).
[Table 1, last rows; columns: model, image size, γ, βl1, βGMSM, βChrom]
10, 64, 0.7, 2, 1, 0
11, 128, 0.7, 1, 0, 0
12, 128, 0.7, 2, 1, 0
distance functions (BEGAN+GMSM+Chrom). Models 6, 10, and 12 use a ’scaled’ BEGAN model (βl1 = 2) with GMSM. All models with different model numbers but the same βd values differ in their γ values or the output image size.
3.2 SETUP
All of the models we evaluate in this paper are based on the architecture of the BEGAN model Berthelot et al. (2017).5 We trained the models using Adam with a batch size of 16, β1 of 0.9, β2 of 0.999, and an initial learning rate of 0.00008, which decayed by a factor of 2 every 100,000 epochs.
Parameters λk and k0 were set at 0.001 and 0, respectively (see Equation 5). The γ parameter was set relative to the model (see Table 1).
Most of our experiments were performed on 64 × 64 pixel images with a single set of tests run on 128 × 128 images. The number of convolution layers were 3 and 4, respectively, with a constant down-sampled size of 8 × 8. We found that the original size of 64 for the input vector (Nz) and hidden state (Nh) resulted in modal collapse for the models using GMSM. However, we found that this was fixed by increasing the input size to 128 and 256 for the 64 and 128 pixel images, respectively. We used Nz = 128 for all models except 12 (scaled BEGAN+GMSM), which used 256. Nz always equaled Nh in all experiments.
Models 2-3 were run for 18,000 epochs; models 1 and 4-10 were run for 100,000 epochs; and models 11-12 were run for 300,000 epochs. Models 2-4 suffered from modal collapse immediately, and model 5 (BEGAN+GMSM+Chrom) collapsed around epoch 65,000 (see Appendix A, Figure 4, rows 2-5).
3.3 EVALUATIONS
We performed two evaluations. First, to evaluate whether and to what extent the models were able to capture the relevant properties of each associated distance function, we compared the mean and standard deviation of the error scores. We calculated them for each distance function over all epochs of all models. We chose to use the mean rather than the minimum score as we were interested in how each model performs as a whole, rather than at some specific epoch. All calculations use the distance, or one minus the corresponding similarity score, for both the gradient magnitude and chrominance values.
Reduced pixelation is an artifact of the intensive scaling for image presentation (up to 4×). All images in the qualitative evaluations were upscaled from their original sizes using cubic image sampling so that they can be viewed at larger sizes. Consequently, the apparent smoothness of the scaled images is not a property of the model.
5The code for the model and all related experiments are currently available on Github. Links will be included post-review.
3.4 RESULTS
GANs are used to generate different types of images. Which image components are important depends on the domain of these images. Our results suggest that models used in any particular GAN application should be customized to emphasize the relevant components; there is not a one-size-fits-all component choice. We discuss the results of our evaluations below.
3.4.1 MEANS AND STANDARD DEVIATIONS OF ERROR SCORES
Results were as expected: the three different distance functions captured different features of the underlying image representations. We compared all of the models in terms of their means and standard deviations of the error score of the associated distance functions (see Table 2). In particular, each of models 1-3 only used one of the distance functions and had the lowest error for the associated function (e.g., model 2 was trained with GMSM and has the lowest GMSM error score). Models 4-6 expanded on the first three models by examining the distance functions in different combinations. Model 5 (BEGAN+GMSM+Chrom) had the lowest chrominance error score and Model 6 (scaled BEGAN+GMSM) had the lowest scores for l1 and GMSM of any model using a γ of 0.5.
For the models with γ set at 0.7, models 7-9 showed similar results to the previous scores. Model 8 (BEGAN+GMSM) scored the lowest GMSM score overall and model 9 (BEGAN+GMSM+Chrom) scored the lowest chrominance score of the models that did not suffer from modal collapse. For the two models that were trained to generate 128 × 128 pixel images, model 12 (scaled BEGAN+GMSM) had the lowest error scores for l1 and GMSM, and model 11 (BEGAN) had the lowest score for chrominance. Model 12 had the lowest l1 score, overall.
3.4.2 VISUAL COMPARISON OF SIMILARITY SCORES
Subjective visual comparison of the gradient magnitudes in column S of Figure 2 shows there are more black pixels for model 11 (row 11D) when comparing real images before and after autoencoding. This indicates a lower similarity in the autoencoder. Model 12 (row 12D) has a higher similarity between the original and autoencoded real images as indicated by fewer black pixels. This pattern continues for the generator output (rows 11G and 12G), but with greater similarity between the gradients of the original and autoencoded images than the real images (i.e., fewer black pixels overall).
The visual comparison of chrominance and related similarity score also weakly supported our hypotheses (see Figure 3). All of the models show a strong ability to capture the I dimension (blue-red)
of the YIQ color space, but only model 9 (BEGAN+GMSM+Chrom) is able to accurately capture the relevant information in the Q dimension (green-purple).
4 OUTLOOK
We presented an energy-based formulation of the BEGAN model and highlighted some of the problems of the energy function originally proposed in Zhao et al. (2016). We proposed a new, multi-component energy function on the basis of research from the Image Quality Assessment literature. The scaled BEGAN+GMSM model produces better image representations than its competitors in ways that can be measured using subjective evaluations of the associated features (e.g., luminance gradient similarity, chrominance similarity). For future work, we would like to extend this research to encompass other datasets and FR-IQA energy functions.
B FURTHER EVALUATIONS
B.1 DIVERSITY OF LATENT SPACE
Further evidence that the models can generalize, and not merely memorize the input, can be seen in the linear interpolations in the latent space of z. In Figure 5 models 11 (BEGAN) and 12 (scaled BEGAN+GMSM) show smooth interpolation in gender, rotation, facial expression, hairstyle, and angle of the face.
B.2 THE BEGAN CONVERGENCE MEASURE
We compared the convergence measure scores for models 11 and 12 across all 300,000 epochs (see Figure 6; Berthelot et al. 2017). The convergence measure is defined as follows
Mglobal = ED(x) + |γED(x)− ED(G(z))| (9)
where the energy function is defined as per Equation 8. Due to the variance in this measure, we applied substantial Gaussian smoothing (σ = 0.9) to enhance the main trends. The output of a single generated image is also included for every 40,000 epochs, starting with epoch 20,000 and ending on epoch 300,000. Model 11 showed better (greater) convergence over the 300,000 epochs (as indicated by a lower convergence measure score). Both models continue to show that the convergence measure correlates with better images as the models converge. | 1. What is the focus of the paper, and how does it extend previous work in Boundary Equilibrium Generative Adversarial Networks (BEGANs)?
2. What are the strengths and weaknesses of the proposed energy function, inspired by the structured similarity index (SSIM), in generating realistic images?
3. How convincing are the experimental results on a single dataset, CelebA, regarding the effectiveness of the proposed approach?
4. Do you have any concerns or suggestions for improving the experimental section, particularly in terms of testing the model on multiple datasets?
5. How significant is the contribution of the paper, and do the title and claims accurately reflect the content and potential impact of the research? | Review | Review
Summary:
The paper extends the recently proposed Boundary Equilibrium Generative Adversarial Networks (BEGANs), with the hope of generating images which are more realistic. In particular, the authors propose to change the energy function associated with the auto-encoder, from an L2 norm (a single number) to an energy function with multiple components. Their energy function is inspired by the structural similarity index (SSIM), and the three components they use are the L1 score, the gradient magnitude similarity score, and the chrominance score. Using this energy function, the authors hypothesize that it will force the generator to generate realistic images. They test their hypothesis on a single dataset, namely, the CelebA dataset.
Review:
While the idea proposed in the paper is somewhat novel and there is nothing obviously wrong about the proposed approach, I thought the paper was somewhat incremental. As a result I kind of question the impact of this result. My suspicion is reinforced by the fact that the experimental section is extremely weak. In particular the authors test their model on a single relatively straightforward dataset. Any reason why the authors did not try on other datasets involving natural images? As a result I feel that the title and the claims in the paper are somewhat misleading and premature: that the proposed technique improves the training and evaluation of energy-based GANs.
Overall, the paper is clearly written and easy to understand.
Based on its incremental nature and weak experiments, I'm on the margin with regards to its acceptance. Happy to change my opinion if other reviewers strongly think otherwise with good reason and are convinced about its impact. |
ICLR | Title
Learning Visual Servoing with Deep Features and Fitted Q-Iteration
Abstract
Visual servoing involves choosing actions that move a robot in response to observations from a camera, in order to reach a goal configuration in the world. Standard visual servoing approaches typically rely on manually designed features and analytical dynamics models, which limits their generalization capability and often requires extensive application-specific feature and model engineering. In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of data of the target in question, to enable quick adaptation to new targets. Our approach is based on servoing the camera in the space of learned visual features, rather than image pixels or manually-designed keypoints. We demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. A key component of our approach is to use a sample-efficient fitted Q-iteration algorithm to learn which features are best suited for the task at hand. We show that we can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms. Videos are available at http://rll.berkeley.edu/visual_servoing.
1 INTRODUCTION
Visual servoing is a classic problem in robotics that requires moving a camera or robot to match a target configuration of visual features or image intensities. Many robot control tasks that combine perception and action can be posed as visual servoing, including navigation (DeSouza & Kak, 2002; Chen et al., 2006), where a robot must follow a desired path; manipulation, where the robot must servo an end-effector or a camera to a target object to grasp or manipulate it (Malis et al., 1999; Corke, 1993; Hashimoto, 1993; Hosoda & Asada, 1994; Kragic & Christensen, 2002); and various other problems, as surveyed in Hutchinson et al. (1996). Most visual servoing methods assume access to good geometric image features (Chaumette & Hutchinson, 2006; Collewet et al., 2008; Caron et al., 2013) and require knowledge of their dynamics, which are typically obtained from domain knowledge about the system. Using such hand-designed features and models prevents exploitation of statistical regularities in the world, and requires manual engineering for each new system.
In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of
data of the target in question, so as to be easy and quick to adapt to new targets. Successful target following requires the visual servo to tolerate moderate variation in the appearance of the target, including changes in viewpoint and lighting, as well as occlusions. Learning invariances to all such distractors typically requires a considerable amount of data. However, since a visual servo is typically specific to a particular task, it is desirable to be able to learn the servoing mechanism very quickly, using a minimum amount of data. Prior work has shown that the features learned by large convolutional neural networks on large image datasets, such as ImageNet classification (Deng et al., 2009), tend to be useful for a wide range of other visual tasks (Donahue et al., 2014). We explore whether the usefulness of such features extends to visual servoing.
To answer this question, we propose a visual servoing method that uses pre-trained features, in our case obtained from the VGG network (Simonyan & Zisserman, 2014) trained for ImageNet classification. Besides the visual features, our method uses an estimate of the feature dynamics in visual space by means of a bilinear model. This allows the visual servo to predict how motion of the robot’s camera will affect the perceived feature values. Unfortunately, servoing directly on the high-dimensional features of a pre-trained network is insufficient by itself to impart robustness on the servo: the visual servo must not only be robust to moderate visual variation, but it must also be able to pick out the target of interest (such as a car that the robot is tasked with following) from irrelevant distractor objects. To that end, we propose a sample-efficient fitted Q-iteration procedure that automatically chooses weights for the most relevant visual features. Crucially, the actual servoing mechanism in our approach is extremely simple, and simply seeks to minimize the Euclidean distance between the weighted feature values at the next time step and the target. The form of the servoing policy in our approach leads to an analytic and tractable linear approximator for the Qfunction, which leads to a computationally efficient fitted Q-iteration algorithm. We show that we can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.
The environment for the synthetic car following benchmark is available online as the package CitySim3D1, and the code to reproduce our method and experiments is also available online2. Supplementary videos of all the test executions are available on the project’s website3.
2 RELATED WORK
Visual servoing is typically (but not always) performed with calibrated cameras and carefully designed visual features. Ideal features for servoing should be stable and discriminative, and much of the work on visual servoing focuses on designing stable and convergent controllers under the assumption that such features are available (Espiau et al., 2002; Mohta et al., 2014; Wilson et al., 1996). Some visual servoing methods do not require camera calibration (Jagersand et al., 1997; Yoshimi & Allen, 1994), and some recent methods operate directly on image intensities (Caron et al., 2013), but generally do not use learning to exploit statistical regularities in the world and improve robustness to distractors.
Learning is a relatively recent addition to the repertoire of visual servoing tools. Several methods have been proposed that apply ideas from reinforcement learning to directly acquire visual servoing controllers (Lampe & Riedmiller, 2013; Sadeghzadeh et al., 2015). However, such methods have not been demonstrated under extensive visual variation, and do not make use of state-of-the-art convolutional neural network visual features. Though more standard deep reinforcement learning methods (Lange et al., 2012; Mnih et al., 2013; Levine et al., 2016; Lillicrap et al., 2015) could in principle be applied to directly learn visual servoing policies, such methods tend to require large numbers of samples to learn task-specific behaviors, making them poorly suited for a flexible visual servoing algorithm that can be quickly repurposed to new tasks (e.g. to following a different object).
1https://github.com/alexlee-gk/citysim3d 2https://github.com/alexlee-gk/visual_dynamics 3http://rll.berkeley.edu/visual_servoing
Instead, we propose an approach that combines learning of predictive models with pre-trained visual features. We use visual features trained for ImageNet (Deng et al., 2009) classification, though any pre-trained features could in principle be applicable for our method, so long as they provide a suitable degree of invariance to visual distractors such as lighting, occlusion, and changes in viewpoint. Using pre-trained features allows us to avoid the need for large amounts of experience, but we must still learn the policy itself. To further accelerate this process, we first acquire a predictive model that allows the visual servo to determine how the visual features will change in response to an action. General video prediction is an active research area, with a number of complex but data-hungry models proposed in recent years (Oh et al., 2015; Watter et al., 2015; Mathieu et al., 2015; Xue et al., 2016; Lotter et al., 2016; Jia et al., 2016; Walker et al., 2016; Vondrick et al., 2016).
However, we observe that convolutional response maps can be interpreted as images and, under mild assumptions, the dynamics of image pixels during camera motion can be well approximated by means of a bilinear model (Censi & Murray, 2015). We therefore train a relatively simple bilinear model for short-term prediction of visual feature dynamics, which we can use inside a very simple visual servo that seeks to minimize the error between the next predicted feature values and a target image.
Unfortunately, simply training predictive models on top of pre-trained features is insufficient to produce an effective visual servo, since it weights the errors of distractor objects the same amount as the object of interest. We address this challenge by using an efficient Q-iteration algorithm to train the weights on the features to maximize the servo’s long-horizon reward. This method draws on ideas from regularized fitted Q-iteration (Gordon, 1995; Ernst et al., 2005; Farahmand et al., 2009) and neural fitted Q-iteration (Riedmiller, 2005) to develop a sample-efficient algorithm that can directly estimate the expected return of the visual servo without the use of any additional function approximator.
3 PROBLEM STATEMENT
Let y_t be a featurization of the camera's observations x_t and let y_* be some given goal feature map. For the purposes of this work, we define visual servoing as the problem of choosing controls u_t for a fixed number of discrete time steps t so as to minimize the error \| y_* - y_t \|. We use a relatively simple gradient-based servoing policy that uses one-step feature dynamics, f : (y_t, u_t) \to y_{t+1}. The policy chooses the control that minimizes the distance between the goal feature map and the one-step prediction:
\pi(x_t, x_*) = \arg\min_u \left\| y_* - f(y_t, u) \right\|^2 . \quad (1)
Learning this policy amounts to learning the robot dynamics and the distance metric ‖·‖. To learn the robot dynamics, we assume that we have access to a dataset of paired observations and controls xt,ut,xt+1. This data is relatively easy to obtain as it involves collecting a stream of the robot’s observations and controls. We use this dataset to learn a general visual dynamics model that can be used for any task.
To learn the distance metric, we assume that the robot interacts with the world and collects tuples of the form xt,ut, ct,xt+1,x∗. At every time step during learning, the robot observes xt and takes action ut. After the transition, the robot observes xt+1 and receives an immediate cost ct. This cost is task-specific and it quantifies how good that transition was in order to achieve the goal. At the beginning of each trajectory, the robot is given a goal observation x∗, and it is the same throughout the trajectory. We define the goal feature map to be the featurization of the goal observation. We learn the distance metric using reinforcement learning and we model the environment as a Markov Decision Process (MDP). The state of the MDP is the tuple of the current observation and the episode’s target observation, st = (xt,x∗), the action ut is the discrete-time continuous control of the robot, and the cost function maps the states and action (st,ut, st+1) to a scalar cost ct.
4 VISUAL FEATURES DYNAMICS
We learn a multiscale bilinear model to predict the visual features of the next frame given the current image from the robot’s camera and the action of the robot. An overview of the model is shown in Figure 1. The learned dynamics can then be used for visual servoing as described in Section 5.
4.1 VISUAL FEATURES
We consider both pixels and semantic features for the visual representation. We define the function h to relate the image x and its features y = h(x). Our choice of semantic features is derived from the VGG-16 network (Simonyan & Zisserman, 2014), which is a convolutional neural network trained for large-scale image recognition on the ImageNet dataset (Deng et al., 2009). Since spatial invariance is undesirable for servoing, we remove some of the max-pooling layers and replace the convolutions that followed them with dilated convolutions, as done by Yu & Koltun (2015). The modified VGG network is shown in Figure 2. We use the model weights of the original VGG-16 network, which are publicly available as a Caffe model (Jia et al., 2014). The features that we use are the outputs of some of the intermediate convolutional layers, which have been downsampled to a 32 × 32 resolution (if necessary) and standardized with respect to our training set. We use multiple resolutions of these features for servoing. The idea is that the high-resolution representations have detailed local information about the scene, while the low-resolution representations have more global information available through the image-space gradients. The features at level l of the multiscale pyramid are denoted as y^{(l)}. The features at each level are obtained from the features below through a downsampling operator d(y^{(l-1)}) = y^{(l)} that cuts the resolution in half.
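As a rough illustration of the multiscale pyramid described above, the following NumPy sketch builds the levels y^{(0)}, ..., y^{(L)} by repeatedly halving the spatial resolution. The exact downsampling operator d is not specified beyond halving the resolution, so 2x2 average pooling is used here purely as an assumption; the shapes are illustrative.

```python
import numpy as np

def downsample(y):
    """Halve the spatial resolution of a (C, H, W) feature map with 2x2 average pooling
    (an assumed choice for the operator d; H and W must be even)."""
    c, h, w = y.shape
    return y.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def feature_pyramid(y0, num_levels):
    """Build the multiscale pyramid [y^(0), ..., y^(L)] by repeated downsampling."""
    pyramid = [y0]
    for _ in range(num_levels):
        pyramid.append(downsample(pyramid[-1]))
    return pyramid

# Example: a 512-channel feature map at 32x32 resolution and two extra levels (16x16, 8x8).
y0 = np.random.randn(512, 32, 32)
levels = feature_pyramid(y0, num_levels=2)
```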
4.2 BILINEAR DYNAMICS
The features y_t^{(l)} are used to predict the corresponding level's features y_{t+1}^{(l)} at the next time step, conditioned on the action u_t, according to a prediction function f^{(l)}(y_t^{(l)}, u_t) = \hat{y}_{t+1}^{(l)}. We use a bilinear model to represent these dynamics, motivated by prior work (Censi & Murray, 2015). In order to servo at different scales, we learn a bilinear dynamics model at each scale. We consider two variants of the bilinear model in previous work in order to reduce the number of model parameters.
The first variant uses fully connected dynamics as in previous work but models the dynamics of each channel independently. When semantic features are used, this model interprets the feature maps as
being abstract images with spatial information within a channel and different entities or factors of variation across different channels. This could potentially allow the model to handle moving objects, occlusions, and other complex phenomena.
The fully connected bilinear model is quite large, so we propose a bilinear dynamics model that enforces sparsity in the parameters. In particular, we constrain the prediction to depend only on the features that are in its local spatial neighborhood, leading to the following locally connected bilinear model:

\hat{y}^{(l)}_{t+1,c} = y^{(l)}_{t,c} + \sum_j \left( W^{(l)}_{c,j} * y^{(l)}_{t,c} + B^{(l)}_{c,j} \right) u_{t,j} + \left( W^{(l)}_{c,0} * y^{(l)}_{t,c} + B^{(l)}_{c,0} \right) . \quad (2)

The parameters are the 4-dimensional tensor W^{(l)}_{c,j} and the matrix B^{(l)}_{c,j} for each channel c, scale l, and control coordinate j. The last two terms are biases that allow the model to capture action-independent visual changes, such as moving objects. The * is the locally connected operator, which is like a convolution but with untied filter weights.4
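To make the locally connected bilinear model of Equation (2) concrete, the following NumPy sketch predicts the next-step features for a single channel and scale. This is not the authors' implementation: the locally connected operator is written as a naive loop following the definition in footnote 4, and the zero padding at the borders and the random parameter shapes are assumptions used only for illustration.

```python
import numpy as np

def locally_connected(W, y, nf=3):
    """Locally connected operator: like a convolution, but with untied weights.
    W has shape (H, W, nf, nf); y has shape (H, W). Zero padding is assumed at borders."""
    h, w = y.shape
    r = nf // 2
    y_pad = np.pad(y, r)
    out = np.zeros_like(y)
    for kh in range(h):
        for kw in range(w):
            out[kh, kw] = np.sum(W[kh, kw] * y_pad[kh:kh + nf, kw:kw + nf])
    return out

def predict_next_feature(y_tc, u_t, W, B):
    """One-step prediction for channel c at scale l, following Eq. (2).
    W[j], B[j] for j = 1..len(u_t) are the action-dependent terms; index 0 is the bias term."""
    y_next = y_tc.copy()
    for j, u_j in enumerate(u_t, start=1):
        y_next = y_next + (locally_connected(W[j], y_tc) + B[j]) * u_j
    y_next = y_next + locally_connected(W[0], y_tc) + B[0]
    return y_next

# Illustrative shapes: an 8x8 feature map, a 4-dimensional control, a 3x3 neighborhood.
H, W_dim, nf, u = 8, 8, 3, np.random.randn(4)
W_params = [0.01 * np.random.randn(H, W_dim, nf, nf) for _ in range(len(u) + 1)]
B_params = [0.01 * np.random.randn(H, W_dim) for _ in range(len(u) + 1)]
y_hat = predict_next_feature(np.random.randn(H, W_dim), u, W_params, B_params)
```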
4.3 TRAINING VISUAL FEATURE DYNAMICS MODELS
The loss that we use for training the bilinear dynamics is the sum of the losses of the predicted features at each level, \sum_{l=0}^{L} \ell^{(l)}, where the loss for each level l is the squared \ell_2 norm between the predicted features and the actual features of that level, \ell^{(l)} = \| y^{(l)}_{t+1} - \hat{y}^{(l)}_{t+1} \|^2.
We optimize for the dynamics while keeping the feature representation fixed. This is a supervised learning problem, which we solve with ADAM (Kingma & Ba, 2014). The training set, consisting of triplets (x_t, u_t, x_{t+1}), was obtained by executing a hand-coded policy that moves the robot around the target with some Gaussian noise.
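As a small sketch of the training objective just described, the function below sums the per-level squared errors between actual and predicted next-step features; the per-level dynamics are passed in as callables, and the ADAM optimization itself is omitted.

```python
import numpy as np

def multiscale_dynamics_loss(y_t_pyramid, u_t, y_next_pyramid, dynamics_fns):
    """Sum over pyramid levels l of ||y_{t+1}^(l) - yhat_{t+1}^(l)||^2,
    where dynamics_fns[l] maps (y_t^(l), u_t) to the predicted y_{t+1}^(l)."""
    loss = 0.0
    for l, (y_l, y_next_l) in enumerate(zip(y_t_pyramid, y_next_pyramid)):
        y_pred_l = dynamics_fns[l](y_l, u_t)
        loss += np.sum((y_next_l - y_pred_l) ** 2)
    return loss
```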
5 LEARNING VISUAL SERVOING WITH REINFORCEMENT LEARNING
We propose to use a multiscale representation of semantic features for servoing. The challenge when introducing multiple scales and multi-channel feature maps for servoing is that the features do not necessarily agree on the optimal action when the goal is unattainable or the robot is far away from the goal. To do well, it is important to use a good weighting of each of the terms in the objective. Since there are many weights, it would be impractically time-consuming to set them by hand, so we resort to learning. We want the weighted one-step lookahead objective to encourage good long-term behavior, so we want this objective to correspond to the state-action value function Q. We therefore propose a method for learning the weights based on fitted Q-iteration.
5.1 SERVOING WITH WEIGHTED MULTISCALE FEATURES
Instead of attempting to build an accurate predictive model for multi-step planning, we use the simple greedy servoing method in Equation (1), where we minimize the error between the target and predicted features for all the scales. Typically, only a few objects in the scene are relevant, so the errors of some channels should be penalized more than others. Similarly, features at different scales might need to be weighted differently. Thus, we use a weighting w_c^{(l)} \ge 0 per channel c and scale l:

\pi(x_t, x_*) = \arg\min_u \sum_c \sum_{l=0}^{L} \frac{w_c^{(l)}}{|y_{\cdot,c}^{(l)}|} \left\| y_{*,c}^{(l)} - f_c^{(l)}\left(y_{t,c}^{(l)}, u\right) \right\|_2^2 + \sum_j \lambda_j u_j^2 , \quad (3)

where |\cdot| denotes the cardinality operator and the constant 1/|y_{\cdot,c}^{(l)}| normalizes the feature errors by their spatial resolution. We also use a separate weight \lambda_j for each control coordinate j. This optimization can be solved efficiently since the dynamics is linear in the controls (see Appendix A).
4 The locally connected operator, with a local neighborhood of n_f \times n_f (analogous to the filter size in convolutions), is defined as:

(W * y)_{k_h, k_w} = \sum_{i_h = k_h - \lfloor n_f/2 \rfloor}^{k_h + \lfloor n_f/2 \rfloor} \ \sum_{i_w = k_w - \lfloor n_f/2 \rfloor}^{k_w + \lfloor n_f/2 \rfloor} W_{k_h, k_w, i_h - k_h, i_w - k_w} \, y_{i_h, i_w} .
5.2 Q-FUNCTION APPROXIMATION FOR THE WEIGHTED SERVOING POLICY
We choose a Q-value function approximator that can represent the servoing objective such that the greedy policy with respect to the Q-values results in the policy of Equation (3). In particular, we use a function approximator that is linear in the weight parameters \theta^\top = [\, w^\top \ \lambda^\top \,]:

Q_{\theta,b}(s_t, u) = \phi(s_t, u)^\top \theta + b, \qquad \phi(s_t, u)^\top = \left[ \left[ \frac{1}{|y_{\cdot,c}^{(l)}|} \left\| y_{*,c}^{(l)} - f_c^{(l)}\left(y_{t,c}^{(l)}, u\right) \right\|_2^2 \right]^\top_{c,l} \ \left[ u_j^2 \right]^\top_j \right].

We denote the state of the MDP as s_t = (x_t, x_*) and add a bias b to the Q-function. The servoing policy is then simply \pi_\theta(s_t) = \arg\min_u Q_{\theta,b}(s_t, u). For reinforcement learning, we optimized for the weights \theta but kept the feature representation and its dynamics fixed.
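The following sketch spells out the feature vector phi(s_t, u) and the linear Q-function defined above. The per-channel dynamics predictions are assumed to be provided by callables, and the cardinality |y^{(l)}_{.,c}| is taken to be the spatial size of the feature map, as in the normalization of Equation (3); this is an illustration rather than the authors' code.

```python
import numpy as np

def phi_features(y_t_pyramid, y_goal_pyramid, u, dynamics_fns):
    """phi(s_t, u): normalized squared feature errors for every channel and scale,
    followed by the squared control components u_j^2."""
    errors = []
    for l, (y_l, y_goal_l) in enumerate(zip(y_t_pyramid, y_goal_pyramid)):
        y_pred_l = dynamics_fns[l](y_l, u)                   # shape (C, H, W)
        per_channel = ((y_goal_l - y_pred_l) ** 2).sum(axis=(1, 2))
        spatial_size = y_l.shape[1] * y_l.shape[2]           # |y_{.,c}^{(l)}|
        errors.append(per_channel / spatial_size)
    return np.concatenate(errors + [u ** 2])

def q_value(y_t_pyramid, y_goal_pyramid, u, dynamics_fns, theta, b):
    """Linear Q-function approximator: Q_{theta,b}(s_t, u) = phi(s_t, u)^T theta + b."""
    return phi_features(y_t_pyramid, y_goal_pyramid, u, dynamics_fns) @ theta + b
```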
5.3 LEARNING THE Q-FUNCTION WITH FITTED Q-ITERATION
Reinforcement learning methods that learn a Q-function do so by minimizing the Bellman error:

\left\| Q(s_t, u_t) - \left( c_t + \gamma \min_u Q(s_{t+1}, u) \right) \right\|_2^2 . \quad (4)
In fitted Q-iteration, the agent iteratively gathers a dataset \{s_t^{(i)}, u_t^{(i)}, c_t^{(i)}, s_{t+1}^{(i)}\}_i^N of N samples according to an exploration policy, and then minimizes the Bellman error using this dataset. We use the term sampling iteration to refer to each iteration j of this procedure. At the beginning of each sampling iteration, the current policy with added Gaussian noise is used as the exploration policy.
It is typically hard or unstable to optimize for both Q-functions that appear in the Bellman error of Equation (4), so it is usually optimized by iteratively optimizing the current Q-function while keeping the target Q-function constant. However, we notice that for a given state, the action that minimizes its Q-values is the same for any non-negative scaling \alpha of \theta and for any bias b. Thus, to speed up the optimization of the Q-function, we first set \alpha^{(k-\frac{1}{2})} and b^{(k-\frac{1}{2})} by jointly solving for \alpha and b of both the current and target Q-function:

\min_{\alpha \ge 0,\, b} \ \frac{1}{N} \sum_{i=1}^{N} \left\| Q_{\alpha\theta^{(k-1)},\, b}\left(s_t^{(i)}, u_t^{(i)}\right) - \left( c_t^{(i)} + \gamma \min_u Q_{\alpha\theta^{(k-1)},\, b}\left(s_{t+1}^{(i)}, u\right) \right) \right\|_2^2 + \nu \left\| \theta \right\|_2^2 . \quad (5)

This is similar to how, in policy evaluation, state values can be computed by solving a linear system. We regularize the parameters with an \ell_2 penalty, weighted by \nu \ge 0. We use the term FQI iteration to refer to each iteration k of optimizing the Bellman error, and we use the notation (k-\frac{1}{2}) to denote an intermediate step between iterations (k-1) and (k). The parameters \theta can then be updated with \theta^{(k-\frac{1}{2})} = \alpha^{(k-\frac{1}{2})} \theta^{(k-1)}. Then, we update \theta^{(k)} and b^{(k)} by optimizing for \theta and b of the current Q-function while keeping the parameters of the target Q-function fixed:
\min_{\theta \ge 0,\, b} \ \frac{1}{N} \sum_{i=1}^{N} \left\| Q_{\theta, b}\left(s_t^{(i)}, u_t^{(i)}\right) - \left( c_t^{(i)} + \gamma \min_u Q_{\theta^{(k-\frac{1}{2})},\, b^{(k-\frac{1}{2})}}\left(s_{t+1}^{(i)}, u\right) \right) \right\|_2^2 + \nu \left\| \theta \right\|_2^2 . \quad (6)
A summary of the algorithm used to learn the feature weights is shown in Algorithm 1.
Algorithm 1 FQI with initialization of policy-independent parameters
1: procedure FQI(\theta^{(0)}, \sigma^2_{\text{exploration}}, \nu)
2:   for s = 1, . . . , S do                                       . sampling iterations
3:     Gather dataset \{s_t^{(i)}, u_t^{(i)}, c_t^{(i)}, s_{t+1}^{(i)}\}_i^N using exploration policy \mathcal{N}(\pi_{\theta^{(0)}}, \sigma^2_{\text{exploration}})
4:     for k = 1, . . . , K do                                     . FQI iterations
5:       Fit \alpha^{(k-\frac{1}{2})} and b^{(k-\frac{1}{2})} using (5)
6:       \theta^{(k-\frac{1}{2})} \leftarrow \alpha^{(k-\frac{1}{2})} \theta^{(k-1)}
7:       Fit \theta^{(k)} and b^{(k)} using (6)
8:     \theta^{(0)} \leftarrow \theta^{(K)}
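A heavily simplified sketch of the FQI inner loop (lines 4-7 of Algorithm 1) is given below. It departs from the paper in two places that should be kept in mind: the inner minimization over actions is approximated with a finite set of candidate actions whose features are precomputed, and the two-step fit of Equations (5) and (6) is collapsed into a single regularized non-negative least-squares fit per iteration. All array shapes and the use of scipy's lsq_linear are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fqi_inner_loop(phi_sa, costs, phi_next_candidates, theta0, gamma=0.9, nu=0.1, K=10):
    """Simplified fitted Q-iteration.
    phi_sa:               (N, d) features phi(s_t^(i), u_t^(i)) of the executed actions.
    costs:                (N,)   immediate costs c_t^(i).
    phi_next_candidates:  (N, A, d) features phi(s_{t+1}^(i), u) for A candidate actions,
                          used to approximate min_u Q(s_{t+1}, u).
    Returns the fitted non-negative weights theta and bias b."""
    theta, b = theta0.astype(float).copy(), 0.0
    n, d = phi_sa.shape
    for k in range(K):
        # Bellman targets under the current (held fixed) parameters.
        q_next = phi_next_candidates @ theta + b              # (N, A)
        targets = costs + gamma * q_next.min(axis=1)          # (N,)
        # Fit theta >= 0 and an unconstrained bias b by l2-regularized least squares.
        X = np.hstack([phi_sa, np.ones((n, 1))])              # last column multiplies b
        X_reg = np.vstack([X, np.sqrt(nu) * np.eye(d + 1)[:d]])   # penalize theta only
        y_reg = np.concatenate([targets, np.zeros(d)])
        lower = np.concatenate([np.zeros(d), [-np.inf]])
        fit = lsq_linear(X_reg, y_reg, bounds=(lower, np.inf))
        theta, b = fit.x[:d], fit.x[d]
    return theta, b
```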
6 EXPERIMENTS
We evaluate the performance of the model for visual servoing in a simulated environment. The simulated quadcopter is governed by rigid body dynamics. The robot has 4 degrees of freedom, corresponding to translation along three axes and yaw angle. This simulation is inspired by tasks in which an autonomous quadcopter flies above a city, with the goal of following some target object (e.g., a car).
6.1 LEARNING FEATURE DYNAMICS AND WEIGHTS WITH FQI
The dynamics for each of the features were trained using a dataset of 10000 samples (corresponding to 100 trajectories) with ADAM (Kingma & Ba, 2014). A single dynamics model was learned for each feature representation for all the training cars (Figure 3). This training set was generated by executing a hand-coded policy that navigates the quadcopter around a car for 100 time steps per trajectory, while the car moves around the city.
We used the proposed FQI algorithm to learn the weightings of the features and control regularizer. At every sampling iteration, the current policy was executed with Gaussian noise to gather data from 10 trajectories. All the trajectories in our experiments were up to 100 time steps long. The immediate cost received by the agent encodes the error of the target in image coordinates (details in Appendix B). Then, the parameters were iteratively updated by running K = 10 iterations of FQI. We ran the overall algorithm for only S = 2 sampling iterations and chose the parameters that achieved the best performance on 10 validation trajectories. These validation trajectories were obtained by randomly choosing 10 cars from the set of training cars and randomly sampling initial states, and executing the policy with the parameters of the current iteration. All the experiments share the same set of validation trajectories.
(Table omitted; columns: Feature Dynamics, Observations from Test Executions, Cost.)
6.2 COMPARISON OF FEATURE REPRESENTATIONS FOR SERVOING
We compare the servoing performance for various feature dynamics models, where the weights are optimized with FQI. We execute the learned policies on 100 test trajectories and report the average cost of the trajectory rollouts on Figure 5. The cost of a single trajectory is the (undiscounted) sum of costs ct. We test the policies with cars that were seen during training as well as with a set of novel cars (Figure 4), to evaluate the generalization of the learned dynamics and optimized policies.
The test trajectories were obtained by randomly sampling 100 cars (with replacement) from one of the two sets of cars, and randomly sampling initial states (which are different from the ones used for validation). For consistency and reproducibility, the same sampled cars and initial states were used across all the test experiments, and the same initial states were used for both sets of cars. These test trajectories were never used during the development of the algorithm or for choosing hyperparameters.
From these results, we notice that policies based on deeper VGG features, up to VGG conv4 3, generally achieve better performance. However, the deepest feature representation, VGG conv5 3, is not as suitable for approximating Q-values. We hypothesize that this feature might be too spatially invariant and it might lack the necessary spatial information to differentiate among different car positions. The policies based on pixel intensities and VGG conv5 3 features perform worse on the novel cars. However, VGG features conv1 2 through conv4 3 achieve some degree of generalization on the novel cars.
We show sample trajectories in Table 1. The policy based on pixel-intensities is susceptible to occlusions and distractor objects that appear in the target image or during executions. This is because distinguishing these occlusions and distractors from the cars cannot be done using just RGB features.
6.3 COMPARISON OF WEIGHTINGS FROM OTHER OPTIMIZATION METHODS
We compare our policy using conv4 3 feature dynamics, with weights optimized by FQI, against policies that use these dynamics but with either no feature weighting or weights optimized by other algorithms.
For the case of no weighting, we use a single feature weight w but optimize the relative weighting of the controls λ with the cross entropy method (CEM) (De Boer et al., 2005). For the other cases, we learn the weights with Trust Region Policy Optimization (TRPO) (Schulman et al., 2015). Since the servoing policy is the minimizer of a quadratic objective (Equation (3)), we represent the policy as a neural network that has a matrix inverse operation at the output. We train this network for 2 and 50 sampling iterations, and use a batch size of 4000 samples per iteration. All of these methods use the same feature representation as ours, the only difference being how the weights w and λ are chosen.
We report the average costs of these methods on the right of Figure 6. In 2 sampling iterations, the policy learned with TRPO does not improve by much, whereas our policy learned with FQI significantly outperforms the other policies. The policy learned with TRPO improves further in 50 iterations; however, the cost incurred by this policy is still about one and a half times the cost of our policy, despite using more than 100 times as many trajectories.
6.4 COMPARISON TO PRIOR METHODS
We also consider other methods that do not use the dynamics-based servoing policy that we propose. We report their average performance on the left of Figure 6.
For one of the prior methods, we train a convolutional neural network (CNN) policy end-to-end with TRPO. The policy is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU activations except for the output layer; the convolutional layers use
16 filters (4 × 4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each. The policy takes in raw pixel-intensities and outputs controls.
This policy achieves a modest performance (although still worse than the policies based on conv4 3 feature dynamics) but it requires significantly more training samples than any of the other learningbased methods. We also trained CNN policies that take in extracted VGG features (without any dynamics) as inputs, but they perform worse (see Table 4 in the Appendix). This suggests that given a policy parametrization that is expressive enough and given a large number of training samples, it is better to directly provide the raw pixel-intensity images to the policy instead of extracted VGG features. This is because VGG features are not optimized for this task and their representation loses some information that is useful for servoing.
The other two prior methods use classical image-based visual servoing (IBVS) (Chaumette & Hutchinson, 2006) with respect to Oriented FAST and Rotated BRIEF (ORB) feature points (Rublee et al., 2011), or feature points extracted from a visual tracker. For the former, the target features consist of only the ORB feature points that belong to the car, and this specifies that the car is relevant for the task. For the tracker-based method, we use the Continuous Convolution Operator Tracker (C-COT) (Danelljan et al., 2016) (the current state-of-the-art visual tracker) to get bounding boxes around the car and use the four corners of the box as the feature points for servoing. We provide the ground truth car’s bounding box of the first frame as an input to the C-COT tracker. For all of the IBVS methods, we provide the ground truth depth values of the feature points, which are used in the algorithm’s interaction matrix5.
The first method performs poorly, in part because ORB features are not discriminative enough for some of the cars, and the target feature points are sometimes matched to feature points that are not on the car. The tracker-based method achieves a relatively good performance. The gap in performance with respect to our method is in part due to the lack of car dynamics information in the IBVS model, whereas our method implicitly incorporates that in the learned feature dynamics. It is also worth noting that the tracker-based policy runs significantly slower than our method. The open-source implementation of the C-COT tracker6 runs at about 1Hz whereas our policy based on conv4 3 features runs at about 16Hz. Most of the computation time of our method is spent computing features from the VGG network, so there is room for speedups if we use a network that is less computationally demanding.
7 DISCUSSION
Manual design of visual features and dynamics models can limit the applicability of visual servoing approaches. We described an approach that combines learned visual features with learning predictive dynamics models and reinforcement learning to learn visual servoing mechanisms. Our experiments demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. For control we propose to learn Q-values, building on fitted Q-iteration, which at execution time allows for one-step lookahead calculations that optimize long term objectives. Our method can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.
ACKNOWLEDGEMENTS
This research was funded in part by the Army Research Office through the MAST program and the Berkeley DeepDrive consortium. Alex Lee was also supported by the NSF GRFP.
5The term interaction matrix, or feature Jacobian, is used in the visual servo literature to denote the Jacobian of the features with respect to the control.
6https://github.com/martin-danelljan/Continuous-ConvOp
A LINEARIZATION OF THE BILINEAR DYNAMICS
The optimization of Equation (3) can be solved efficiently by using a linearization of the dynamics,
f_c^{(l)}\left(y_{t,c}^{(l)}, u\right) = f_c^{(l)}\left(y_{t,c}^{(l)}, \bar{u}\right) + J_{t,c}^{(l)} (u - \bar{u}) = f_c^{(l)}\left(y_{t,c}^{(l)}, 0\right) + J_{t,c}^{(l)} u , \quad (7)

where J_{t,c}^{(l)} is the Jacobian matrix with partial derivatives \frac{\partial f_c^{(l)}}{\partial u}\left(y_{t,c}^{(l)}, \bar{u}\right) and \bar{u} is the linearization point. Since the bilinear dynamics are linear with respect to the controls, this linearization is exact and the Jacobian matrix does not depend on \bar{u}. Without loss of generality, we set \bar{u} = 0.
Furthermore, the bilinear dynamics allows the Jacobian matrix to be computed efficiently by simply doing a forward pass through the model. For the locally connected bilinear dynamics of Equation (2), the j-th column of the Jacobian matrix is given by

J_{t,c,j}^{(l)} = \frac{\partial f_c^{(l)}}{\partial u_j}\left(y_{t,c}^{(l)}, 0\right) = W_{c,j}^{(l)} * y_{t,c}^{(l)} + B_{c,j}^{(l)} . \quad (8)
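Putting Equations (3), (7) and (8) together, the servoing control can be computed by solving a small linear system, since the weighted objective is quadratic in u under the exact linearization. The sketch below assumes the Jacobians, the goal-minus-prediction errors, and the normalized weights have already been computed (with each channel's features flattened into a vector); it is an illustration of the closed-form solve, not the released implementation.

```python
import numpy as np

def servoing_control(jacobians, errors, weights, lam):
    """Minimize sum_i w_i ||e_i - J_i u||^2 + u^T diag(lam) u in closed form.
    jacobians: list of (P, U) matrices J_i (one per channel and scale).
    errors:    list of (P,) vectors e_i = y*_i - f_i(y_t, 0).
    weights:   list of scalars w_c^(l) / |y_.,c^(l)|.
    lam:       (U,) control regularization weights lambda_j."""
    u_dim = jacobians[0].shape[1]
    A = np.diag(np.asarray(lam, dtype=float))
    rhs = np.zeros(u_dim)
    for J, e, w in zip(jacobians, errors, weights):
        A += w * (J.T @ J)
        rhs += w * (J.T @ e)
    # Normal equations: (sum_i w_i J_i^T J_i + diag(lam)) u = sum_i w_i J_i^T e_i.
    return np.linalg.solve(A, rhs)
```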
B SERVOING COST FUNCTION FOR REINFORCEMENT LEARNING
The goal of reinforcement learning is to find a policy that maximizes the expected sum of rewards, or equivalently, a policy that minimizes the expected sum of costs. The cost should be one that quantifies progress towards the goal. We define the cost function in terms of the position of the target object (in the camera’s local frame) after the action has been taken,
c(s_t, u_t, s_{t+1}) =
\begin{cases}
\sqrt{\left(\frac{p^x_{t+1}}{p^z_{t+1}}\right)^2 + \left(\frac{p^y_{t+1}}{p^z_{t+1}}\right)^2 + \left(\frac{1}{p^z_{t+1}} - \frac{1}{p^z_*}\right)^2}, & \text{if } \|p_{t+1}\|_2 \ge \tau \text{ and car in FOV} \\
(T - t + 1)\, c(\cdot, \cdot, s_t), & \text{otherwise,}
\end{cases} \quad (9)

where T is the maximum trajectory length. The episode terminates early if the camera is too close to the car (less than a distance \tau) or the car's origin is outside the camera's field of view (FOV). The car's position at time t is p_t = (p^x_t, p^y_t, p^z_t) and the car's target position is p_* = (0, 0, p^z_*), both in the camera's local frame (z-direction is forward). Our experiments use T = 100 and \tau = 4 m.
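For concreteness, the cost of Equation (9) can be written as the short function below, assuming the car's position in the camera frame before and after the action and the desired depth p^z_* are available from the simulator; the field-of-view check is passed in as a boolean.

```python
import numpy as np

def position_cost(p, pz_target):
    """Per-step cost term of Eq. (9) for a car position p = (px, py, pz) in the camera frame."""
    px, py, pz = p
    return np.sqrt((px / pz) ** 2 + (py / pz) ** 2 + (1.0 / pz - 1.0 / pz_target) ** 2)

def servoing_cost(p_curr, p_next, pz_target, t, T=100, tau=4.0, car_in_fov=True):
    """Cost c(s_t, u_t, s_{t+1}) from Eq. (9). On early termination, the remaining
    steps are charged at the cost of the current state s_t."""
    if np.linalg.norm(p_next) >= tau and car_in_fov:
        return position_cost(p_next, pz_target)
    return (T - t + 1) * position_cost(p_curr, pz_target)
```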
C EXPERIMENT DETAILS
C.1 TASK SETUP
The camera is attached to the vehicle slightly in front of the robot’s origin and facing down at an angle of π/6 rad, similar to a commercial quadcopter drone. The robot has 4 degrees of freedom, corresponding to translation and yaw angle. Pitch and roll are held fixed.
In our simulations, the quadcopter follows a car that drives at 1m s−1 along city roads during training and testing. The quadcopter’s speed is limited to within 10m s−1 for each translational degree of freedom, and its angular speed is limited to within π/2 rad s−1. The simulator runs at 10Hz. For each trajectory, a car is chosen randomly from a set of cars, and placed randomly on one of the roads. The quadcopter is initialized right behind the car, in the desired relative position for following. The image observed at the beginning of the trajectory is used as the goal observation.
C.2 LEARNING FEATURE DYNAMICS
The dynamics of all the features were trained using a dataset of 10000 triplets (x_t, u_t, x_{t+1}). The observations are 128 × 128 RGB images and the actions are 4-dimensional vectors of real numbers encoding the linear and angular (yaw) velocities. The actions are normalized to between −1 and 1. The training set was generated from 100 trajectories of a quadcopter following a car around the city with some randomness. Each trajectory was 100 steps long. Only 5 training cars were shown during learning. The generation process of each trajectory is as follows: First, a car is chosen at random from the set of available cars and it is randomly placed on one of the roads. Then, the quadcopter is placed at some random position relative to the car's horizontal pose, which is the car's pose rotated so that its vertical axis matches that of the world. This quadcopter position is uniformly sampled in cylindrical coordinates relative to the car's horizontal pose, with heights in the interval 12 m to 18 m, and azimuthal angles in the interval −π/2 rad to π/2 rad (where the origin of the azimuthal angle is the back of the car). The radii and yaw angles are initialized so that the car is in the middle of the image. At every time step, the robot takes an action that moves it towards a target pose, with some additive Gaussian noise (σ = 0.2). The target pose is sampled according to the same procedure as the initial pose, and it is sampled once at the beginning of each trajectory.
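The initial-pose sampling just described can be sketched as below. The height and azimuth ranges come from the text; the fixed radius and the coordinate convention are assumptions, since in the paper the radius and yaw are chosen so that the car is centered in the image, which requires the camera geometry and is abstracted away here.

```python
import numpy as np

def sample_relative_position(height_range=(12.0, 18.0),
                             azimuth_range=(-np.pi / 2, np.pi / 2),
                             radius=15.0):
    """Sample the quadcopter's position relative to the car's horizontal pose in
    cylindrical coordinates. The fixed radius is a placeholder; the azimuth origin
    is taken to be directly behind the car."""
    height = np.random.uniform(*height_range)
    azimuth = np.random.uniform(*azimuth_range)
    x = radius * np.sin(azimuth)    # lateral offset
    y = -radius * np.cos(azimuth)   # behind the car at azimuth 0 (assumed convention)
    return np.array([x, y, height])
```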
We try the fully and locally connected dynamics for pixel intensities to better understand the performance trade-offs when assuming locally connected dynamics. We do not use the fully connected dynamics for the semantic features since they are too high-dimensional for that model to fit in memory. The dynamics models were trained with ADAM using 10000 iterations, a batch size of 32, a learning rate of 0.001, momentum terms of 0.9 and 0.999, and a weight decay of 0.0005.
C.3 LEARNING WEIGHTING OF FEATURE DYNAMICS WITH REINFORCEMENT LEARNING
We use CEM, TRPO and FQI to learn the feature weighting and report the performance of the learned policies in Table 2. We use the cost function described in Appendix B, a discount factor of γ = 0.9, and trajectories of up to 100 steps. All the algorithms used initial weights of w = 1 and λ = 1, and a Gaussian exploration policy with the current policy as the mean and a fixed standard deviation σexploration = 0.2.
For the case of unweighted features, we use CEM to optimize for a single weight w and for the weights λ. For the case of weighted features, we use CEM to optimize for the full space of parameters, but we only do that for the pixel feature dynamics since CEM does not scale to high-dimensional problems, which is the case for all the VGG features. Each iteration of CEM performs a certain number of noisy evaluations and selects the top 20% for the elite set. The number of noisy evaluations per iteration was 3 times the number of parameters being optimized. Each noisy evaluation used the average sum of costs of 10 trajectory rollouts as its evaluation metric. The parameters of the last iteration were used for the final policy. The policies with unweighted feature dynamics and the policies with pixel feature dynamics were trained for 10 and 25 iterations, respectively.

(Table omitted; columns: Feature Dynamics, Observations from Test Executions, Cost.)
We use TRPO to optimize for the full space of parameters for each of the feature dynamics we consider in this work. We use a Gaussian policy, where the mean is the servoing policy of Equation (3) and the standard deviation is fixed to σexploration = 0.2 (i.e. we do not learn the standard deviation). Since the parameters are constrained to be non-negative, we parametrize the TRPO policies with √ w and √ λ. We use a Gaussian baseline, where the mean is a 5-layer CNN, consisting of 2 convolutional and 3 fully connected layers, and a standard deviation that is initialized to 1. The convolutional layers use 16 filters (4 × 4, stride 2) each, the first 2 fully-connected layers use 32 hidden units each, and all the layers except for the last one use ReLU activations. The input of the baseline network are the features (either pixel intensities or VGG features) corresponding to the feature dynamics being used. The parameters of the last iteration were used for the final policy. The policies are trained with TRPO for 50 iterations, a batch size of 4000 samples per iteration, and a step size of 0.01.
We use our proposed FQI algorithm to optimize for the weights w,λ, and surpass the other methods in terms of performance on test executions, sample efficiency, and overall computation efficiency7. The updates of the inner iteration of our algorithm are computationally efficient; since the data is fixed for a given sampling iteration, we can precompute φ (st,ut) and certain terms of φ (st+1, ·). The parameters that achieved the best performance on 10 validation trajectories were used for the final policy. The policies are trained with FQI for S = 2 sampling iterations, a batch size of 10 trajectories per sampling iteration, K = 10 inner iterations per sampling iteration, and a regularization coefficient of ν = 0.1. We found that regularization of the parameters was important for the algorithm to converge. We show sample trajectories of the resulting policies in Table 3.
The FQI algorithm often achieved most of its performance gain after the first iteration. We ran additional sampling iterations of FQI to see if the policies improved further. For each iteration, we evaluated the performance of the policies on 10 validation trajectories. We did the same for the policies trained with TRPO, and we compare the learning curves of both methods in Figure 7.
7Our policy based on conv4 3 features takes around 650 s to run K = 10 iterations of FQI for a given batch size of 10 training trajectories.
C.4 LEARNING END-TO-END SERVOING POLICIES WITH TRPO
We use TRPO to train end-to-end servoing policies for various observation modalities and report the performance of the learned policies in Table 4. The policies are trained with the set of training cars, and tested on both this set and on the set of novel cars. The observation modalities that we consider are ground truth car positions (relative to the quadcopter), images of pixel intensities from the quadcopter’s camera, and VGG features extracted from those images. Unlike our method and the other experiments, no feature dynamics are explicitly learned for these experiments.
We use a Gaussian policy, where the mean is either a multi-layer perceptron (MLP) or a convolutional neural net (CNN), and the standard deviation is initialized to 1. We also use a Gaussian baseline, which is parametrized just as the corresponding Gaussian policy (but no parameters are shared between the policy and the baseline). For the policy that takes in car positions, the mean is parametrized as a 3-layer MLP, with tanh non-linearities except for the output layer; the first 2 fully connected layers use 32 hidden units each. For the other policies, each of their means is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU non-linearities except for the output layer; the convolutional layers use 16 filters (4×4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each.
The CNN policies would often not converge for several randomly initialized parameters. Thus, at the beginning of training, we tried multiple random seeds until we got a policy that achieved a relatively low cost on validation trajectories, and used the best initialization for training. The MLP policy did not have this problem, so we did not have to try multiple random initializations for it. All the policies are trained with a batch size of 4000 samples, 500 iterations, and a step size of 0.01. The parameters of the last iteration were used for the final policy.
C.5 CLASSICAL IMAGE-BASED VISUAL SERVOING
Traditional visual servoing techniques (Feddema & Mitchell, 1989; Weiss et al., 1987) use the image-plane coordinates of a set of points for control. For comparison to our method, we evaluate the servoing performance of feature points derived from bounding boxes and keypoints derived from hand-engineered features, and report the costs of test executions on Table 5.
We use bounding boxes from the C-COT tracker (Danelljan et al., 2016) (the current state-of-the-art visual tracker) and ground truth bounding boxes from the simulator. The latter is defined as the box that tightly fits around the visible portions of the car. We provide the ground truth bounding box of the first frame to the C-COT tracker to indicate that we want to track the car. We use the four corners of the box as the feature points for servoing to take into account the position and scale of the car in image coordinates.
We provide the ground truth depth values of the feature points for the interaction matrices. In classical image-based visual servoing, the control law involves the interaction matrix (also known as feature Jacobian), which is the Jacobian of the points in image space with respect to the camera’s control (see Chaumette & Hutchinson (2006) for details). The analytical feature Jacobian used in IBVS assumes that the target points are static in the world frame. This is not true for a moving car, so we consider a variant where the feature Jacobian incorporates the ground truth dynamics of the car. This amounts to adding a non-constant translation bias to the output of the dynamics function, where the translation is the displacement due to the car’s movement of the 3-dimensional point in the camera’s reference frame. Note that this is still not exactly equivalent to having the car being static since the roads have different slopes but the pitch and roll of the quadcopter is constrained to be fixed.
For the hand-crafted features, we consider SIFT (Lowe, 2004), SURF (Bay et al., 2006) and ORB (Rublee et al., 2011) keypoints. We filter out the keypoints of the first frame that do not belong to the car and use the remaining ones as the target keypoints. However, we use all the keypoints for the subsequent observations.
The servoing policies based on bounding box features achieve low cost, and even lower ones if ground truth car dynamics is used. However, servoing with respect to hand-crafted feature points is significantly worse than the other methods. This is, in part, because the feature extraction and matching process introduces compounding errors. Similar results were found by Collewet & Marchand (2011), who proposed photometric visual servoing (i.e. servoing with respect to pixel intensities) and showed that it outperforms, by an order of magnitude, classical visual servoing that uses SURF features.
C.6 CLASSICAL POSITION-BASED VISUAL SERVOING
Position-based visual servoing (PBVS) techniques use poses of a target object for control (see Chaumette & Hutchinson (2006) for details). We evaluate the servoing performance of a few variants, and report the costs of test executions on Table 6.
Similar to our IBVS experiments, we consider a variant that uses the car pose of the next time step as a way to incorporate the ground truth car dynamics into the interaction matrix. Since the cost function is invariant to the orientation of the car, we also consider a variant where the policy only minimizes the translational part of the pose error.
These servoing policies, which use ground truth car poses, outperform all the other policies based on images. In addition, the performance is more than two orders of magnitude better if ground truth car dynamics is used. | 1. What is the main contribution of the paper regarding visual servoing?
2. What are the strengths of the proposed method, particularly in its mathematical analysis and experimental demonstration?
3. How does the reviewer assess the novelty and significance of the paper's contributions, especially in representing learning?
4. Are there any concerns or limitations regarding the approach, such as the use of pre-trained visual features and the policy representation?
5. How does the reviewer perceive the relevance and alignment of the paper with the conference's focus, ICLR? | Review | Review
This paper investigates the benefits of visual servoing using a learned
visual representation. The authors propose to first learn an action-conditional
bilinear model of the visual features (obtained from a pre-trained VGG net) from
which a policy can be derived using a linearization of the dynamics. A multi-scale,
multi-channel and locally-connected variant of the bilinear model is presented.
Since the bilinear model only predicts the dynamics one step ahead, the paper
proposes a weighted objective which incorporates the long-term values of the
current policy. The evaluation problem is addressed using a fitted-value approach.
The paper is well written, mathematically solid, and conceptually exhaustive.
The experiments also demonstrate the benefits of using a value-weighted objective
and is an important contribution of this paper. This paper also seems to be the
first to outline a trust-region fitted-q iteration algorithm. The use of
pre-trained visual features is also shown to help, empirically, for generalization.
Overall, I recommend this paper as it would benefit many researchers in robotics.
However, in the context of this conference, I find the contribution specifically on
the "representation" problem to be limited. It shows that a pre-trained VGG
representation is useful, but does not consider learning it end-to-end. This is not
to say that it should be end-to-end, but proportionally speaking, the paper
spends more time on the control problem than the representation learning one.
Also, the policy representation is fixed and the values are approximated
in linear form using problem-specific features. This doesn't make the paper
less valuable, but perhaps less aligned with what I think ICLR should be about. |
ICLR | Title
Learning Visual Servoing with Deep Features and Fitted Q-Iteration
Abstract
Visual servoing involves choosing actions that move a robot in response to observations from a camera, in order to reach a goal configuration in the world. Standard visual servoing approaches typically rely on manually designed features and analytical dynamics models, which limits their generalization capability and often requires extensive application-specific feature and model engineering. In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of data of the target in question, to enable quick adaptation to new targets. Our approach is based on servoing the camera in the space of learned visual features, rather than image pixels or manually-designed keypoints. We demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. A key component of our approach is to use a sample-efficient fitted Q-iteration algorithm to learn which features are best suited for the task at hand. We show that we can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms. Videos are available at http://rll.berkeley.edu/visual_servoing.
1 INTRODUCTION
Visual servoing is a classic problem in robotics that requires moving a camera or robot to match a target configuration of visual features or image intensities. Many robot control tasks that combine perception and action can be posed as visual servoing, including navigation (DeSouza & Kak, 2002; Chen et al., 2006), where a robot must follow a desired path; manipulation, where the robot must servo an end-effector or a camera to a target object to grasp or manipulate it (Malis et al., 1999; Corke, 1993; Hashimoto, 1993; Hosoda & Asada, 1994; Kragic & Christensen, 2002); and various other problems, as surveyed in Hutchinson et al. (1996). Most visual servoing methods assume access to good geometric image features (Chaumette & Hutchinson, 2006; Collewet et al., 2008; Caron et al., 2013) and require knowledge of their dynamics, which are typically obtained from domain knowledge about the system. Using such hand-designed features and models prevents exploitation of statistical regularities in the world, and requires manual engineering for each new system.
In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of
data of the target in question, so as to be easy and quick to adapt to new targets. Successful target following requires the visual servo to tolerate moderate variation in the appearance of the target, including changes in viewpoint and lighting, as well as occlusions. Learning invariances to all such distractors typically requires a considerable amount of data. However, since a visual servo is typically specific to a particular task, it is desirable to be able to learn the servoing mechanism very quickly, using a minimum amount of data. Prior work has shown that the features learned by large convolutional neural networks on large image datasets, such as ImageNet classification (Deng et al., 2009), tend to be useful for a wide range of other visual tasks (Donahue et al., 2014). We explore whether the usefulness of such features extends to visual servoing.
To answer this question, we propose a visual servoing method that uses pre-trained features, in our case obtained from the VGG network (Simonyan & Zisserman, 2014) trained for ImageNet classification. Besides the visual features, our method uses an estimate of the feature dynamics in visual space by means of a bilinear model. This allows the visual servo to predict how motion of the robot’s camera will affect the perceived feature values. Unfortunately, servoing directly on the high-dimensional features of a pre-trained network is insufficient by itself to impart robustness on the servo: the visual servo must not only be robust to moderate visual variation, but it must also be able to pick out the target of interest (such as a car that the robot is tasked with following) from irrelevant distractor objects. To that end, we propose a sample-efficient fitted Q-iteration procedure that automatically chooses weights for the most relevant visual features. Crucially, the actual servoing mechanism in our approach is extremely simple, and simply seeks to minimize the Euclidean distance between the weighted feature values at the next time step and the target. The form of the servoing policy in our approach leads to an analytic and tractable linear approximator for the Qfunction, which leads to a computationally efficient fitted Q-iteration algorithm. We show that we can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.
The environment for the synthetic car following benchmark is available online as the package CitySim3D1, and the code to reproduce our method and experiments is also available online2. Supplementary videos of all the test executions are available on the project’s website3.
2 RELATED WORK
Visual servoing is typically (but not always) performed with calibrated cameras and carefully designed visual features. Ideal features for servoing should be stable and discriminative, and much of the work on visual servoing focuses on designing stable and convergent controllers under the assumption that such features are available (Espiau et al., 2002; Mohta et al., 2014; Wilson et al., 1996). Some visual servoing methods do not require camera calibration (Jagersand et al., 1997; Yoshimi & Allen, 1994), and some recent methods operate directly on image intensities (Caron et al., 2013), but generally do not use learning to exploit statistical regularities in the world and improve robustness to distractors.
Learning is a relatively recent addition to the repertoire of visual servoing tools. Several methods have been proposed that apply ideas from reinforcement learning to directly acquire visual servoing controllers (Lampe & Riedmiller, 2013; Sadeghzadeh et al., 2015). However, such methods have not been demonstrated under extensive visual variation, and do not make use of state-of-the-art convolutional neural network visual features. Though more standard deep reinforcement learning methods (Lange et al., 2012; Mnih et al., 2013; Levine et al., 2016; Lillicrap et al., 2015) could in principle be applied to directly learn visual servoing policies, such methods tend to require large numbers of samples to learn task-specific behaviors, making them poorly suited for a flexible visual servoing algorithm that can be quickly repurposed to new tasks (e.g. to following a different object).
Instead, we propose an approach that combines learning of predictive models with pre-trained visual features. We use visual features trained for ImageNet (Deng et al., 2009) classification, though any pre-trained features could in principle be applicable for our method, so long as they provide a suitable degree of invariance to visual distractors such as lighting, occlusion, and changes in viewpoint. Using pre-trained features allows us to avoid the need for large amounts of experience, but we must still learn the policy itself. To further accelerate this process, we first acquire a predictive model that allows the visual servo to determine how the visual features will change in response to an action. General video prediction is an active research area, with a number of complex but data-hungry models proposed in recent years (Oh et al., 2015; Watter et al., 2015; Mathieu et al., 2015; Xue et al., 2016; Lotter et al., 2016; Jia et al., 2016; Walker et al., 2016; Vondrick et al., 2016).
However, we observe that convolutional response maps can be interpreted as images and, under mild assumptions, the dynamics of image pixels during camera motion can be well approximated by means of a bilinear model (Censi & Murray, 2015). We therefore train a relatively simple bilinear model for short-term prediction of visual feature dynamics, which we can use inside a very simple visual servo that seeks to minimize the error between the next predicted feature values and a target image.
Unfortunately, simply training predictive models on top of pre-trained features is insufficient to produce an effective visual servo, since it weights the errors of distractor objects the same amount as the object of interest. We address this challenge by using an efficient Q-iteration algorithm to train the weights on the features to maximize the servo’s long-horizon reward. This method draws on ideas from regularized fitted Q-iteration (Gordon, 1995; Ernst et al., 2005; Farahmand et al., 2009) and neural fitted Q-iteration (Riedmiller, 2005) to develop a sample-efficient algorithm that can directly estimate the expected return of the visual servo without the use of any additional function approximator.
3 PROBLEM STATEMENT
Let $y_t$ be a featurization of the camera’s observations $x_t$ and let $y_*$ be some given goal feature map. For the purposes of this work, we define visual servoing as the problem of choosing controls $u_t$ for a fixed number of discrete time steps $t$ so as to minimize the error $\|y_* - y_t\|$. We use a relatively simple gradient-based servoing policy that uses one-step feature dynamics, $f : (y_t, u_t) \mapsto y_{t+1}$. The policy chooses the control that minimizes the distance between the goal feature map and the one-step prediction:
$$\pi(x_t, x_*) = \arg\min_{u} \left\| y_* - f(y_t, u) \right\|^2. \qquad (1)$$
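As a concrete illustration, the sketch below evaluates this greedy policy over a finite set of candidate controls. The `featurize` and `predict_features` callables and the discretization of the control space are placeholders of our own; the actual method exploits the linearity of the dynamics in the controls to solve this minimization in closed form (Appendix A).

```python
# Minimal sketch of the greedy one-step servoing policy of Equation (1),
# assuming hypothetical `featurize` (y = h(x)) and `predict_features`
# (one-step dynamics f) callables and a finite set of candidate controls.
import numpy as np

def greedy_servoing_action(x_t, x_goal, candidate_controls, featurize, predict_features):
    """Return the candidate control that minimizes the predicted feature error."""
    y_t = featurize(x_t)        # current feature map y_t
    y_goal = featurize(x_goal)  # goal feature map y_*
    errors = [np.sum((y_goal - predict_features(y_t, u)) ** 2) for u in candidate_controls]
    return candidate_controls[int(np.argmin(errors))]
```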
Learning this policy amounts to learning the robot dynamics and the distance metric ‖·‖. To learn the robot dynamics, we assume that we have access to a dataset of paired observations and controls xt,ut,xt+1. This data is relatively easy to obtain as it involves collecting a stream of the robot’s observations and controls. We use this dataset to learn a general visual dynamics model that can be used for any task.
To learn the distance metric, we assume that the robot interacts with the world and collects tuples of the form xt,ut, ct,xt+1,x∗. At every time step during learning, the robot observes xt and takes action ut. After the transition, the robot observes xt+1 and receives an immediate cost ct. This cost is task-specific and it quantifies how good that transition was in order to achieve the goal. At the beginning of each trajectory, the robot is given a goal observation x∗, and it is the same throughout the trajectory. We define the goal feature map to be the featurization of the goal observation. We learn the distance metric using reinforcement learning and we model the environment as a Markov Decision Process (MDP). The state of the MDP is the tuple of the current observation and the episode’s target observation, st = (xt,x∗), the action ut is the discrete-time continuous control of the robot, and the cost function maps the states and action (st,ut, st+1) to a scalar cost ct.
4 VISUAL FEATURES DYNAMICS
We learn a multiscale bilinear model to predict the visual features of the next frame given the current image from the robot’s camera and the action of the robot. An overview of the model is shown in Figure 1. The learned dynamics can then be used for visual servoing as described in Section 5.
4.1 VISUAL FEATURES
We consider both pixels and semantic features for the visual representation. We define the function h to relate the image x and its feature y = h(x). Our semantic features are derived from the VGG-16 network (Simonyan & Zisserman, 2014), which is a convolutional neural network trained for large-scale image recognition on the ImageNet dataset (Deng et al., 2009). Since spatial invariance is undesirable for servoing, we remove some of the max-pooling layers and replace the convolutions that followed them with dilated convolutions, as done by Yu & Koltun (2015). The modified VGG network is shown in Figure 2. We use the model weights of the original VGG-16 network, which are publicly available as a Caffe model (Jia et al., 2014). The features that we use are the outputs of some of the intermediate convolutional layers, which have been downsampled to a 32 × 32 resolution (if necessary) and standardized with respect to our training set. We use multiple resolutions of these features for servoing. The idea is that the high-resolution representations have detailed local information about the scene, while the low-resolution representations have more global information available through the image-space gradients. The features at level $l$ of the multiscale pyramid are denoted as $y^{(l)}$. The features at each level are obtained from the features below through a downsampling operator $d(y^{(l-1)}) = y^{(l)}$ that cuts the resolution in half.
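The following sketch illustrates how such a multiscale pyramid can be assembled. The 2 × 2 average-pooling downsampler and the array shapes are our own illustrative choices; the text only specifies that each level halves the resolution.

```python
# Sketch of the multiscale feature pyramid y^(0), ..., y^(L), where each level
# is obtained from the previous one by a downsampling operator d(.) that
# halves the spatial resolution (here: 2x2 average pooling, an assumption).
import numpy as np

def downsample(y):
    """Halve the spatial resolution of a (C, H, W) feature map."""
    c, h, w = y.shape
    return y.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def feature_pyramid(y0, num_levels):
    pyramid = [y0]
    for _ in range(num_levels):
        pyramid.append(downsample(pyramid[-1]))
    return pyramid

# Example: a 512-channel feature map at 32x32 resolution, three pyramid levels.
y0 = np.random.randn(512, 32, 32)
levels = feature_pyramid(y0, num_levels=2)  # resolutions 32x32, 16x16, 8x8
```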
4.2 BILINEAR DYNAMICS
The features $y_t^{(l)}$ are used to predict the corresponding level’s features $y_{t+1}^{(l)}$ at the next time step, conditioned on the action $u_t$, according to a prediction function $f^{(l)}(y_t^{(l)}, u_t) = \hat{y}_{t+1}^{(l)}$. We use a bilinear model to represent these dynamics, motivated by prior work (Censi & Murray, 2015). In order to servo at different scales, we learn a bilinear dynamics model at each scale. We consider two variants of the bilinear model from previous work in order to reduce the number of model parameters.
The first variant uses fully connected dynamics as in previous work but models the dynamics of each channel independently. When semantic features are used, this model interprets the feature maps as
being abstract images with spatial information within a channel and different entities or factors of variation across different channels. This could potentially allow the model to handle moving objects, occlusions, and other complex phenomena.
The fully connected bilinear model is quite large, so we propose a bilinear dynamics model that enforces sparsity in the parameters. In particular, we constrain the prediction to depend only on the features that are in its local spatial neighborhood, leading to the following locally connected bilinear model:
$$\hat{y}^{(l)}_{t+1,c} = y^{(l)}_{t,c} + \sum_j \left( W^{(l)}_{c,j} * y^{(l)}_{t,c} + B^{(l)}_{c,j} \right) u_{t,j} + \left( W^{(l)}_{c,0} * y^{(l)}_{t,c} + B^{(l)}_{c,0} \right). \qquad (2)$$
The parameters are the 4-dimensional tensor $W^{(l)}_{c,j}$ and the matrix $B^{(l)}_{c,j}$ for each channel $c$, scale $l$, and control coordinate $j$. The last two terms are biases that allow the model to capture action-independent visual changes, such as moving objects. The $*$ is the locally connected operator, which is like a convolution but with untied filter weights (defined in footnote 4 below).
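A minimal sketch of this locally connected bilinear prediction for a single channel is given below. The 3 × 3 neighborhood, tensor shapes, and random parameter values are illustrative assumptions; the locally connected operator follows the definition in footnote 4.

```python
# Sketch of the locally connected bilinear dynamics of Equation (2) for one
# channel c at one scale. Shapes are illustrative: an 8x8 feature map, a
# 4-dimensional control, and 3x3 untied neighborhoods.
import numpy as np

def locally_connected(W, y, nf=3):
    """(W * y)[kh, kw]: weighted sum over the nf x nf neighborhood with untied weights."""
    h, w = y.shape
    r = nf // 2
    y_pad = np.pad(y, r)
    out = np.zeros_like(y)
    for kh in range(h):
        for kw in range(w):
            out[kh, kw] = np.sum(W[kh, kw] * y_pad[kh:kh + nf, kw:kw + nf])
    return out

def predict_channel(y_c, u, W, B):
    """y_{t+1,c} = y_{t,c} + sum_j (W_{c,j} * y + B_{c,j}) u_j + (W_{c,0} * y + B_{c,0})."""
    y_next = y_c + locally_connected(W[0], y_c) + B[0]  # action-independent term (index 0)
    for j, u_j in enumerate(u, start=1):
        y_next = y_next + (locally_connected(W[j], y_c) + B[j]) * u_j
    return y_next

h = w = 8
u = np.array([0.1, 0.0, -0.2, 0.05])
W = 0.01 * np.random.randn(len(u) + 1, h, w, 3, 3)  # index 0 holds the bias term's weights
B = 0.01 * np.random.randn(len(u) + 1, h, w)
y_pred = predict_channel(np.random.randn(h, w), u, W, B)
```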
4.3 TRAINING VISUAL FEATURE DYNAMICS MODELS
The loss that we use for training the bilinear dynamics is the sum of the losses of the predicted features at each level, $\sum_{l=0}^{L} \ell^{(l)}$, where the loss for each level $l$ is the squared $\ell_2$ norm between the predicted features and the actual features of that level, $\ell^{(l)} = \|y^{(l)}_{t+1} - \hat{y}^{(l)}_{t+1}\|^2$.
We optimize for the dynamics while keeping the feature representation fixed. This is a supervised learning problem, which we solve with ADAM (Kingma & Ba, 2014). The training set, consisting of triplets xt,ut,xt+1, was obtained by executing a hand-coded policy that moves the robot around the target with some Gaussian noise.
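For clarity, the per-sample training objective can be sketched as follows; the per-level prediction functions are assumed to be given as callables.

```python
# Sketch of the multiscale dynamics training loss: the sum over pyramid levels
# of the squared L2 error between predicted and actual next-step features.
import numpy as np

def dynamics_loss(y_levels_t, y_levels_tp1, u, predict_fns):
    loss = 0.0
    for y_t, y_tp1, f_l in zip(y_levels_t, y_levels_tp1, predict_fns):
        y_pred = f_l(y_t, u)                   # predicted features at this level
        loss += np.sum((y_tp1 - y_pred) ** 2)  # squared L2 norm for this level
    return loss
```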
5 LEARNING VISUAL SERVOING WITH REINFORCEMENT LEARNING
We propose to use a multiscale representation of semantic features for servoing. The challenge when introducing multiple scales and multi-channel feature maps for servoing is that the features do not necessarily agree on the optimal action when the goal is unattainable or the robot is far away from the goal. To do well, it is important to use a good weighting of each of the terms in the objective. Since there are many weights, it would be impractically time-consuming to set them by hand, so we resort to learning. We want the weighted one-step lookahead objective to encourage good long-term behavior, so we want this objective to correspond to the state-action value function Q. We therefore propose a method for learning the weights based on fitted Q-iteration.
5.1 SERVOING WITH WEIGHTED MULTISCALE FEATURES
Instead of attempting to build an accurate predictive model for multi-step planning, we use the simple greedy servoing method in Equation (1), where we minimize the error between the target and predicted features for all the scales. Typically, only a few objects in the scene are relevant, so the errors of some channels should be penalized more than others. Similarly, features at different scales might need to be weighted differently. Thus, we use a weighting $w^{(l)}_c \ge 0$ per channel $c$ and scale $l$:
$$\pi(x_t, x_*) = \arg\min_{u} \sum_c \sum_{l=0}^{L} \frac{w^{(l)}_c}{|y^{(l)}_{\cdot,c}|} \left\| y^{(l)}_{*,c} - f^{(l)}_c\!\left(y^{(l)}_{t,c}, u\right) \right\|^2_2 + \sum_j \lambda_j u_j^2, \qquad (3)$$
where $|\cdot|$ denotes the cardinality operator and the constant $1/|y^{(l)}_{\cdot,c}|$ normalizes the feature errors by their spatial resolution. We also use a separate weight $\lambda_j$ for each control coordinate $j$. This optimization can be solved efficiently since the dynamics is linear in the controls (see Appendix A).
4. The locally connected operator, with a local neighborhood of $n_f \times n_f$ (analogous to the filter size in convolutions), is defined as:
$$(W * y)_{k_h, k_w} = \sum_{i_h = k_h - \lfloor n_f/2 \rfloor}^{k_h + \lfloor n_f/2 \rfloor} \;\sum_{i_w = k_w - \lfloor n_f/2 \rfloor}^{k_w + \lfloor n_f/2 \rfloor} W_{k_h, k_w, i_h - k_h, i_w - k_w}\, y_{i_h, i_w}.$$
5.2 Q-FUNCTION APPROXIMATION FOR THE WEIGHTED SERVOING POLICY
We choose a Q-value function approximator that can represent the servoing objective such that the greedy policy with respect to the Q-values results in the policy of Equation (3). In particular, we use a function approximator that is linear in the weight parameters $\theta^\top = \begin{bmatrix} w^\top & \lambda^\top \end{bmatrix}$:
$$Q_{\theta,b}(s_t, u) = \phi(s_t, u)^\top \theta + b, \qquad \phi(s_t, u)^\top = \left[ \left[ \tfrac{1}{|y^{(l)}_{\cdot,c}|} \left\| y^{(l)}_{*,c} - f^{(l)}_c\!\left(y^{(l)}_{t,c}, u\right) \right\|^2_2 \right]^\top_{c,l} \;\; \left[ u_j^2 \right]^\top_j \right].$$
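The sketch below evaluates this linear Q-function for a single state-action pair; the precomputed normalized feature errors and the example dimensions are assumptions.

```python
# Sketch of the linear Q-function Q(s, u) = phi(s, u)^T theta + b, where phi
# stacks the normalized per-channel/per-scale feature errors for action u and
# the squared control coordinates, and theta = [w; lambda].
import numpy as np

def q_value(feature_errors, u, theta, b):
    """feature_errors: normalized errors ||y* - f(y_t, u)||^2 / |y| for each (c, l) term."""
    phi = np.concatenate([feature_errors, u ** 2])
    return phi @ theta + b

# Example: 3 channel/scale terms and a 4-dimensional control.
feature_errors = np.array([0.8, 0.3, 1.2])
u = np.array([0.1, 0.0, -0.2, 0.05])
theta = np.concatenate([np.ones(3), 0.1 * np.ones(4)])  # [w; lambda], non-negative
print(q_value(feature_errors, u, theta, b=0.0))
```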
We denote the state of the MDP as $s_t = (x_t, x_*)$ and add a bias $b$ to the Q-function. The servoing policy is then simply $\pi_\theta(s_t) = \arg\min_u Q_{\theta,b}(s_t, u)$. For reinforcement learning, we optimized for the weights $\theta$ but kept the feature representation and its dynamics fixed.
5.3 LEARNING THE Q-FUNCTION WITH FITTED Q-ITERATION
Reinforcement learning methods that learn a Q-function do so by minimizing the Bellman error:
$$\left\| Q(s_t, u_t) - \left( c_t + \gamma \min_u Q(s_{t+1}, u) \right) \right\|^2_2. \qquad (4)$$
In fitted Q-iteration, the agent iteratively gathers a dataset $\{s_t^{(i)}, u_t^{(i)}, c_t^{(i)}, s_{t+1}^{(i)}\}_{i=1}^{N}$ of $N$ samples according to an exploration policy, and then minimizes the Bellman error using this dataset. We use the term sampling iteration to refer to each iteration $j$ of this procedure. At the beginning of each sampling iteration, the current policy with added Gaussian noise is used as the exploration policy.
It is typically hard or unstable to optimize for both Q-functions that appear in the Bellman error of Equation (4), so this error is usually minimized by iteratively optimizing the current Q-function while keeping the target Q-function constant. However, we notice that for a given state, the action that minimizes its Q-values is the same for any non-negative scaling $\alpha$ of $\theta$ and for any bias $b$. Thus, to speed up the optimization of the Q-function, we first set $\alpha^{(k-\frac{1}{2})}$ and $b^{(k-\frac{1}{2})}$ by jointly solving for $\alpha$ and $b$ of both the current and target Q-function:
$$\min_{\alpha \ge 0,\, b} \; \frac{1}{N} \sum_{i=1}^{N} \left\| Q_{\alpha\theta^{(k-1)}, b}\!\left(s_t^{(i)}, u_t^{(i)}\right) - \left( c_t^{(i)} + \gamma \min_u Q_{\alpha\theta^{(k-1)}, b}\!\left(s_{t+1}^{(i)}, u\right) \right) \right\|^2_2 + \nu \|\theta\|^2_2. \qquad (5)$$
This is similar to how, in policy evaluation, state values can be computed by solving a linear system. We regularize the parameters with an $\ell_2$ penalty, weighted by $\nu \ge 0$. We use the term FQI iteration to refer to each iteration $k$ of optimizing the Bellman error, and we use the notation $(k-\frac{1}{2})$ to denote an intermediate step between iterations $(k-1)$ and $(k)$. The parameters $\theta$ can then be updated with $\theta^{(k-\frac{1}{2})} = \alpha^{(k-\frac{1}{2})}\theta^{(k-1)}$. Then, we update $\theta^{(k)}$ and $b^{(k)}$ by optimizing for $\theta$ and $b$ of the current Q-function while keeping the parameters of the target Q-function fixed:
$$\min_{\theta \ge 0,\, b} \; \frac{1}{N} \sum_{i=1}^{N} \left\| Q_{\theta, b}\!\left(s_t^{(i)}, u_t^{(i)}\right) - \left( c_t^{(i)} + \gamma \min_u Q_{\theta^{(k-\frac{1}{2})}, b^{(k-\frac{1}{2})}}\!\left(s_{t+1}^{(i)}, u\right) \right) \right\|^2_2 + \nu \|\theta\|^2_2. \qquad (6)$$
A summary of the algorithm used to learn the feature weights is shown in Algorithm 1.
Algorithm 1 FQI with initialization of policy-independent parameters
1: procedure FQI($\theta^{(0)}$, $\sigma^2_{\text{exploration}}$, $\nu$)
2:     for s = 1, . . . , S do    ▷ sampling iterations
3:         Gather dataset $\{s_t^{(i)}, u_t^{(i)}, c_t^{(i)}, s_{t+1}^{(i)}\}_{i=1}^{N}$ using exploration policy $\mathcal{N}(\pi_{\theta^{(0)}}, \sigma^2_{\text{exploration}})$
4:         for k = 1, . . . , K do    ▷ FQI iterations
5:             Fit $\alpha^{(k-\frac{1}{2})}$ and $b^{(k-\frac{1}{2})}$ using (5)
6:             $\theta^{(k-\frac{1}{2})} \leftarrow \alpha^{(k-\frac{1}{2})}\theta^{(k-1)}$
7:             Fit $\theta^{(k)}$ and $b^{(k)}$ using (6)
8:         $\theta^{(0)} \leftarrow \theta^{(K)}$
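A structural sketch of Algorithm 1 is given below. The data-gathering routine and the solvers for Equations (5) and (6) are passed in as hypothetical callables; only the control flow of the algorithm is reproduced here.

```python
# Structural sketch of Algorithm 1: FQI with initialization of the
# policy-independent scale alpha and bias b at each inner iteration.
def fqi(theta0, gather_data, fit_alpha_b, fit_theta_b, S=2, K=10):
    theta, b = theta0, 0.0
    for s in range(S):                                   # sampling iterations
        data = gather_data(theta)                        # rollouts with Gaussian exploration noise
        for k in range(K):                               # FQI iterations
            alpha, b = fit_alpha_b(theta, data)          # jointly rescale theta and fit b, Eq. (5)
            theta_half = alpha * theta
            theta, b = fit_theta_b(theta_half, b, data)  # fit current Q against fixed target, Eq. (6)
    return theta, b
```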
6 EXPERIMENTS
We evaluate the performance of the model for visual servoing in a simulated environment. The simulated quadcopter is governed by rigid body dynamics. The robot has 4 degrees of freedom, corresponding to translation along three axes and yaw angle. This simulation is inspired by tasks in which an autonomous quadcopter flies above a city, with the goal of following some target object (e.g., a car).
6.1 LEARNING FEATURE DYNAMICS AND WEIGHTS WITH FQI
The dynamics for each of the features were trained using a dataset of 10000 samples (corresponding to 100 trajectories) with ADAM (Kingma & Ba, 2014). A single dynamics model was learned for each feature representation for all the training cars (Figure 3). This training set was generated by executing a hand-coded policy that navigates the quadcopter around a car for 100 time steps per trajectory, while the car moves around the city.
We used the proposed FQI algorithm to learn the weightings of the features and control regularizer. At every sampling iteration, the current policy was executed with Gaussian noise to gather data from 10 trajectories. All the trajectories in our experiments were up to 100 time steps long. The immediate cost received by the agent encodes the error of the target in image coordinates (details in Appendix B). Then, the parameters were iteratively updated by running K = 10 iterations of FQI. We ran the overall algorithm for only S = 2 sampling iterations and chose the parameters that achieved the best performance on 10 validation trajectories. These validation trajectories were obtained by randomly choosing 10 cars from the set of training cars and randomly sampling initial states, and executing the policy with the parameters of the current iteration. All the experiments share the same set of validation trajectories.
(Table 1. Columns: Feature Dynamics, Observations from Test Executions, Cost.)
6.2 COMPARISON OF FEATURE REPRESENTATIONS FOR SERVOING
We compare the servoing performance for various feature dynamics models, where the weights are optimized with FQI. We execute the learned policies on 100 test trajectories and report the average cost of the trajectory rollouts in Figure 5. The cost of a single trajectory is the (undiscounted) sum of costs ct. We test the policies with cars that were seen during training as well as with a set of novel cars (Figure 4), to evaluate the generalization of the learned dynamics and optimized policies.
The test trajectories were obtained by randomly sampling 100 cars (with replacement) from one of the two sets of cars, and randomly sampling initial states (which are different from the ones used for validation). For consistency and reproducibility, the same sampled cars and initial states were used across all the test experiments, and the same initial states were used for both sets of cars. These test trajectories were never used during the development of the algorithm or for choosing hyperparameters.
From these results, we notice that policies based on deeper VGG features, up to VGG conv4_3, generally achieve better performance. However, the deepest feature representation, VGG conv5_3, is not as suitable for approximating Q-values. We hypothesize that this feature might be too spatially invariant and it might lack the necessary spatial information to differentiate among different car positions. The policies based on pixel intensities and VGG conv5_3 features perform worse on the novel cars. However, VGG features conv1_2 through conv4_3 achieve some degree of generalization on the novel cars.
We show sample trajectories in Table 1. The policy based on pixel-intensities is susceptible to occlusions and distractor objects that appear in the target image or during executions. This is because distinguishing these occlusions and distractors from the cars cannot be done using just RGB features.
6.3 COMPARISON OF WEIGHTINGS FROM OTHER OPTIMIZATION METHODS
We compare our policy using conv4_3 feature dynamics, with weights optimized by FQI, against policies that use these dynamics but with either no feature weighting or weights optimized by other algorithms.
For the case of no weighting, we use a single feature weight w but optimize the relative weighting of the controls λ with the cross entropy method (CEM) (De Boer et al., 2005). For the other cases, we learn the weights with Trust Region Policy Optimization (TRPO) (Schulman et al., 2015). Since the servoing policy is the minimizer of a quadratic objective (Equation (3)), we represent the policy as a neural network that has a matrix inverse operation at the output. We train this network for 2 and 50 sampling iterations, and use a batch size of 4000 samples per iteration. All of these methods use the same feature representation as ours, the only difference being how the weights w and λ are chosen.
We report the average costs of these methods on the right of Figure 6. In 2 sampling iterations, the policy learned with TRPO does not improve by much, whereas our policy learned with FQI significantly outperforms the other policies. The policy learned with TRPO improves further in 50 iterations; however, the cost incurred by this policy is still about one and a half times the cost of our policy, despite using more than 100 times as many trajectories.
6.4 COMPARISON TO PRIOR METHODS
We also consider other methods that do not use the dynamics-based servoing policy that we propose. We report their average performance on the left of Figure 6.
For one of the prior methods, we train a convolutional neural network (CNN) policy end-to-end with TRPO. The policy is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU activations except for the output layer; the convolutional layers use
16 filters (4 × 4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each. The policy takes in raw pixel-intensities and outputs controls.
This policy achieves a modest performance (although still worse than the policies based on conv4_3 feature dynamics) but it requires significantly more training samples than any of the other learning-based methods. We also trained CNN policies that take in extracted VGG features (without any dynamics) as inputs, but they perform worse (see Table 4 in the Appendix). This suggests that given a policy parametrization that is expressive enough and given a large number of training samples, it is better to directly provide the raw pixel-intensity images to the policy instead of extracted VGG features. This is because VGG features are not optimized for this task and their representation loses some information that is useful for servoing.
The other two prior methods use classical image-based visual servoing (IBVS) (Chaumette & Hutchinson, 2006) with respect to Oriented FAST and Rotated BRIEF (ORB) feature points (Rublee et al., 2011), or feature points extracted from a visual tracker. For the former, the target features consist of only the ORB feature points that belong to the car, and this specifies that the car is relevant for the task. For the tracker-based method, we use the Continuous Convolution Operator Tracker (C-COT) (Danelljan et al., 2016) (the current state-of-the-art visual tracker) to get bounding boxes around the car and use the four corners of the box as the feature points for servoing. We provide the ground truth car’s bounding box of the first frame as an input to the C-COT tracker. For all of the IBVS methods, we provide the ground truth depth values of the feature points, which are used in the algorithm’s interaction matrix (see footnote 5).
The first method performs poorly, in part because ORB features are not discriminative enough for some of the cars, and the target feature points are sometimes matched to feature points that are not on the car. The tracker-based method achieves a relatively good performance. The gap in performance with respect to our method is in part due to the lack of car dynamics information in the IBVS model, whereas our method implicitly incorporates that in the learned feature dynamics. It is also worth noting that the tracker-based policy runs significantly slower than our method. The open-source implementation of the C-COT tracker (see footnote 6) runs at about 1 Hz, whereas our policy based on conv4_3 features runs at about 16 Hz. Most of the computation time of our method is spent computing features from the VGG network, so there is room for speedups if we use a network that is less computationally demanding.
7 DISCUSSION
Manual design of visual features and dynamics models can limit the applicability of visual servoing approaches. We described an approach that combines learned visual features with learning predictive dynamics models and reinforcement learning to learn visual servoing mechanisms. Our experiments demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. For control we propose to learn Q-values, building on fitted Q-iteration, which at execution time allows for one-step lookahead calculations that optimize long term objectives. Our method can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.
ACKNOWLEDGEMENTS
This research was funded in part by the Army Research Office through the MAST program and the Berkeley DeepDrive consortium. Alex Lee was also supported by the NSF GRFP.
5. The term interaction matrix, or feature Jacobian, is used in the visual servo literature to denote the Jacobian of the features with respect to the control.
6. https://github.com/martin-danelljan/Continuous-ConvOp
A LINEARIZATION OF THE BILINEAR DYNAMICS
The optimization of Equation (3) can be solved efficiently by using a linearization of the dynamics,
$$f^{(l)}_c\left(y^{(l)}_{t,c}, u\right) = f^{(l)}_c\left(y^{(l)}_{t,c}, \bar{u}\right) + J^{(l)}_{t,c}\,(u - \bar{u}) = f^{(l)}_c\left(y^{(l)}_{t,c}, 0\right) + J^{(l)}_{t,c}\,u, \qquad (7)$$
where $J^{(l)}_{t,c}$ is the Jacobian matrix with partial derivatives $\frac{\partial f^{(l)}_c}{\partial u}(y^{(l)}_{t,c}, \bar{u})$ and $\bar{u}$ is the linearization point. Since the bilinear dynamics are linear with respect to the controls, this linearization is exact and the Jacobian matrix does not depend on $\bar{u}$. Without loss of generality, we set $\bar{u} = 0$.
Furthermore, the bilinear dynamics allows the Jacobian matrix to be computed efficiently by simply doing a forward pass through the model. For the locally bilinear dynamics of Equation (2), the j-th column of the Jacobian matrix is given by
$$J^{(l)}_{t,c,j} = \frac{\partial f^{(l)}_c}{\partial u_j}\left(y^{(l)}_{t,c}, 0\right) = W^{(l)}_{c,j} * y^{(l)}_{t,c} + B^{(l)}_{c,j}. \qquad (8)$$
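Because the objective of Equation (3) becomes a quadratic in the controls once the dynamics are linearized, its minimizer can be obtained by solving a small linear system. The sketch below is our reading of this appendix rather than a verbatim implementation; the flattened Jacobians, residuals, and weights are assumed inputs.

```python
# Sketch of the closed-form servoing control: minimize
#   sum_k w_k ||r_k - J_k u||^2 + sum_j lambda_j u_j^2
# over u, where r_k = y*_k - f_k(y_t, 0) and J_k is the term's Jacobian.
import numpy as np

def solve_servoing_control(jacobians, residuals, weights, lam):
    """jacobians: list of (d_feat, d_ctrl) matrices; residuals: list of (d_feat,) vectors."""
    d = jacobians[0].shape[1]
    A = np.diag(lam).astype(float)
    rhs = np.zeros(d)
    for J, r, w in zip(jacobians, residuals, weights):
        A += w * (J.T @ J)
        rhs += w * (J.T @ r)
    return np.linalg.solve(A, rhs)

# Example: two feature terms of size 64 and a 4-dimensional control.
J1, J2 = np.random.randn(64, 4), np.random.randn(64, 4)
r1, r2 = np.random.randn(64), np.random.randn(64)
u_star = solve_servoing_control([J1, J2], [r1, r2], weights=[1.0, 0.5], lam=0.1 * np.ones(4))
```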
B SERVOING COST FUNCTION FOR REINFORCEMENT LEARNING
The goal of reinforcement learning is to find a policy that maximizes the expected sum of rewards, or equivalently, a policy that minimizes the expected sum of costs. The cost should be one that quantifies progress towards the goal. We define the cost function in terms of the position of the target object (in the camera’s local frame) after the action has been taken,
$$c(s_t, u_t, s_{t+1}) = \begin{cases} \sqrt{\left(\frac{p^x_{t+1}}{p^z_{t+1}}\right)^2 + \left(\frac{p^y_{t+1}}{p^z_{t+1}}\right)^2 + \left(\frac{1}{p^z_{t+1}} - \frac{1}{p^z_*}\right)^2}, & \text{if } \|p_{t+1}\|_2 \ge \tau \text{ and the car is in the FOV} \\ (T - t + 1)\; c(\cdot, \cdot, s_t), & \text{otherwise,} \end{cases} \qquad (9)$$
where $T$ is the maximum trajectory length. The episode terminates early if the camera is too close to the car (less than a distance $\tau$) or the car’s origin is outside the camera’s field of view (FOV). The car’s position at time $t$ is $p_t = (p^x_t, p^y_t, p^z_t)$ and the car’s target position is $p_* = (0, 0, p^z_*)$, both in the camera’s local frame (the z-direction is forward). Our experiments use $T = 100$ and $\tau = 4$ m.
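A sketch of this cost is given below; the function signature and the handling of the early-termination branch via the previous transition’s cost are our own reading of Equation (9).

```python
# Sketch of the servoing cost of Equation (9), written in terms of the car's
# position in the camera frame after the action has been taken.
import numpy as np

def servoing_cost(p_next, p_target_z, prev_cost, t, T=100, tau=4.0, car_in_fov=True):
    """p_next: (px, py, pz) car position after the transition, camera frame (z forward)."""
    px, py, pz = p_next
    if np.linalg.norm(p_next) >= tau and car_in_fov:
        return np.sqrt((px / pz) ** 2 + (py / pz) ** 2 + (1.0 / pz - 1.0 / p_target_z) ** 2)
    return (T - t + 1) * prev_cost  # early termination: scale the previous cost
```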
C EXPERIMENT DETAILS
C.1 TASK SETUP
The camera is attached to the vehicle slightly in front of the robot’s origin and facing down at an angle of π/6 rad, similar to a commercial quadcopter drone. The robot has 4 degrees of freedom, corresponding to translation and yaw angle. Pitch and roll are held fixed.
In our simulations, the quadcopter follows a car that drives at 1 m/s along city roads during training and testing. The quadcopter’s speed is limited to within 10 m/s for each translational degree of freedom, and its angular speed is limited to within π/2 rad/s. The simulator runs at 10 Hz. For each trajectory, a car is chosen randomly from a set of cars, and placed randomly on one of the roads. The quadcopter is initialized right behind the car, in the desired relative position for following. The image observed at the beginning of the trajectory is used as the goal observation.
C.2 LEARNING FEATURE DYNAMICS
The dynamics of all the features were trained using a dataset of 10000 triplets $x_t, u_t, x_{t+1}$. The observations are 128 × 128 RGB images and the actions are 4-dimensional vectors of real numbers encoding the linear and angular (yaw) velocities. The actions are normalized to between −1 and 1. The training set was generated from 100 trajectories of a quadcopter following a car around the city with some randomness. Each trajectory was 100 steps long. Only 5 training cars were shown during learning. The generation process of each trajectory is as follows: First, a car is chosen at random from the set of available cars and it is randomly placed on one of the roads. Then, the quadcopter is placed at some random position relative to the car’s horizontal pose, which is the car’s pose rotated so that its vertical axis aligns with that of the world. This quadcopter position is uniformly sampled in cylindrical coordinates relative to the car’s horizontal pose, with heights in the interval 12 m to 18 m, and azimuthal angles in the interval −π/2 rad to π/2 rad (where the origin of the azimuthal angle is the back of the car). The radii and yaw angles are initialized so that the car is in the middle of the image. At every time step, the robot takes an action that moves it towards a target pose, with some additive Gaussian noise (σ = 0.2). The target pose is sampled according to the same procedure as the initial pose, and it is sampled once at the beginning of each trajectory.
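The pose-sampling step of this generation procedure can be sketched as follows. The formula that derives the radius from the sampled height and the fixed camera pitch of π/6 rad (Section C.1) is our geometric interpretation of placing the car in the middle of the image, not a quoted implementation.

```python
# Sketch of sampling a quadcopter pose relative to the car's horizontal pose,
# in cylindrical coordinates. Height and azimuth ranges follow the text; the
# radius and yaw are chosen so the car is (approximately) centered in the image.
import numpy as np

def sample_relative_pose(rng, camera_pitch=np.pi / 6):
    height = rng.uniform(12.0, 18.0)                 # meters above the car
    azimuth = rng.uniform(-np.pi / 2, np.pi / 2)     # 0 rad is directly behind the car
    radius = height / np.tan(camera_pitch)           # assumed: centers the car for a pi/6 camera pitch
    x = radius * np.sin(azimuth)                     # position relative to the car
    y = -radius * np.cos(azimuth)
    yaw = np.arctan2(-x, -y)                         # heading back toward the car (illustrative convention)
    return np.array([x, y, height]), yaw

rng = np.random.default_rng(0)
position, yaw = sample_relative_pose(rng)
```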
We try the fully and locally connected dynamics for pixel intensities to better understand the performance trade-offs when assuming locally connected dynamics. We do not use the fully connected variant for the semantic features, since they are too high-dimensional for that dynamics model to fit in memory. The dynamics models were trained with ADAM using 10000 iterations, a batch size of 32, a learning rate of 0.001, momentums of 0.9 and 0.999, and a weight decay of 0.0005.
C.3 LEARNING WEIGHTING OF FEATURE DYNAMICS WITH REINFORCEMENT LEARNING
We use CEM, TRPO and FQI to learn the feature weighting and report the performance of the learned policies in Table 2. We use the cost function described in Appendix B, a discount factor of γ = 0.9, and trajectories of up to 100 steps. All the algorithms used initial weights of w = 1 and λ = 1, and a Gaussian exploration policy with the current policy as the mean and a fixed standard deviation σexploration = 0.2.
For the case of unweighted features, we use CEM to optimize for a single weight w and for the weights λ. For the case of weighted features, we use CEM to optimize over the full space of parameters, but we only do that for the pixel feature dynamics since CEM does not scale to high-dimensional problems, which is the case for all the VGG features. Each iteration of CEM performs a certain number of noisy evaluations and selects the top 20% for the elite set. The number of noisy evaluations per iteration was 3 times the number of parameters being optimized. Each noisy evaluation used the average sum of costs of 10 trajectory rollouts as its evaluation metric. The parameters of the last iteration were used for the final policy. The policies with unweighted feature dynamics and the policies with pixel feature dynamics were trained for 10 and 25 iterations, respectively.
(Table 3. Columns: Feature Dynamics, Observations from Test Executions, Cost.)
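For reference, a generic sketch of this CEM procedure is given below. The `evaluate` callback (e.g. the average cost over 10 rollouts) and the non-negativity handling are assumptions; the 20% elite fraction and the sample count of 3 times the number of parameters follow the description above.

```python
# Sketch of the cross entropy method used to optimize the servoing weights:
# sample parameters from a Gaussian, evaluate each by rolling out the policy,
# keep the lowest-cost 20% as the elite set, and refit the Gaussian to it.
import numpy as np

def cem(evaluate, dim, iterations=10, elite_frac=0.2, init_std=1.0, seed=0):
    rng = np.random.default_rng(seed)
    mean, std = np.ones(dim), init_std * np.ones(dim)   # initial weights of 1, as in the text
    num_samples = 3 * dim                               # 3x the number of parameters per iteration
    for _ in range(iterations):
        samples = np.abs(rng.normal(mean, std, size=(num_samples, dim)))  # keep weights non-negative (assumption)
        costs = np.array([evaluate(s) for s in samples])
        elite = samples[np.argsort(costs)[:max(1, int(elite_frac * num_samples))]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean
```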
We use TRPO to optimize over the full space of parameters for each of the feature dynamics we consider in this work. We use a Gaussian policy, where the mean is the servoing policy of Equation (3) and the standard deviation is fixed to $\sigma_{\text{exploration}} = 0.2$ (i.e. we do not learn the standard deviation). Since the parameters are constrained to be non-negative, we parametrize the TRPO policies with $\sqrt{w}$ and $\sqrt{\lambda}$. We use a Gaussian baseline, where the mean is a 5-layer CNN, consisting of 2 convolutional and 3 fully connected layers, and a standard deviation that is initialized to 1. The convolutional layers use 16 filters (4 × 4, stride 2) each, the first 2 fully-connected layers use 32 hidden units each, and all the layers except for the last one use ReLU activations. The inputs to the baseline network are the features (either pixel intensities or VGG features) corresponding to the feature dynamics being used. The parameters of the last iteration were used for the final policy. The policies are trained with TRPO for 50 iterations, a batch size of 4000 samples per iteration, and a step size of 0.01.
We use our proposed FQI algorithm to optimize for the weights w, λ, and surpass the other methods in terms of performance on test executions, sample efficiency, and overall computation efficiency (see footnote 7). The updates of the inner iteration of our algorithm are computationally efficient; since the data is fixed for a given sampling iteration, we can precompute $\phi(s_t, u_t)$ and certain terms of $\phi(s_{t+1}, \cdot)$. The parameters that achieved the best performance on 10 validation trajectories were used for the final policy. The policies are trained with FQI for S = 2 sampling iterations, a batch size of 10 trajectories per sampling iteration, K = 10 inner iterations per sampling iteration, and a regularization coefficient of ν = 0.1. We found that regularization of the parameters was important for the algorithm to converge. We show sample trajectories of the resulting policies in Table 3.
The FQI algorithm often achieved most of its performance gain after the first iteration. We ran additional sampling iterations of FQI to see if the policies improved further. For each iteration, we evaluated the performance of the policies on 10 validation trajectories. We did the same for the policies trained with TRPO, and we compare the learning curves of both methods in Figure 7.
7. Our policy based on conv4_3 features takes around 650 s to run K = 10 iterations of FQI for a given batch size of 10 training trajectories.
C.4 LEARNING END-TO-END SERVOING POLICIES WITH TRPO
We use TRPO to train end-to-end servoing policies for various observation modalities and report the performance of the learned policies in Table 4. The policies are trained with the set of training cars, and tested on both this set and on the set of novel cars. The observation modalities that we consider are ground truth car positions (relative to the quadcopter), images of pixel intensities from the quadcopter’s camera, and VGG features extracted from those images. Unlike our method and the other experiments, no feature dynamics are explicitly learned for these experiments.
We use a Gaussian policy, where the mean is either a multi-layer perceptron (MLP) or a convolutional neural net (CNN), and the standard deviation is initialized to 1. We also use a Gaussian baseline, which is parametrized just as the corresponding Gaussian policy (but no parameters are shared between the policy and the baseline). For the policy that takes in car positions, the mean is parametrized as a 3-layer MLP, with tanh non-linearities except for the output layer; the first 2 fully connected layers use 32 hidden units each. For the other policies, each of their means is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU non-linearities except for the output layer; the convolutional layers use 16 filters (4×4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each.
The CNN policies would often not converge for several randomly initialized parameters. Thus, at the beginning of training, we tried multiple random seeds until we got a policy that achieved a relatively low cost on validation trajectories, and used the best initialization for training. The MLP policy did not have this problem, so we did not have to try multiple random initializations for it. All the policies are trained with a batch size of 4000 samples, 500 iterations, and a step size of 0.01. The parameters of the last iteration were used for the final policy.
C.5 CLASSICAL IMAGE-BASED VISUAL SERVOING
Traditional visual servoing techniques (Feddema & Mitchell, 1989; Weiss et al., 1987) use the image-plane coordinates of a set of points for control. For comparison with our method, we evaluate the servoing performance of feature points derived from bounding boxes and keypoints derived from hand-engineered features, and report the costs of test executions in Table 5.
We use bounding boxes from the C-COT tracker (Danelljan et al., 2016) (the current state-of-the-art visual tracker) and ground truth bounding boxes from the simulator. The latter is defined as the box that tightly fits around the visible portions of the car. We provide the ground truth bounding box of the first frame to the C-COT tracker to indicate that we want to track the car. We use the four corners of the box as the feature points for servoing to take into account the position and scale of the car in image coordinates.
We provide the ground truth depth values of the feature points for the interaction matrices. In classical image-based visual servoing, the control law involves the interaction matrix (also known as feature Jacobian), which is the Jacobian of the points in image space with respect to the camera’s control (see Chaumette & Hutchinson (2006) for details). The analytical feature Jacobian used in IBVS assumes that the target points are static in the world frame. This is not true for a moving car, so we consider a variant where the feature Jacobian incorporates the ground truth dynamics of the car. This amounts to adding a non-constant translation bias to the output of the dynamics function, where the translation is the displacement due to the car’s movement of the 3-dimensional point in the camera’s reference frame. Note that this is still not exactly equivalent to having the car being static since the roads have different slopes but the pitch and roll of the quadcopter is constrained to be fixed.
For the hand-crafted features, we consider SIFT (Lowe, 2004), SURF (Bay et al., 2006) and ORB (Rublee et al., 2011) keypoints. We discard the keypoints of the first frame that do not belong to the car and use the remaining ones as the target keypoints. However, we use all the keypoints for the subsequent observations.
The servoing policies based on bounding box features achieve low cost, and even lower ones if ground truth car dynamics is used. However, servoing with respect to hand-crafted feature points is significantly worse than the other methods. This is, in part, because the feature extraction and matching process introduces compounding errors. Similar results were found by Collewet & Marchand (2011), who proposed photometric visual servoing (i.e. servoing with respect to pixel intensities) and showed that it outperforms, by an order of magnitude, classical visual servoing that uses SURF features.
C.6 CLASSICAL POSITION-BASED VISUAL SERVOING
Position-based visual servoing (PBVS) techniques use poses of a target object for control (see Chaumette & Hutchinson (2006) for details). We evaluate the servoing performance of a few variants, and report the costs of test executions in Table 6.
Similar to our IBVS experiments, we consider a variant that uses the car pose of the next time step as a way to incorporate the ground truth car dynamics into the interaction matrix. Since the cost function is invariant to the orientation of the car, we also consider a variant where the policy only minimizes the translational part of the pose error.
These servoing policies, which use ground truth car poses, outperform all the other policies based on images. In addition, the performance is more than two orders of magnitude better if ground truth car dynamics is used.

1. What are the main contributions of the paper regarding visual servoing?
2. What are the strengths and weaknesses of the proposed approach in connecting action and frame representation?
3. How effective is the method for optimizing the Bellman error, and how does it compare to other approaches?
4. Are there any concerns or suggestions regarding the experimental results and validation?
5. How significant is the modification of the VGG in the proposed approach, and what are the implications?

Review
The paper proposes a novel approach for learning visual servoing based on Q-iteration. The main contributions of the paper are:
1. Bilinear dynamics model for predicting next frame (features) based on action and current frame
2. Formulation of servoing with a Q-function that learns weights for different feature channels
3. An elegant method for optimizing the Bellman error to learn the Q-function
Pros:
+ The paper does a good job of exploring different ways to connect the action (u_t) and frame representation (y_t) to predict next frame features (y_{t+1}). They argue in favour of a locally connected bilinear model which strikes the balance between computation and expressive ability.
Cons:
- While Sec. 4 makes good arguments for the different choices, I would have liked to see more experimental results comparing the 3 approaches: fully connected, convolutional and locally connected dynamics.
Pros:
+ The idea of weighting different channels to capture the importance of objects in different channels seems more effective than treating errors across all channels equally. This is also validated experimentally, where unweighted performance suffers consistently.
+ Solving the Bellman error is a difficult problem in Q-learning approaches. The current paper presents a solid optimization scheme based on the key observation that scaling the Q-function parameters does not affect the best policy chosen. This enables a more elegant FQI approach, as opposed to typical optimization schemes which hold the target term (c_t + \gamma min_u Q_{t+1}) fixed.
Cons:
- However, I would have liked to see the difference between FQI and such an iterative approach which holds the second term in Eq. 5 fixed.
Experimental results:
- Overall, I find the experimental results unsatisfying given the small scale and toy simulations. However, the lack of benchmarks in this domain needs to be recognized.
- Also, as pointed out in the pre-review section, the idea of modifying the VGG needs to be experimentally validated. In its current form, it is not clear whether the modified VGG would perform better than the original version.
Overall, the contribution of the paper is solid in terms of technical novelty and problem formulations. However, the paper could use stronger experiments, as suggested earlier, to bolster its claims.
Learning Visual Servoing with Deep Features and Fitted Q-Iteration
Abstract
Visual servoing involves choosing actions that move a robot in response to observations from a camera, in order to reach a goal configuration in the world. Standard visual servoing approaches typically rely on manually designed features and analytical dynamics models, which limits their generalization capability and often requires extensive application-specific feature and model engineering. In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of data of the target in question, to enable quick adaptation to new targets. Our approach is based on servoing the camera in the space of learned visual features, rather than image pixels or manually-designed keypoints. We demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. A key component of our approach is to use a sample-efficient fitted Q-iteration algorithm to learn which features are best suited for the task at hand. We show that we can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms. Videos are available at http://rll.berkeley.edu/visual_servoing.
1 INTRODUCTION
Visual servoing is a classic problem in robotics that requires moving a camera or robot to match a target configuration of visual features or image intensities. Many robot control tasks that combine perception and action can be posed as visual servoing, including navigation (DeSouza & Kak, 2002; Chen et al., 2006), where a robot must follow a desired path; manipulation, where the robot must servo an end-effector or a camera to a target object to grasp or manipulate it (Malis et al., 1999; Corke, 1993; Hashimoto, 1993; Hosoda & Asada, 1994; Kragic & Christensen, 2002); and various other problems, as surveyed in Hutchinson et al. (1996). Most visual servoing methods assume access to good geometric image features (Chaumette & Hutchinson, 2006; Collewet et al., 2008; Caron et al., 2013) and require knowledge of their dynamics, which are typically obtained from domain knowledge about the system. Using such hand-designed features and models prevents exploitation of statistical regularities in the world, and requires manual engineering for each new system.
In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of
data of the target in question, so as to be easy and quick to adapt to new targets. Successful target following requires the visual servo to tolerate moderate variation in the appearance of the target, including changes in viewpoint and lighting, as well as occlusions. Learning invariances to all such distractors typically requires a considerable amount of data. However, since a visual servo is typically specific to a particular task, it is desirable to be able to learn the servoing mechanism very quickly, using a minimum amount of data. Prior work has shown that the features learned by large convolutional neural networks on large image datasets, such as ImageNet classification (Deng et al., 2009), tend to be useful for a wide range of other visual tasks (Donahue et al., 2014). We explore whether the usefulness of such features extends to visual servoing.
To answer this question, we propose a visual servoing method that uses pre-trained features, in our case obtained from the VGG network (Simonyan & Zisserman, 2014) trained for ImageNet classification. Besides the visual features, our method uses an estimate of the feature dynamics in visual space by means of a bilinear model. This allows the visual servo to predict how motion of the robot’s camera will affect the perceived feature values. Unfortunately, servoing directly on the high-dimensional features of a pre-trained network is insufficient by itself to impart robustness on the servo: the visual servo must not only be robust to moderate visual variation, but it must also be able to pick out the target of interest (such as a car that the robot is tasked with following) from irrelevant distractor objects. To that end, we propose a sample-efficient fitted Q-iteration procedure that automatically chooses weights for the most relevant visual features. Crucially, the actual servoing mechanism in our approach is extremely simple, and simply seeks to minimize the Euclidean distance between the weighted feature values at the next time step and the target. The form of the servoing policy in our approach leads to an analytic and tractable linear approximator for the Qfunction, which leads to a computationally efficient fitted Q-iteration algorithm. We show that we can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.
The environment for the synthetic car following benchmark is available online as the package CitySim3D1, and the code to reproduce our method and experiments is also available online2. Supplementary videos of all the test executions are available on the project’s website3.
2 RELATED WORK
Visual servoing is typically (but not always) performed with calibrated cameras and carefully designed visual features. Ideal features for servoing should be stable and discriminative, and much of the work on visual servoing focuses on designing stable and convergent controllers under the assumption that such features are available (Espiau et al., 2002; Mohta et al., 2014; Wilson et al., 1996). Some visual servoing methods do not require camera calibration (Jagersand et al., 1997; Yoshimi & Allen, 1994), and some recent methods operate directly on image intensities (Caron et al., 2013), but generally do not use learning to exploit statistical regularities in the world and improve robustness to distractors.
Learning is a relatively recent addition to the repertoire of visual servoing tools. Several methods have been proposed that apply ideas from reinforcement learning to directly acquire visual servoing controllers (Lampe & Riedmiller, 2013; Sadeghzadeh et al., 2015). However, such methods have not been demonstrated under extensive visual variation, and do not make use of state-of-the-art convolutional neural network visual features. Though more standard deep reinforcement learning methods (Lange et al., 2012; Mnih et al., 2013; Levine et al., 2016; Lillicrap et al., 2015) could in principle be applied to directly learn visual servoing policies, such methods tend to require large numbers of samples to learn task-specific behaviors, making them poorly suited for a flexible visual servoing algorithm that can be quickly repurposed to new tasks (e.g. to following a different object).
1https://github.com/alexlee-gk/citysim3d 2https://github.com/alexlee-gk/visual_dynamics 3http://rll.berkeley.edu/visual_servoing
Instead, we propose an approach that combines learning of predictive models with pre-trained visual features. We use visual features trained for ImageNet (Deng et al., 2009) classification, though any pre-trained features could in principle be applicable for our method, so long as they provide a suitable degree of invariance to visual distractors such as lighting, occlusion, and changes in viewpoint. Using pre-trained features allows us to avoid the need for large amounts of experience, but we must still learn the policy itself. To further accelerate this process, we first acquire a predictive model that allows the visual servo to determine how the visual features will change in response to an action. General video prediction is an active research area, with a number of complex but data-hungry models proposed in recent years (Oh et al., 2015; Watter et al., 2015; Mathieu et al., 2015; Xue et al., 2016; Lotter et al., 2016; Jia et al., 2016; Walker et al., 2016; Vondrick et al., 2016).
However, we observe that convolutional response maps can be interpreted as images and, under mild assumptions, the dynamics of image pixels during camera motion can be well approximated by means of a bilinear model (Censi & Murray, 2015). We therefore train a relatively simple bilinear model for short-term prediction of visual feature dynamics, which we can use inside a very simple visual servo that seeks to minimize the error between the next predicted feature values and a target image.
Unfortunately, simply training predictive models on top of pre-trained features is insufficient to produce an effective visual servo, since it weights the errors of distractor objects the same amount as the object of interest. We address this challenge by using an efficient Q-iteration algorithm to train the weights on the features to maximize the servo’s long-horizon reward. This method draws on ideas from regularized fitted Q-iteration (Gordon, 1995; Ernst et al., 2005; Farahmand et al., 2009) and neural fitted Q-iteration (Riedmiller, 2005) to develop a sample-efficient algorithm that can directly estimate the expected return of the visual servo without the use of any additional function approximator.
3 PROBLEM STATEMENT
Let yt be a featurization of the camera’s observations xt and let y∗ be some given goal feature map. For the purposes of this work, we define visual servoing as the problem of choosing controls ut for a fixed number of discrete time steps t as to minimize the error ‖y∗ − yt‖. We use a relatively simple gradient-based servoing policy that uses one-step feature dynamics, f : {yt,ut} → yt+1. The policy chooses the control that minimizes the distance between the goal feature map and the one-step prediction:
π(xt,x∗) = argmin u ‖y∗ − f(yt,u)‖2 . (1)
Learning this policy amounts to learning the robot dynamics and the distance metric ‖·‖. To learn the robot dynamics, we assume that we have access to a dataset of paired observations and controls xt,ut,xt+1. This data is relatively easy to obtain as it involves collecting a stream of the robot’s observations and controls. We use this dataset to learn a general visual dynamics model that can be used for any task.
To learn the distance metric, we assume that the robot interacts with the world and collects tuples of the form xt,ut, ct,xt+1,x∗. At every time step during learning, the robot observes xt and takes action ut. After the transition, the robot observes xt+1 and receives an immediate cost ct. This cost is task-specific and it quantifies how good that transition was in order to achieve the goal. At the beginning of each trajectory, the robot is given a goal observation x∗, and it is the same throughout the trajectory. We define the goal feature map to be the featurization of the goal observation. We learn the distance metric using reinforcement learning and we model the environment as a Markov Decision Process (MDP). The state of the MDP is the tuple of the current observation and the episode’s target observation, st = (xt,x∗), the action ut is the discrete-time continuous control of the robot, and the cost function maps the states and action (st,ut, st+1) to a scalar cost ct.
4 VISUAL FEATURES DYNAMICS
We learn a multiscale bilinear model to predict the visual features of the next frame given the current image from the robot’s camera and the action of the robot. An overview of the model is shown in Figure 1. The learned dynamics can then be used for visual servoing as described in Section 5.
4.1 VISUAL FEATURES
We consider both pixels and semantic features for the visual representation. We define the function h to relate the image x and its feature y = h (x). Our choice of semantic features are derived from the VGG-16 network (Simonyan & Zisserman, 2014), which is a convolutional neural network trained for large-scale image recognition on the ImageNet dataset (Deng et al., 2009). Since spatial invariance is undesirable for servoing, we remove some of the max-pooling layers and replace the convolutions that followed them with dilated convolutions, as done by Yu & Koltun (2015). The modified VGG network is shown in Figure 2. We use the model weights of the original VGG-16 network, which are publicly available as a Caffe model (Jia et al., 2014). The features that we use are the outputs of some of the intermediate convolutional layers, that have been downsampled to a 32× 32 resolution (if necessary) and standarized with respect to our training set. We use multiple resolutions of these features for servoing. The idea is that the high-resolution representations have detailed local information about the scene, while the low-resolution representations have more global information available through the image-space gradients. The features at level l of the multiscale pyramid are denoted as y(l). The features at each level are obtained from the features below through a downsampling operator d(y(l−1)) = y(l) that cuts the resolution in half.
4.2 BILINEAR DYNAMICS
The features y(l)t are used to predict the corresponding level’s features y (l) t+1 at the next time step, conditioned on the action ut, according to a prediction function f (l)(y (l) t ,ut) = ŷ (l) t+1. We use a bilinear model to represent these dynamics, motivated by prior work (Censi & Murray, 2015). In order to servo at different scales, we learn a bilinear dynamics model at each scale. We consider two variants of the bilinear model in previous work in order to reduce the number of model parameters.
The first variant uses fully connected dynamics as in previous work but models the dynamics of each channel independently. When semantic features are used, this model interprets the feature maps as
being abstract images with spatial information within a channel and different entities or factors of variation across different channels. This could potentially allow the model to handle moving objects, occlusions, and other complex phenomena.
The fully connected bilinear model is quite large, so we propose a bilinear dynamics that enforces sparsity in the parameters. In particular, we constrain the prediction to depend only on the features that are in its local spatial neighborhood, leading to the following locally connected bilinear model:
$$\hat{y}^{(l)}_{t+1,c} = y^{(l)}_{t,c} + \sum_j \left( W^{(l)}_{c,j} * y^{(l)}_{t,c} + B^{(l)}_{c,j} \right) u_{t,j} + \left( W^{(l)}_{c,0} * y^{(l)}_{t,c} + B^{(l)}_{c,0} \right). \quad (2)$$
The parameters are the 4-dimensional tensor $W^{(l)}_{c,j}$ and the matrix $B^{(l)}_{c,j}$ for each channel $c$, scale $l$, and control coordinate $j$. The last two terms are biases that allow the model to capture action-independent visual changes, such as moving objects. The $*$ is the locally connected operator, which is like a convolution but with untied filter weights4.
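The following sketch (not the authors' implementation) illustrates the locally connected bilinear prediction of Equation (2) for a single channel; the tensor shapes and the use of torch.nn.functional.unfold to realize the untied-filter operator are assumptions made for this example.

import torch
import torch.nn.functional as F

def locally_connected(y: torch.Tensor, W: torch.Tensor, nf: int = 3) -> torch.Tensor:
    """Locally connected operator (W * y) with untied filter weights.

    y: single-channel feature map, shape (H, W_img).
    W: untied filters, assumed shape (H * W_img, nf * nf); one filter per output location.
    """
    h, w = y.shape
    patches = F.unfold(y.view(1, 1, h, w), kernel_size=nf, padding=nf // 2)  # (1, nf*nf, H*W)
    patches = patches.squeeze(0).t()          # (H*W, nf*nf): one local neighborhood per location
    out = (patches * W).sum(dim=1)            # per-location filtering with untied weights
    return out.view(h, w)

def bilinear_step(y_c, u, W_c, B_c, nf=3):
    """One-step locally connected bilinear prediction for a single channel c (Equation 2).

    W_c and B_c are assumed to hold one filter set / bias map per control coordinate
    j = 1..J at indices 1..J, plus the action-independent term at index 0.
    """
    pred = y_c.clone()
    for j in range(u.shape[0]):
        pred = pred + (locally_connected(y_c, W_c[j + 1], nf) + B_c[j + 1]) * u[j]
    pred = pred + locally_connected(y_c, W_c[0], nf) + B_c[0]
    return pred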
4.3 TRAINING VISUAL FEATURE DYNAMICS MODELS
The loss that we use for training the bilinear dynamics is the sum of the losses of the predicted features at each level, $\sum_{l=0}^{L} \ell^{(l)}$, where the loss for each level $l$ is the squared $\ell_2$ norm between the predicted features and the actual features of that level, $\ell^{(l)} = \left\| y^{(l)}_{t+1} - \hat{y}^{(l)}_{t+1} \right\|_2^2$.
We optimize for the dynamics while keeping the feature representation fixed. This is a supervised learning problem, which we solve with ADAM (Kingma & Ba, 2014). The training set, consisting of triplets $(x_t, u_t, x_{t+1})$, was obtained by executing a hand-coded policy that moves the robot around the target with some Gaussian noise.
5 LEARNING VISUAL SERVOING WITH REINFORCEMENT LEARNING
We propose to use a multiscale representation of semantic features for servoing. The challenge when introducing multiple scales and multi-channel feature maps for servoing is that the features do not necessarily agree on the optimal action when the goal is unattainable or the robot is far away from the goal. To do well, it is important to use a good weighting of each of the terms in the objective. Since there are many weights, it would be impractically time-consuming to set them by hand, so we resort to learning. We want the weighted one-step lookahead objective to encourage good long-term behavior, so we want this objective to correspond to the state-action value function Q. So we propose a method for learning the weights based on fitted Q-iteration.
5.1 SERVOING WITH WEIGHTED MULTISCALE FEATURES
Instead of attempting to build an accurate predictive model for multi-step planning, we use the simple greedy servoing method in Equation (1), where we minimize the error between the target and predicted features for all the scales. Typically, only a few objects in the scene are relevant, so the errors of some channels should be penalized more than others. Similarly, features at different scales might need to be weighted differently. Thus, we use a weighting $w^{(l)}_c \ge 0$ per channel $c$ and scale $l$:
$$\pi(x_t, x_*) = \arg\min_{u} \; \sum_c \sum_{l=0}^{L} \frac{w^{(l)}_c}{|y^{(l)}_{\cdot,c}|} \left\| y^{(l)}_{*,c} - f^{(l)}_c\!\left(y^{(l)}_{t,c}, u\right) \right\|_2^2 + \sum_j \lambda_j u_j^2, \quad (3)$$
where $|\cdot|$ denotes the cardinality operator and the constant $1/|y^{(l)}_{\cdot,c}|$ normalizes the feature errors by their spatial resolution. We also use a separate weight $\lambda_j$ for each control coordinate $j$. This optimization can be solved efficiently since the dynamics is linear in the controls (see Appendix A).
4 The locally connected operator, with a local neighborhood of $n_f \times n_f$ (analogous to the filter size in convolutions), is defined as:
$$(W * y)_{k_h, k_w} = \sum_{i_h = k_h - \lfloor n_f/2 \rfloor}^{k_h + \lfloor n_f/2 \rfloor} \; \sum_{i_w = k_w - \lfloor n_f/2 \rfloor}^{k_w + \lfloor n_f/2 \rfloor} W_{k_h, k_w, i_h - k_h, i_w - k_w} \, y_{i_h, i_w}.$$
5.2 Q-FUNCTION APPROXIMATION FOR THE WEIGHTED SERVOING POLICY
We choose a Q-value function approximator that can represent the servoing objective such that the greedy policy with respect to the Q-values results in the policy of Equation (3). In particular, we use a function approximator that is linear in the weight parameters $\theta^\top = \left[ w^\top \; \lambda^\top \right]$:
$$Q_{\theta,b}(s_t, u) = \phi(s_t, u)^\top \theta + b, \qquad \phi(s_t, u)^\top = \left[ \left[ \frac{1}{|y^{(l)}_{\cdot,c}|} \left\| y^{(l)}_{*,c} - f^{(l)}_c\!\left(y^{(l)}_{t,c}, u\right) \right\|_2^2 \right]^\top_{c,l} \;\; \left[ u_j^2 \right]^\top_j \right].$$
We denote the state of the MDP as $s_t = (x_t, x_*)$ and add a bias $b$ to the Q-function. The servoing policy is then simply $\pi_\theta(s_t) = \arg\min_u Q_{\theta,b}(s_t, u)$. For reinforcement learning, we optimized for the weights $\theta$ but kept the feature representation and its dynamics fixed.
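As a concrete illustration, the feature vector $\phi(s_t, u)$ can be assembled as follows (a minimal sketch, not the authors' code; it assumes the per-channel, per-scale squared feature errors have already been computed with the learned dynamics):

import numpy as np

def q_features(errors_per_channel, resolutions, u):
    """Assemble the feature vector phi(s_t, u) used by the linear Q-function.

    errors_per_channel: list of squared feature errors
        ||y*_(l,c) - f_c^(l)(y_t^(l,c), u)||_2^2, one entry per (channel, scale).
    resolutions: list of spatial resolutions |y^(l)_{.,c}| in matching order.
    u: control vector, whose squared entries form the remaining features.

    The Q-value is then phi(s, u).dot(theta) + b, with theta >= 0.
    """
    normalized_errors = [e / r for e, r in zip(errors_per_channel, resolutions)]
    return np.concatenate([np.asarray(normalized_errors), np.square(u)])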
5.3 LEARNING THE Q-FUNCTION WITH FITTED Q-ITERATION
Reinforcement learning methods that learn a Q-function do so by minimizing the Bellman error:
$$\left\| Q(s_t, u_t) - \left( c_t + \gamma \min_u Q(s_{t+1}, u) \right) \right\|_2^2. \quad (4)$$
In fitted Q-iteration, the agent iteratively gathers a dataset $\{s^{(i)}_t, u^{(i)}_t, c^{(i)}_t, s^{(i)}_{t+1}\}_{i=1}^N$ of $N$ samples according to an exploration policy, and then minimizes the Bellman error using this dataset. We use the term sampling iteration to refer to each iteration $j$ of this procedure. At the beginning of each sampling iteration, the current policy with added Gaussian noise is used as the exploration policy.
It is typically hard or unstable to optimize for both Q-functions that appear in the Bellman error of Equation (4), so it is usually optimized by iteratively optimizing the current Q-function while keeping the target Q-function constant. However, we notice that for a given state, the action that minimizes its Q-values is the same for any non-negative scaling $\alpha$ of $\theta$ and for any bias $b$. Thus, to speed up the optimization of the Q-function, we first set $\alpha^{(k-\frac{1}{2})}$ and $b^{(k-\frac{1}{2})}$ by jointly solving for $\alpha$ and $b$ of both the current and target Q-function:
$$\min_{\alpha \ge 0,\, b} \; \frac{1}{N} \sum_{i=1}^{N} \left\| Q_{\alpha\theta^{(k-1)}, b}\!\left(s^{(i)}_t, u^{(i)}_t\right) - \left( c^{(i)}_t + \gamma \min_u Q_{\alpha\theta^{(k-1)}, b}\!\left(s^{(i)}_{t+1}, u\right) \right) \right\|_2^2 + \nu \|\theta\|_2^2. \quad (5)$$
This is similar to how, in policy evaluation, state values can be computed by solving a linear system. We regularize the parameters with an $\ell_2$ penalty, weighted by $\nu \ge 0$. We use the term FQI iteration to refer to each iteration $k$ of optimizing the Bellman error, and we use the notation $(k-\frac{1}{2})$ to denote an intermediate step between iterations $(k-1)$ and $(k)$. The parameters $\theta$ can then be updated with $\theta^{(k-\frac{1}{2})} = \alpha^{(k-\frac{1}{2})} \theta^{(k-1)}$. Then, we update $\theta^{(k)}$ and $b^{(k)}$ by optimizing for $\theta$ and $b$ of the current Q-function while keeping the parameters of the target Q-function fixed:
$$\min_{\theta \ge 0,\, b} \; \frac{1}{N} \sum_{i=1}^{N} \left\| Q_{\theta, b}\!\left(s^{(i)}_t, u^{(i)}_t\right) - \left( c^{(i)}_t + \gamma \min_u Q_{\theta^{(k-\frac{1}{2})}, b^{(k-\frac{1}{2})}}\!\left(s^{(i)}_{t+1}, u\right) \right) \right\|_2^2 + \nu \|\theta\|_2^2. \quad (6)$$
A summary of the algorithm used to learn the feature weights is shown in Algorithm 1.
Algorithm 1 FQI with initialization of policy-independent parameters
1: procedure FQI($\theta^{(0)}$, $\sigma^2_{\text{exploration}}$, $\nu$)
2:   for $s = 1, \ldots, S$ do ▷ sampling iterations
3:     Gather dataset $\{s^{(i)}_t, u^{(i)}_t, c^{(i)}_t, s^{(i)}_{t+1}\}_{i=1}^N$ using exploration policy $\mathcal{N}(\pi_{\theta^{(0)}}, \sigma^2_{\text{exploration}})$
4:     for $k = 1, \ldots, K$ do ▷ FQI iterations
5:       Fit $\alpha^{(k-\frac{1}{2})}$ and $b^{(k-\frac{1}{2})}$ using (5)
6:       $\theta^{(k-\frac{1}{2})} \leftarrow \alpha^{(k-\frac{1}{2})} \theta^{(k-1)}$
7:       Fit $\theta^{(k)}$ and $b^{(k)}$ using (6)
8:     $\theta^{(0)} \leftarrow \theta^{(K)}$
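A minimal sketch of the fit in line 7 of Algorithm 1 (the Equation (6) step) is given below, assuming the Bellman targets and the feature matrix have been precomputed; the joint $\alpha$, $b$ step of Equation (5) and the inner minimization over $u$ are omitted, and the use of SciPy's bounded least-squares solver is a choice made for this example, not the paper's implementation.

import numpy as np
from scipy.optimize import lsq_linear

def fit_q_weights(phi, targets, nu=0.1):
    """Ridge regression of Bellman targets on phi(s_t, u_t), with non-negative
    weights theta and an unconstrained bias b (Equation 6 style).

    phi:     (N, d) matrix of features phi(s_t^(i), u_t^(i)).
    targets: (N,) vector c_t^(i) + gamma * min_u Q_target(s_{t+1}^(i), u),
             assumed to be precomputed outside this function.
    """
    n, d = phi.shape
    # Append a column of ones for the bias, plus ridge rows acting on theta only.
    A = np.hstack([phi, np.ones((n, 1))])
    A = np.vstack([A, np.sqrt(n * nu) * np.hstack([np.eye(d), np.zeros((d, 1))])])
    y = np.concatenate([targets, np.zeros(d)])
    lower = np.concatenate([np.zeros(d), [-np.inf]])   # theta >= 0, b unconstrained
    upper = np.full(d + 1, np.inf)
    sol = lsq_linear(A, y, bounds=(lower, upper))
    theta, b = sol.x[:d], sol.x[d]
    return theta, b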
6 EXPERIMENTS
We evaluate the performance of the model for visual servoing in a simulated environment. The simulated quadcopter is governed by rigid body dynamics. The robot has 4 degrees of freedom, corresponding to translation along three axes and yaw angle. This simulation is inspired by tasks in which an autonomous quadcopter flies above a city, with the goal of following some target object (e.g., a car).
6.1 LEARNING FEATURE DYNAMICS AND WEIGHTS WITH FQI
The dynamics for each of the features were trained using a dataset of 10000 samples (corresponding to 100 trajectories) with ADAM (Kingma & Ba, 2014). A single dynamics model was learned for each feature representation for all the training cars (Figure 3). This training set was generated by executing a hand-coded policy that navigates the quadcopter around a car for 100 time steps per trajectory, while the car moves around the city.
We used the proposed FQI algorithm to learn the weightings of the features and control regularizer. At every sampling iteration, the current policy was executed with Gaussian noise to gather data from 10 trajectories. All the trajectories in our experiments were up to 100 time steps long. The immediate cost received by the agent encodes the error of the target in image coordinates (details in Appendix B). Then, the parameters were iteratively updated by running K = 10 iterations of FQI. We ran the overall algorithm for only S = 2 sampling iterations and chose the parameters that achieved the best performance on 10 validation trajectories. These validation trajectories were obtained by randomly choosing 10 cars from the set of training cars and randomly sampling initial states, and executing the policy with the parameters of the current iteration. All the experiments share the same set of validation trajectories.
Table 1: Observations from test executions and trajectory costs for each feature dynamics model.
6.2 COMPARISON OF FEATURE REPRESENTATIONS FOR SERVOING
We compare the servoing performance for various feature dynamics models, where the weights are optimized with FQI. We execute the learned policies on 100 test trajectories and report the average cost of the trajectory rollouts on Figure 5. The cost of a single trajectory is the (undiscounted) sum of costs ct. We test the policies with cars that were seen during training as well as with a set of novel cars (Figure 4), to evaluate the generalization of the learned dynamics and optimized policies.
The test trajectories were obtained by randomly sampling 100 cars (with replacement) from one of the two sets of cars, and randomly sampling initial states (which are different from the ones used for validation). For consistency and reproducibility, the same sampled cars and initial states were used across all the test experiments, and the same initial states were used for both sets of cars. These test trajectories were never used during the development of the algorithm or for choosing hyperparameters.
From these results, we notice that policies based on deeper VGG features, up to VGG conv4 3, generally achieve better performance. However, the deepest feature representation, VGG conv5 3, is not as suitable for approximating Q-values. We hypothesize that this feature might be too spatially invariant and it might lack the necessary spatial information to differentiate among different car positions. The policies based on pixel intensities and VGG conv5 3 features perform worse on the novel cars. However, VGG features conv1 2 through conv4 3 achieve some degree of generalization on the novel cars.
We show sample trajectories in Table 1. The policy based on pixel-intensities is susceptible to occlusions and distractor objects that appear in the target image or during executions. This is because distinguishing these occlusions and distractors from the cars cannot be done using just RGB features.
6.3 COMPARISON OF WEIGHTINGS FROM OTHER OPTIMIZATION METHODS
We compare our policy using conv4 3 feature dynamics, with weights optimized by FQI, against policies that use these dynamics but with either no feature weighting or weights optimized by other algorithms.
For the case of no weighting, we use a single feature weight w but optimize the relative weighting of the controls λ with the cross entropy method (CEM) (De Boer et al., 2005). For the other cases, we learn the weights with Trust Region Policy Optimization (TRPO) (Schulman et al., 2015). Since the servoing policy is the minimizer of a quadratic objective (Equation (3)), we represent the policy as a neural network that has a matrix inverse operation at the output. We train this network for 2 and 50 sampling iterations, and use a batch size of 4000 samples per iteration. All of these methods use the same feature representation as ours, the only difference being how the weights w and λ are chosen.
We report the average costs of these methods on the right of Figure 6. In 2 sampling iterations, the policy learned with TRPO does not improve by much, whereas our policy learned with FQI significantly outperforms the other policies. The policy learned with TRPO improves further in 50 iterations; however, the cost incurred by this policy is still about one and a half times the cost of our policy, despite using more than 100 times as many trajectories.
6.4 COMPARISON TO PRIOR METHODS
We also consider other methods that do not use the dynamics-based servoing policy that we propose. We report their average performance on the left of Figure 6.
For one of the prior methods, we train a convolutional neural network (CNN) policy end-to-end with TRPO. The policy is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fullyconnected layers, with ReLU activations except for the output layer; the convolutional layers use
16 filters (4 × 4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each. The policy takes in raw pixel-intensities and outputs controls.
This policy achieves a modest performance (although still worse than the policies based on conv4 3 feature dynamics) but it requires significantly more training samples than any of the other learning-based methods. We also trained CNN policies that take in extracted VGG features (without any dynamics) as inputs, but they perform worse (see Table 4 in the Appendix). This suggests that given a policy parametrization that is expressive enough and given a large number of training samples, it is better to directly provide the raw pixel-intensity images to the policy instead of extracted VGG features. This is because VGG features are not optimized for this task and their representation loses some information that is useful for servoing.
The other two prior methods use classical image-based visual servoing (IBVS) (Chaumette & Hutchinson, 2006) with respect to Oriented FAST and Rotated BRIEF (ORB) feature points (Rublee et al., 2011), or feature points extracted from a visual tracker. For the former, the target features consist of only the ORB feature points that belong to the car, and this specifies that the car is relevant for the task. For the tracker-based method, we use the Continuous Convolution Operator Tracker (C-COT) (Danelljan et al., 2016) (the current state-of-the-art visual tracker) to get bounding boxes around the car and use the four corners of the box as the feature points for servoing. We provide the ground truth car’s bounding box of the first frame as an input to the C-COT tracker. For all of the IBVS methods, we provide the ground truth depth values of the feature points, which are used in the algorithm’s interaction matrix5.
The first method performs poorly, in part because ORB features are not discriminative enough for some of the cars, and the target feature points are sometimes matched to feature points that are not on the car. The tracker-based method achieves a relatively good performance. The gap in performance with respect to our method is in part due to the lack of car dynamics information in the IBVS model, whereas our method implicitly incorporates that in the learned feature dynamics. It is also worth noting that the tracker-based policy runs significantly slower than our method. The open-source implementation of the C-COT tracker6 runs at about 1Hz whereas our policy based on conv4 3 features runs at about 16Hz. Most of the computation time of our method is spent computing features from the VGG network, so there is room for speedups if we use a network that is less computationally demanding.
7 DISCUSSION
Manual design of visual features and dynamics models can limit the applicability of visual servoing approaches. We described an approach that combines learned visual features with learning predictive dynamics models and reinforcement learning to learn visual servoing mechanisms. Our experiments demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. For control we propose to learn Q-values, building on fitted Q-iteration, which at execution time allows for one-step lookahead calculations that optimize long term objectives. Our method can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.
ACKNOWLEDGEMENTS
This research was funded in part by the Army Research Office through the MAST program and the Berkeley DeepDrive consortium. Alex Lee was also supported by the NSF GRFP.
5The term interaction matrix, or feature Jacobian, is used in the visual servo literature to denote the Jacobian of the features with respect to the control.
6https://github.com/martin-danelljan/Continuous-ConvOp
A LINEARIZATION OF THE BILINEAR DYNAMICS
The optimization of Equation (3) can be solved efficiently by using a linearization of the dynamics,
$$f^{(l)}_c\!\left( y^{(l)}_{t,c}, u \right) = f^{(l)}_c\!\left( y^{(l)}_{t,c}, \bar{u} \right) + J^{(l)}_{t,c} (u - \bar{u}) = f^{(l)}_c\!\left( y^{(l)}_{t,c}, 0 \right) + J^{(l)}_{t,c}\, u, \quad (7)$$
where $J^{(l)}_{t,c}$ is the Jacobian matrix with partial derivatives $\frac{\partial f^{(l)}_c}{\partial u}(y^{(l)}_{t,c}, \bar{u})$ and $\bar{u}$ is the linearization point. Since the bilinear dynamics are linear with respect to the controls, this linearization is exact and the Jacobian matrix does not depend on $\bar{u}$. Without loss of generality, we set $\bar{u} = 0$.
Furthermore, the bilinear dynamics allows the Jacobian matrix to be computed efficiently by simply doing a forward pass through the model. For the locally bilinear dynamics of Equation (2), the j-th column of the Jacobian matrix is given by
$$J^{(l)}_{t,c,j} = \frac{\partial f^{(l)}_c}{\partial u_j}\!\left(y^{(l)}_{t,c}, 0\right) = W^{(l)}_{c,j} * y^{(l)}_{t,c} + B^{(l)}_{c,j}. \quad (8)$$
B SERVOING COST FUNCTION FOR REINFORCEMENT LEARNING
The goal of reinforcement learning is to find a policy that maximizes the expected sum of rewards, or equivalently, a policy that minimizes the expected sum of costs. The cost should be one that quantifies progress towards the goal. We define the cost function in terms of the position of the target object (in the camera’s local frame) after the action has been taken,
$$c(s_t, u_t, s_{t+1}) = \begin{cases} \sqrt{ \left( \dfrac{p^x_{t+1}}{p^z_{t+1}} \right)^2 + \left( \dfrac{p^y_{t+1}}{p^z_{t+1}} \right)^2 + \left( \dfrac{1}{p^z_{t+1}} - \dfrac{1}{p^z_*} \right)^2 }, & \text{if } \|p_{t+1}\|_2 \ge \tau \text{ and car in FOV} \\[2ex] (T - t + 1)\, c(\cdot, \cdot, s_t), & \text{otherwise}, \end{cases} \quad (9)$$
where $T$ is the maximum trajectory length. The episode terminates early if the camera is too close to the car (less than a distance $\tau$) or the car’s origin is outside the camera’s field of view (FOV). The car’s position at time $t$ is $p_t = (p^x_t, p^y_t, p^z_t)$ and the car’s target position is $p_* = (0, 0, p^z_*)$, both in the camera’s local frame (z-direction is forward). Our experiments use $T = 100$ and $\tau = 4\,\mathrm{m}$.
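A sketch of this cost computation is given below (illustrative only); the field-of-view test is assumed to be provided by the simulator.

import numpy as np

def step_cost(p, p_target_z):
    """Image-space error term of Equation (9) for a car position p = (p^x, p^y, p^z)
    expressed in the camera frame."""
    px, py, pz = p
    return np.sqrt((px / pz) ** 2 + (py / pz) ** 2 + (1.0 / pz - 1.0 / p_target_z) ** 2)

def servoing_cost(p_t, p_next, p_target_z, t, T=100, tau=4.0, car_in_fov=True):
    """Immediate cost c(s_t, u_t, s_{t+1}) of Equation (9). On early termination
    (camera closer than tau to the car, or the car leaves the field of view), the
    remaining T - t + 1 steps are charged the cost of the current state s_t."""
    if np.linalg.norm(p_next) >= tau and car_in_fov:
        return step_cost(p_next, p_target_z)
    return (T - t + 1) * step_cost(p_t, p_target_z)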
C EXPERIMENT DETAILS
C.1 TASK SETUP
The camera is attached to the vehicle slightly in front of the robot’s origin and facing down at an angle of π/6 rad, similar to a commercial quadcopter drone. The robot has 4 degrees of freedom, corresponding to translation and yaw angle. Pitch and roll are held fixed.
In our simulations, the quadcopter follows a car that drives at 1m s−1 along city roads during training and testing. The quadcopter’s speed is limited to within 10m s−1 for each translational degree of freedom, and its angular speed is limited to within π/2 rad s−1. The simulator runs at 10Hz. For each trajectory, a car is chosen randomly from a set of cars, and placed randomly on one of the roads. The quadcopter is initialized right behind the car, in the desired relative position for following. The image observed at the beginning of the trajectory is used as the goal observation.
C.2 LEARNING FEATURE DYNAMICS
The dynamics of all the features were trained using a dataset of 10000 triplets $(x_t, u_t, x_{t+1})$. The observations are 128 × 128 RGB images and the actions are 4-dimensional vectors of real numbers encoding the linear and angular (yaw) velocities. The actions are normalized to between −1 and 1. The training set was generated from 100 trajectories of a quadcopter following a car around the city with some randomness. Each trajectory was 100 steps long. Only 5 training cars were shown during learning. The generation process of each trajectory is as follows: First, a car is chosen at random from the set of available cars and it is randomly placed on one of the roads. Then, the quadcopter is placed at some random position relative to the car’s horizontal pose, which is the car’s pose rotated so that its vertical axis is aligned with that of the world. This quadcopter position is uniformly sampled in cylindrical coordinates relative to the car’s horizontal pose, with heights in the interval 12 m to 18 m, and azimuthal angles in the interval −π/2 rad to π/2 rad (where the origin of the azimuthal angle is the back of the car). The radii and yaw angles are initialized so that the car is in the middle of the image. At every time step, the robot takes an action that moves it towards a target pose, with some additive Gaussian noise (σ = 0.2). The target pose is sampled according to the same procedure as the initial pose, and it is sampled once at the beginning of each trajectory.
We try the fully and locally connected dynamics for pixel intensities to better understand the performance trade-offs when assuming locally connected dynamics. We do not use the latter for the semantic features since they are too high-dimensional for the dynamics model to fit in memory. The dynamics models were trained with ADAM using 10000 iterations, a batch size of 32, a learning rate of 0.001, and momentums of 0.9 and 0.999, and a weight decay of 0.0005.
C.3 LEARNING WEIGHTING OF FEATURE DYNAMICS WITH REINFORCEMENT LEARNING
We use CEM, TRPO and FQI to learn the feature weighting and report the performance of the learned policies in Table 2. We use the cost function described in Appendix B, a discount factor of γ = 0.9, and trajectories of up to 100 steps. All the algorithms used initial weights of w = 1 and λ = 1, and a Gaussian exploration policy with the current policy as the mean and a fixed standard deviation σexploration = 0.2.
For the case of unweighted features, we use CEM to optimize for a single weight $w$ and for the weights $\lambda$. For the case of weighted features, we use CEM to optimize for the full space of parameters, but we only do that for the pixel feature dynamics since CEM does not scale for high-dimensional problems, which is the case for all the VGG features. Each iteration of CEM performs a certain number of noisy evaluations and selects the top 20% for the elite set. The number of noisy evaluations per iteration was 3 times the number of parameters being optimized. Each noisy evaluation used the average sum of costs of 10 trajectory rollouts as its evaluation metric. The parameters of the last iteration were used for the final policy. The policies with unweighted feature dynamics and the policies with pixel feature dynamics were trained for 10 and 25 iterations, respectively.
We use TRPO to optimize for the full space of parameters for each of the feature dynamics we consider in this work. We use a Gaussian policy, where the mean is the servoing policy of Equation (3) and the standard deviation is fixed to σexploration = 0.2 (i.e. we do not learn the standard deviation). Since the parameters are constrained to be non-negative, we parametrize the TRPO policies with $\sqrt{w}$ and $\sqrt{\lambda}$. We use a Gaussian baseline, where the mean is a 5-layer CNN, consisting of 2 convolutional and 3 fully connected layers, and a standard deviation that is initialized to 1. The convolutional layers use 16 filters (4 × 4, stride 2) each, the first 2 fully-connected layers use 32 hidden units each, and all the layers except for the last one use ReLU activations. The inputs of the baseline network are the features (either pixel intensities or VGG features) corresponding to the feature dynamics being used. The parameters of the last iteration were used for the final policy. The policies are trained with TRPO for 50 iterations, a batch size of 4000 samples per iteration, and a step size of 0.01.
We use our proposed FQI algorithm to optimize for the weights w,λ, and surpass the other methods in terms of performance on test executions, sample efficiency, and overall computation efficiency7. The updates of the inner iteration of our algorithm are computationally efficient; since the data is fixed for a given sampling iteration, we can precompute φ (st,ut) and certain terms of φ (st+1, ·). The parameters that achieved the best performance on 10 validation trajectories were used for the final policy. The policies are trained with FQI for S = 2 sampling iterations, a batch size of 10 trajectories per sampling iteration, K = 10 inner iterations per sampling iteration, and a regularization coefficient of ν = 0.1. We found that regularization of the parameters was important for the algorithm to converge. We show sample trajectories of the resulting policies in Table 3.
The FQI algorithm often achieved most of its performance gain after the first iteration. We ran additional sampling iterations of FQI to see if the policies improved further. For each iteration, we evaluated the performance of the policies on 10 validation trajectories. We did the same for the policies trained with TRPO, and we compare the learning curves of both methods in Figure 7.
7Our policy based on conv4 3 features takes around 650 s to run K = 10 iterations of FQI for a given batch size of 10 training trajectories.
C.4 LEARNING END-TO-END SERVOING POLICIES WITH TRPO
We use TRPO to train end-to-end servoing policies for various observation modalities and report the performance of the learned policies in Table 4. The policies are trained with the set of training cars, and tested on both this set and on the set of novel cars. The observation modalities that we consider are ground truth car positions (relative to the quadcopter), images of pixel intensities from the quadcopter’s camera, and VGG features extracted from those images. Unlike our method and the other experiments, no feature dynamics are explicitly learned for these experiments.
We use a Gaussian policy, where the mean is either a multi-layer perceptron (MLP) or a convolutional neural net (CNN), and the standard deviation is initialized to 1. We also use a Gaussian baseline, which is parametrized just as the corresponding Gaussian policy (but no parameters are shared between the policy and the baseline). For the policy that takes in car positions, the mean is parametrized as a 3-layer MLP, with tanh non-linearities except for the output layer; the first 2 fully connected layers use 32 hidden units each. For the other policies, each of their means is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU non-linearities except for the output layer; the convolutional layers use 16 filters (4×4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each.
The CNN policies would often not converge for several randomly initialized parameters. Thus, at the beginning of training, we tried multiple random seeds until we got a policy that achieved a relatively low cost on validation trajectories, and used the best initialization for training. The MLP policy did not have this problem, so we did not have to try multiple random initializations for it. All the policies are trained with a batch size of 4000 samples, 500 iterations, and a step size of 0.01. The parameters of the last iteration were used for the final policy.
C.5 CLASSICAL IMAGE-BASED VISUAL SERVOING
Traditional visual servoing techniques (Feddema & Mitchell, 1989; Weiss et al., 1987) use the image-plane coordinates of a set of points for control. For comparison to our method, we evaluate the servoing performance of feature points derived from bounding boxes and keypoints derived from hand-engineered features, and report the costs of test executions on Table 5.
We use bounding boxes from the C-COT tracker (Danelljan et al., 2016) (the current state-of-the-art visual tracker) and ground truth bounding boxes from the simulator. The latter is defined as the box that tightly fits around the visible portions of the car. We provide the ground truth bounding box of the first frame to the C-COT tracker to indicate that we want to track the car. We use the four corners of the box as the feature points for servoing to take into account the position and scale of the car in image coordinates.
We provide the ground truth depth values of the feature points for the interaction matrices. In classical image-based visual servoing, the control law involves the interaction matrix (also known as feature Jacobian), which is the Jacobian of the points in image space with respect to the camera’s control (see Chaumette & Hutchinson (2006) for details). The analytical feature Jacobian used in IBVS assumes that the target points are static in the world frame. This is not true for a moving car, so we consider a variant where the feature Jacobian incorporates the ground truth dynamics of the car. This amounts to adding a non-constant translation bias to the output of the dynamics function, where the translation is the displacement due to the car’s movement of the 3-dimensional point in the camera’s reference frame. Note that this is still not exactly equivalent to having the car being static since the roads have different slopes but the pitch and roll of the quadcopter is constrained to be fixed.
For the hand-crafted features, we consider SIFT (Lowe, 2004), SURF (Bay et al., 2006) and ORB (Rublee et al., 2011) keypoints. We filter out the keypoints of the first frame that do not belong to the car and use these as the target keypoints. However, we use all the keypoints for the subsequent observations.
The servoing policies based on bounding box features achieve low cost, and even lower ones if ground truth car dynamics is used. However, servoing with respect to hand-crafted feature points is significantly worse than the other methods. This is, in part, because the feature extraction and matching process introduces compounding errors. Similar results were found by Collewet & Marchand (2011), who proposed photometric visual servoing (i.e. servoing with respect to pixel intensities) and showed that it outperforms, by an order of magnitude, classical visual servoing that uses SURF features.
C.6 CLASSICAL POSITION-BASED VISUAL SERVOING
Position-based visual servoing (PBVS) techniques use poses of a target object for control (see Chaumette & Hutchinson (2006) for details). We evaluate the servoing performance of a few variants, and report the costs of test executions on Table 6.
Similar to our IBVS experiments, we consider a variant that uses the car pose of the next time step as a way to incorporate the ground truth car dynamics into the interaction matrix. Since the cost function is invariant to the orientation of the car, we also consider a variant where the policy only minimizes the translational part of the pose error.
These servoing policies, which use ground truth car poses, outperforms all the other policies based on images. In addition, the performance is more than two orders of magnitude better if ground truth car dynamics is used. | 1. What is the main contribution of the paper regarding visual servoing?
2. What are the strengths of the proposed approach, particularly in terms of controlled experiments and performance benefits?
3. Do you have any suggestions for improving the paper, such as using more complex benchmarks, end-to-end training, representation learning, and reproducibility?
4. Are there any typos or minor issues in the paper that could be corrected? | Review | Review
1) Summary
This paper proposes to tackle visual servoing (specifically target following) using spatial feature maps from convolutional networks pre-trained on general image classification tasks. The authors combine bilinear models of one-step dynamics of visual feature maps at multiple scales with a reinforcement learning algorithm to learn a servoing policy. This policy is learned by minimizing a regularized weighted average of distances to features predicted by the aforementioned model of visual dynamics.
2) Contributions
+ Controlled experiments in simulation quantifying the usefulness of pre-trained deep features for visual servoing.
+ Clear performance benefits with respect to many sensible baselines, including ones using ground truth bounding boxes.
+ Principled learning of multi-scale visual feature weights with an efficient trust-region fitted Q-iteration algorithm to handle the problem of distractors.
+ Good sample efficiency thanks to the choice of Q-function approximator and the model-based one-step visual feature dynamics.
+ Open source virtual city environment to benchmark visual servoing.
3) Suggestions for improvement
- More complex benchmark:
Although the environment is not just a toy synthetic one, the experiments would benefit greatly from more complex visual conditions (clutter, distractors, appearance and motion variety, environment richness and diversity, etc). At least, the realism and diversity of object appearances could be vastly improved by using a larger number of 3D car models, including more realistic and diverse ones that can be obtained from Google SketchUp for instance, and populating the environment with more distractor cars (in traffic or parked). This is important as the main desired quality of the approach is robustness to visual variations.
- End-to-end and representation learning:
Although the improvements are already significant in the current synthetic experiments, it would be interesting to measure the impact of end-to-end training (i.e. also fine-tuning the convnet), as it is possibly needed for better generalization in more challenging visual conditions. It would also allow to measure the benefit of deep representation learning for visual servoing, which would be relevant to ICLR (there is no representation learning so far, although the method can be straightforwardly adapted as the authors mention briefly).
- Reproducibility:
The formalism and algorithms are clearly explained, but there is a slightly overwhelming mass of practical tricks and implementation details described with varying levels of detail throughout the paper and appendix. Grouping, simplifying, or reorganizing the exposition of these implementation details would help, but a better way would probably consist in only summarizing the most important ones in the main text and linking to an open source implementation of the method for completeness.
- Typos:
p.2: "learning is a relative[ly] recent addition"
p.2: "be applied [to] directly learn"
4) Conclusion
In spite of the aforementioned limits of the experiments, this paper is interesting and solid, in part thanks to the excellent reply to the pre-review questions and the subsequent improved revision. This leads me to believe the authors are more than capable of following to a significant extent the aforementioned suggestions for improvement, thus leading to an even better paper. |
ICLR | Title
VICE: Variational Inference for Concept Embeddings
Abstract
In this paper we introduce Variational Inference for Concept Embeddings (VICE), a novel method for learning object concept embeddings from human behavior in an odd-one-out task. We use variational inference to obtain a sparse, non-negative solution, with uncertainty information about each embedding value. We leverage this information in a statistical procedure for selecting the dimensionality of the model, based on hypothesis-testing over a validation set. VICE performs as well or better than previous methods on a variety of criteria: accuracy of predicting human behavior in an odd-one-out task, calibration to (empirical) human choice probabilities, reproducibility of object representations across different random initializations, and superior performance on small datasets. The latter is particularly important in cognitive science, where data collection is expensive. Finally, VICE yields highly interpretable object representations, allowing humans to describe the characteristics being represented by each latent dimension.
1 INTRODUCTION AND RELATED WORK
Human knowledge about object concepts encompasses many types of information, ranging from function to visual appearance, as well as encyclopedic facts or taxonomic characteristics. This knowledge supports the identification of objects, inferences about what interactions they support, or what the effects of such interactions in the environment will be. Key questions for cognitive scientists modelling human performance in experiments are 1) which of this information is accessible to participants and 2) how is it used across different tasks. Several studies (McRae et al., 2005; Devereux et al., 2013; Buchanan et al., 2019; Hovhannisyan et al., 2021) have asked subjects to list properties for hundreds to thousands of objects, yielding thousands of answers about the types of information above. Properties exist at many levels, ranging from categorization (e.g. "is an animal") to very specific facts (e.g. "is eaten in France"). Objects are implicitly represented as a vector of binary properties. This approach is agnostic to downstream prediction tasks, but does not provide an indication of which properties are more important – other than frequency of listing – and does not allow for graded property values. An alternative approach is for researchers to postulate dimensions of interest, and then ask human subjects to place each object in each dimension. An example is Binder et al. (2016), who collected ratings for hundreds of objects, as well as verbs and adjectives, in 65 dimensions reflecting sensory, motor, spatial, temporal, affective, social, and cognitive experiences.
The overall problem is then one of discovering a representation for objects that is not biased by a particular task, and is interpretable without requiring researchers to postulate the types of information represented. Several researchers have tried to develop interpretable concept representation spaces from text corpora, via word embeddings with positivity and sparsity constraints (Murphy et al., 2012), topic model representations of Wikipedia articles about objects (Pereira et al., 2013), transformations of word embeddings into sparse, positive spaces (Subramanian et al., 2018; Panigrahi et al., 2019) or predictions of properties (Devereux et al., 2013) or dimensions (Utsumi, 2020), or text corpora combined with imaging data (Fyshe et al., 2014) or with object images (Derby et al., 2018). Finally, Derby et al. (2019) introduced a neural network mapping the sparse feature space of a semantic property norm to the dense space of a word embedding, identifying informative combinations of properties or allowing ranking of candidate properties for arbitrary words.
Recently, Zheng et al. (2019) and Hebart et al. (2020) introduced SPoSE, a model of the mental representations of 1,854 objects in a 49-dimensional space. The model was derived from a dataset of
1.5M Amazon Mechanical Turk (AMT) judgments of object similarity, where subjects were asked which of a random triplet of objects was the odd one out. The model embedded each object as a vector in a space where each dimension was constrained to be sparse and positive. Triplet judgments were predicted as a function of the similarity between embedding vectors of the three objects considered. The authors showed that these dimensions were predictable as a combination of elementary properties in the Devereux et al. (2013) norm, which often co-occur across many objects. Hebart et al. (2020) further showed that 1) human subjects could coherently label what the dimensions were “about”, ranging from categorical (e.g. is animate, food, drink, building) to functional (e.g. container, tool) or structural (e.g. made of metal or wood, has inner structure). Subjects could also predict what dimension values new objects would have, based on knowing the dimension value for a few other objects. These results suggest that SPoSE captures core object knowledge that subjects use. Navarro & Griffiths (2008) introduced a related method for learning semantic concept embeddings from similarity data, which infers the number of latent dimensions using the Indian Buffet Process (IBP, Griffiths & Ghahramani (2011)), but their approach is not directly applicable to our setting due to reliance on continuous-valued similarity ratings instead of forced-choice behavior. Furthermore, it is known to be challenging to scale the IBP to the number of features and observations considered in our work (Ghahramani, 2013). Roads & Love (2021) introduced a related method for deriving an object embedding from behavior in a 8-rank-2 task. Their method aimed to predict behavior from the embeddings, using active sampling to query subjects with the most informative stimuli. The method was not meant to produce interpretable dimensions, but rather construct object similarity matrix as efficiently as possible.
There is growing interest by cognitive scientists in using SPoSE, as it makes it possible to discover an item representation for any kind of item amenable to an odd-one-out comparison in a triplet task. Furthermore, the combination of positivity and sparsity constraints in each dimension of the representation leads to interpretability by human subjects: no item is represented by every dimension, and most dimensions are present for only a few items. That item representation can then be used within other behavioral prediction models, to make predictions about neuroimaging data, etc.
For this potential to be realized, however, we believe a number of issues with SPoSE should be addressed. The first is the use of an l1 sparsity penalty to promote interpretability of dimensions. l1 achieves sparsity at the cost of unnecessarily shrinking larger values (Belloni & Chernozhukov, 2013). In SPoSE, 6-11 dominant dimensions for an object account for most of the prediction performance; the cost of removing irrelevant dimensions is to potentially make dominant dimensions smaller than they should be, and affect performance. Second, the l1 penalty is analogous to having a Laplace prior over those values. If we consider the distribution of values across objects for the two most important SPoSE dimensions, in Figure 1, we can see that they have a bimodal distribution, with a spike around 0 and a much smaller, wide slab of probability for non-zero values, which is not Laplace. Overcoming this "wrong" prior requires more data than strictly necessary to learn the representation. SPoSE was developed with a dataset that was orders of magnitude larger than what a typical experiment might collect, but it was never tested on smaller datasets. Finally, SPoSE uses a heuristic, subjective criterion for determining how many dimensions the solution should have.
In this paper we introduce VICE, an approach for variational inference of object concept embeddings in a space with interpretable sparse, positive dimensions, which addresses the SPoSE issues identified above. Specifically, we encourage sparsity and small weights by using a spike-and-slab prior. This is more appropriate than a Laplace prior, because importance – the value an object takes in a dimension – is different from relevance – whether the dimension matters for that object – and they can be
controlled separately with a spike-and-slab prior. The prior hyperparameters are meant to be intuitive to a user, and to make it easier to specify hypotheses about dimensional structure. We use variational Bayes both because it is a Bayesian approach, and also because it assumes a unimodal posterior for the loading of each object in each dimension. It also allows a more principled procedure for determining how many dimensions the model should have, by taking into account uncertainty about their values. We compare our model with SPoSE over different subsets of the dataset used to develop it, and verify that it performs as well or better by various criteria: prediction of behavior, calibration of the prediction of decision probabilities, and reproducibility of solutions across seeds. Importantly, it has significantly better performance on smaller datasets (5− 10% of the original SPoSE dataset). Our implementation of VICE is available on GitHub1, and will be de-anonymized upon acceptance.
2 METHODS
2.1 ODD-ONE-OUT TASK
The odd-one-out task is motivated by the problem of discovering object embeddings based on similarity judgments involving a set of $m$ different object concepts, which we will denote by $c_1, \ldots, c_m$ (e.g. $c_1$ = ‘aardvark’, . . . , $c_{1854}$ = ‘zucchini’). These similarity judgments are collected from human participants, who are given queries which consist of a ‘triplet’ of three concepts $\{c_{i_1}, c_{i_2}, c_{i_3}\}$, for instance, $\{c_{268}, c_{609}, c_{1581}\}$ = {‘suit’, ‘flamingo’, ‘car’}. Participants are asked to consider the three pairs within the triplet $\{(c_{i_1}, c_{i_2}), (c_{i_1}, c_{i_3}), (c_{i_2}, c_{i_3})\}$, and to decide which item had the smallest similarity to the other two (the "odd-one-out"). This is equivalent to choosing the pair with the greatest similarity. Let $(y_1, y_2)$ denote the indices in this pair, e.g. for ‘suit’ and ‘flamingo’ they would be $(y_1, y_2) = (268, 609)$. A dataset $\mathcal{D}$ is a set of $N$ pairs of concept triplets and one-hot vectors that correspond to the indices of the two most similar concepts, i.e. $(\{c_{i_1}, c_{i_2}, c_{i_3}\}, (y_{i_1}, y_{i_2}))$.
2.2 SPOSE
Sparse Positive object Similarity Embedding (SPoSE) (Zheng et al., 2019) is an approach for finding interpretable item dimensions from an odd-one-out task. It does so by finding an embedding vector $x_i = (x_{i1}, \ldots, x_{ip})$ for every item $c_i$. The similarity $S_{ij}$ of two items (e.g. $c_i$ and $c_j$) is computed by the dot product of the corresponding embeddings (i.e. $x_i$ and $x_j$), $S_{ij} = \langle x_i, x_j \rangle$. From these similarities, the probability of choosing $(y_{i_1}, y_{i_2})$ as the most similar pair of items given the item triplet $\{c_{i_1}, c_{i_2}, c_{i_3}\}$ and given embedding vectors $\{x_{i_1}, x_{i_2}, x_{i_3}\}$ is computed as:
$$p\!\left((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}, \{x_{i_1}, x_{i_2}, x_{i_3}\}\right) = \frac{\exp\!\left(S_{y_{i_1}, y_{i_2}}\right)}{\exp\!\left(S_{i_1, i_2}\right) + \exp\!\left(S_{i_1, i_3}\right) + \exp\!\left(S_{i_2, i_3}\right)}. \quad (1)$$
SPoSE uses maximum a posteriori (MAP) estimation to find the most likely embedding given the training data and a prior:
$$\arg\max_X \; \log p(X \mid \mathcal{D}_{\text{train}}) = \arg\max_X \; \log p(\mathcal{D}_{\text{train}} \mid X) + \log p(X), \quad (2)$$
1Link to anonymous GitHub repository: https://anonymous.4open.science/r/VICE-59F0
where X is a matrix containing the embedding vectors for all of the items and p(X) is a prior for the embeddings, and p(Dtrain,j |X) is defined in (7). To induce sparsity in the embeddings, SPoSE uses a mean-field Laplace prior, leading to this objective:
$$\arg\max_X \; \sum_{j=1}^{n_{\text{train}}} \log p(\mathcal{D}_{\text{train},j} \mid X) - \lambda \sum_{i=1}^{m} \|x_i\|_1 \quad (3)$$
Here, $\|\cdot\|_1$ is the $\ell_1$ norm, so $\|x\|_1 = \sum_{f=1}^{p} |x_f|$, and $x_f \ge 0$ for $f = 1, \ldots, p$. The regularization parameter, $\lambda$, is selected out of a grid of candidate values by choosing the one that achieves the lowest (average) cross-entropy on the validation set (across twenty random seeds). The final dimensionality of the embedding, $p$, is determined heuristically from the data. If $p$ is set to be larger than the number of dimensions supported by the data, the SPoSE algorithm will shrink entire dimensions towards zero by removing weights with a magnitude less than a given absolute threshold. While a threshold of 0.1 is suggested (Zheng et al., 2019), no justification is given for that particular value, which is problematic given that the number of dimensions removed is quite sensitive to that choice.
2.3 VICE
2.3.1 VARIATIONAL BAYESIAN INFERENCE
Given the goal of better approximating p(X|Dtrain), we use variational inference. We approximate p(X|Dtrain) with a variational distribution, qθ(X), where q is our chosen family of distributions, and θ is a parameter that is learned in order to optimize the Kullback–Leibler (KL) divergence to the true posterior, p(X|Dtrain). In variational inference, the KL divergence objective function is:
$$\arg\min_\theta \; \mathbb{E}_{q_\theta(X)}\!\left[ \frac{1}{n_{\text{train}}} \left( \log q_\theta(X) - \log p(X) \right) - \frac{1}{n_{\text{train}}} \sum_{i=1}^{n_{\text{train}}} \log p(\mathcal{D}_{\text{train},i} \mid X) \right] \quad (4)$$
In order to use variational inference, a parametric variational distribution must be chosen. For VICE, we use a Gaussian variational distribution with a diagonal covariance matrix, $q_\theta(X) = \mathcal{N}(\mu, \operatorname{diag}(\sigma^2))$, where the learnable parameters $\theta$ are $\mu$ and $\sigma$. This means that each embedding dimension has a mean, the most likely value for that dimension, and a standard deviation, the propensity of the embedding value to be close to the mean.
Similarly to Blundell et al. (2015), we use a Monte Carlo (MC) approximation of the above objective function by sampling a limited number of $X$s from $q_{\mu,\sigma}(X)$ during training. We generate $X$ by means of the reparameterization trick (Kingma & Welling, 2013), $X_{\theta,\epsilon} = \mu + \sigma \odot \epsilon$, where $\epsilon$ is an $N \times p$ matrix of standard normal variates, leading to the objective:
$$\arg\min_\theta \; \frac{1}{m} \sum_{j=1}^{m} \left[ \frac{1}{n_{\text{train}}} \left( \log q_\theta(X_{\theta,\epsilon_j}) - \log p(X_{\theta,\epsilon_j}) \right) - \frac{1}{n_{\text{train}}} \sum_{i=1}^{n_{\text{train}}} \log p\!\left(\mathcal{D}_{\text{train},i} \mid [X_{\theta,\epsilon_j}]_+\right) \right] \quad (5)$$
where $\epsilon_j \in \mathbb{R}^{N \times p}$ is entrywise $\mathcal{N}(0, 1)$ and where $[\cdot]_+$ is the ReLU function. As commonly done in the dropout and Bayesian neural network literature (Srivastava et al., 2014; Blundell et al., 2015; Gal & Ghahramani, 2016; McClure & Kriegeskorte, 2016), we set $m$ to 1 for computational efficiency.
In Equation 5, the expected log-likelihood of the entire training data is computed. However, using the entire training data set to compute the gradient update often works poorly for non-convex objective functions. This is due to the expensive computational cost of each update and to the convergence to poorly generalizing solutions (Smith et al., 2020). As a result, we stochastically approximate (Robbins & Monro, 1951) the training log-likelihood using random subsets (i.e. mini-batches) of the training dataset, with each mini-batch consisting of b triplets. This leads to the final objective
$$\arg\min_\theta \; \frac{1}{n_{\text{train}}} \left( \log q_\theta(X_{\theta,\epsilon}) - \log p(X_{\theta,\epsilon}) \right) - \frac{1}{b} \sum_{i=1}^{b} \log p\!\left(\mathcal{D}_{\text{train},i} \mid [X_{\theta,\epsilon}]_+\right) \quad (6)$$
recalling that
$$p(\mathcal{D}_{\text{train},i} \mid X) = \frac{\exp\!\left(x_{y_{1,i}}^\top x_{y_{2,i}}\right)}{\exp\!\left(x_{i_{1,i}}^\top x_{i_{2,i}}\right) + \exp\!\left(x_{i_{1,i}}^\top x_{i_{3,i}}\right) + \exp\!\left(x_{i_{2,i}}^\top x_{i_{3,i}}\right)}. \quad (7)$$
2.3.2 SPIKE-AND-SLAB PRIOR
A key feature of SPoSE is sparsity. As discussed above, SPoSE induced sparsity using a zero-mean Laplace prior. We can empirically examine whether the Laplace prior is a realistic assumption, given the distribution of values in SPoSE dimensions. As Figure 1 depicts, these histograms do not resemble a Laplace distribution. Instead, it looks like there is a "spike" of probability at zero and a much smaller, but wide, "slab" of probability for the non-zero values of a SPoSE dimension. To model this, we use a spike-and-slab Gaussian mixture prior, as introduced in Blundell et al. (2015):
$$p(X) = \prod_{i=1}^{N} \prod_{f=1}^{p} \left( \pi\, \mathcal{N}(x_{if}; 0, \sigma^2_{\text{spike}}) + (1 - \pi)\, \mathcal{N}(x_{if}; 0, \sigma^2_{\text{slab}}) \right) \quad (8)$$
This prior has three parameters. $\pi$ is the probability that an embedding dimension will be drawn from the spike Gaussian instead of the slab Gaussian. The standard deviations $\sigma_{\text{spike}}$ and $\sigma_{\text{slab}}$ control the likelihood of an embedding value being set to 0 in the spike or slab distributions, respectively. $x_{if}$ is the embedding weight for the $i$th item in the $f$th dimension. Since spike and slab distributions are mathematically interchangeable, by convention we require that $\sigma_{\text{spike}} \ll \sigma_{\text{slab}}$. In our experiments, these are chosen with grid search on one half of the validation set, the “tuning set”. (The other half of the validation set, the “pruning set”, is used for dimensionality reduction, as we describe in §2.3.4.)
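A sketch of the corresponding log prior, evaluated on a sampled embedding matrix, is given below (illustrative only); the hyperparameter values shown are placeholders rather than the values selected by grid search.

import torch
from torch.distributions import Normal

def spike_and_slab_log_prior(X, pi=0.5, sigma_spike=0.25, sigma_slab=1.0):
    """Log density of the spike-and-slab Gaussian mixture prior (Equation 8),
    evaluated entrywise on the sampled embedding matrix X and summed.

    The hyperparameters pi, sigma_spike and sigma_slab shown here are
    placeholders; VICE chooses them on the tuning set, with sigma_spike << sigma_slab.
    """
    spike = Normal(0.0, sigma_spike).log_prob(X) + torch.log(torch.tensor(pi))
    slab = Normal(0.0, sigma_slab).log_prob(X) + torch.log(torch.tensor(1.0 - pi))
    # log(pi * N_spike + (1 - pi) * N_slab), computed stably in log space
    return torch.logsumexp(torch.stack([spike, slab]), dim=0).sum()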
2.3.3 PREDICTING THE ODD-ONE-OUT USING VICE
In this work, we consider two different prediction problems given a new triplet: (1) predicting the choice and (2) predicting the distribution of the choice. In either case, we start with computing the posterior probability distribution over the three triplet choices. If predicting a choice, then we output the choice with the maximum posterior probability. (For details on how we handle ties, see §3.3.1.) If the goal is to predict the distribution, then we return the predicted distribution.
The predicted probability distribution is computed from the variational posterior, $q_\theta(X)$. When making predictions, we want to compute the probability of an odd-one-out for a given triplet. We approximate this probability by using an MC estimate from $m$ samples $X^j = X_{\theta,\epsilon_j}$ for $j = 1, \ldots, m$ (Graves, 2011; Blundell et al., 2015; Kingma & Welling, 2014; McClure & Kriegeskorte, 2016; Blei et al., 2017). Mathematically, this means that we compute the predicted distribution as
$$\hat{p}\!\left((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}\right) \approx \frac{1}{m} \sum_{j=1}^{m} p\!\left((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}, \{[x^j_{i_1}]_+, [x^j_{i_2}]_+, [x^j_{i_3}]_+\}\right). \quad (9)$$
2.3.4 DIMENSIONALITY REDUCTION FOR VICE
For interpretability purposes, it is crucial that the model does not use more dimensions than necessary. Zheng et al. (2019) accomplished this through the sparsity-inducing penalty that causes dimensions to shrink towards zero if they do not contribute to explaining the data. (see Section 2.2). Note, however, that these uninformative weights do not totally go to zero because of noise in the gradients. Hence, these dimensions were pruned by choosing a threshold for the L1 norm of the dimension based on looking at the “elbow plot” of the sorted L1 norms of the dimensions; this approach is subjective and highly dependent on the specific dataset. In VICE, the KL penalty we use has a similar effect of causing uninformative weights to shrink. Rather than using a user-defined threshold to prune dimensions, VICE exploits the uncertainty information obtained in training the model to select a set of informative dimensions. The pruning procedure consists of three steps: (1) assigning an importance score to each dimension; (2) clustering dimensions by importance; and (3) choosing the subset of clusters that best explains the validation set. We describe each of these three steps in detail below.
Assign an importance score to each dimension Intuitively, the importance score reflects the number of objects that we can confidently say have non-zero weight in a dimension. To compute the score, we start by using the variational embedding for each item i – location µij and scale σij parameters, to compute the posterior probability that the weight will be truncated to zero according to the left tail of a Gaussian distribution with that location and scale (as described in §2.3.1). This gives us a posterior probability of the weight taking the value zero for each item within a dimension (Graves, 2011). To calculate the overall importance of a dimension, we estimate the number of items that plausibly have non-zero weights given a user-specified False Discovery Rate target (FDR) (Benjamini & Hochberg, 1995). FDR provides a method for inferring the number of hypotheses which are non-null, based on an array of p-values, with statistical guarantees on the expected proportion of false rejections. We define dimension importance as the number of rejections given by the BH(q) algorithm, with the FDR tolerance q specified by the user, using the posterior zero-probabilities as the p-values.
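The sketch below illustrates this importance computation (illustrative only); it assumes the variational parameters are given as NumPy arrays and implements the BH(q) rejection count directly.

import numpy as np
from scipy.stats import norm

def dimension_importance(mu, sigma, q=0.05):
    """Importance score of each dimension: the number of items whose weight is
    confidently non-zero, estimated with the Benjamini-Hochberg procedure at
    FDR tolerance q. mu and sigma are (N, p) arrays of variational parameters."""
    # Posterior probability that each weight is truncated to zero (left tail at 0).
    p_zero = norm.cdf(0.0, loc=mu, scale=sigma)          # shape (N, p)
    n_items = mu.shape[0]
    scores = []
    for f in range(mu.shape[1]):
        pvals = np.sort(p_zero[:, f])
        thresholds = q * (np.arange(1, n_items + 1) / n_items)
        below = np.nonzero(pvals <= thresholds)[0]
        n_rejections = below[-1] + 1 if below.size > 0 else 0   # BH(q) rejection count
        scores.append(n_rejections)
    return np.array(scores)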
Cluster dimensions by importance using a Gaussian mixture model Given the importance scores in the previous step, a reasonable approach would be to sort dimensions by importance, and then use the left-out half of the validation set, or “pruning set”, to determine the k most important dimensions to include. However, we found that this approach led to high variance, due to the existence of groups of dimensions with very similar importance scores. We hypothesized that these groups of dimension corresponded to different feature types, as observed in Zheng et al. (2019). As McRae et al. (2005) discusses, these features can be grouped into different feature types, such as categorical, functional, encyclopedic, visual-perceptual and non-visual-perceptual. Therefore, the second step in our pruning method creates clusters of dimensions that have similar importance. We fit GMMs with varied number of components k (e.g. k ∈ {1, 2, ..., 6}) to the importance scores for each dimension, and find the number of components/modes that show the lowest Bayesian Information Criterion (BIC). Here, we limit the number of possible clusters to 6, as a conservative estimate on the number of distinct feature types (e.g. categorical, functional, perceptual) with possibly differing sparsity ranges–i.e., categorical features may apply to a large subset of items, while specific visual features might apply only to a handful. We cluster dimensions into k modes.
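For illustration, the clustering step can be realized with scikit-learn's Gaussian mixture model and BIC-based selection of the number of components (a sketch, not the reference implementation):

import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_dimensions(importance_scores, max_clusters=6, seed=0):
    """Cluster dimensions by importance score with a one-dimensional Gaussian
    mixture, selecting the number of components by the lowest BIC."""
    scores = np.asarray(importance_scores, dtype=float).reshape(-1, 1)
    best_labels, best_bic = None, np.inf
    for k in range(1, max_clusters + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(scores)
        bic = gmm.bic(scores)
        if bic < best_bic:
            best_bic, best_labels = bic, gmm.predict(scores)
    return best_labels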
Choosing the subset of dimension clusters that best explains the validation set We find the best non-empty subset of clusters of dimensions, in terms of cross-entropy on the validation “pruning” set, and prune all clusters of dimensions outside of this subset. (If a given feature is uninformative, then features with similar importance scores are likely to be similarly uninformative.)
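Since the number of clusters is at most six, the subsets can be enumerated exhaustively; a hypothetical sketch (the callback `eval_xent` stands in for evaluating a model restricted to a set of dimensions on the pruning set):

```python
from itertools import combinations

def best_cluster_subset(labels, eval_xent):
    """labels: per-dimension cluster id; eval_xent(dims) returns the
    cross-entropy on the pruning set using only the dimensions in `dims`."""
    cluster_ids = sorted(set(labels))
    best_dims, best_loss = None, float("inf")
    for r in range(1, len(cluster_ids) + 1):
        for subset in combinations(cluster_ids, r):
            dims = [d for d, c in enumerate(labels) if c in subset]
            loss = eval_xent(dims)
            if loss < best_loss:
                best_dims, best_loss = dims, loss
    return best_dims
```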
3 EXPERIMENTS
3.1 DATA
We used two datasets from Zheng et al. (2019), selected after quality control. The first contained judgments on 1,450,119 randomly selected triplets. We used a random subsample of 90% of these triplets for the training set, and the remaining 10% for the validation set (tuning and pruning). The second was an independent test set of 19,968 triplets with 25 repeats for each of 1,000 randomly selected triplets; none of these were present in the training set. Having this many repeats allows us to be confident of the response probability for each triplet. Furthermore, it allows us to establish a model-free estimate of the Bayes accuracy, the best possible accuracy achievable by any model.
3.2 EXPERIMENTAL SETUP
Training We implemented both SPoSE and VICE in PyTorch (Paszke et al., 2019) using Adam (Kingma & Ba, 2015) with α = 0.001. To guarantee a fair comparison between VICE and SPoSE, each model configuration was trained using 20 different random seeds, for a fixed number of 1000 epochs. Each model was initialized with a weight matrix, W ∈ RD×N , where D was set to 100 and N refers to the number of unique items in the dataset (i.e., 1854). In preliminary experiments, we observed that, after pruning, no model was left with a latent space of more than 100 dimensions, which is why we did not consider models with higher initial dimensionality.
Other details Please see section §A.1 for weight initialization and hyperparameter tuning.
3.3 PREDICTION EXPERIMENTS
3.3.1 EVALUATION MEASURES
Prediction accuracy Since human triplet choices are represented as three-dimensional one-hot vectors, where 1 represents the odd-one-out choice for a particular triplet, it is simple to compare them with model choices. The choice of a model is computed as argmax p(ŷ|θ), where p(ŷ|θ) refers to a model’s softmax probability distribution over a triplet given the model parameters (see Equation 9). If there is a tie in the softmax output, we regard this as an incorrect choice. A model can either be correct or incorrect, and no partial credit is given, guaranteeing a conservative measure of a model’s prediction behavior. The reported prediction accuracy is the fraction of trials where the model predicted the correct odd-one-out item. We can get an estimated upper bound on the Bayes accuracy, i.e., the best possible accuracy of any model, by using the repeats in the independent test set. As the optimal model predicts the repeat majority outcome for any triplet, this accuracy ceiling – 0.673 – is the average probability of the majority outcome over the set of all triplets.
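A small sketch of this accuracy computation (illustrative; the array layout is an assumption):

```python
import numpy as np

def triplet_accuracy(probs, choices):
    """probs: (n_triplets, 3) model probabilities over pairs; choices: (n_triplets,)
    index of the human-chosen pair. Ties in the softmax count as incorrect."""
    correct = 0
    for p, y in zip(probs, choices):
        top = np.flatnonzero(p == p.max())
        correct += int(len(top) == 1 and top[0] == y)
    return correct / len(choices)
```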
Predicting Human Uncertainty The triplet task is subjective: there is no correct answer to any given triplet, and often subjects give all three. The independent test set gives us the probability distribution over answers for each triplet, graded information about the relative similarities of the three item pairs. Predicting this distribution precisely is a more stringent test of model quality than prediction accuracy, and of even more relevance in cognitive science applications. We quantify this through the KL divergence between the softmax probabilities of a model (see Section 2.3.3) and the empirical human probability distributions, obtained by computing discrete probability distributions for triplet repeats on the independent test set (see Section 3.1). We use the KL divergence because it is a commonly used measure for assessing the similarity between two probability distributions.
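For concreteness, the per-triplet KL divergence could be computed as below (a sketch; the clipping constant is an assumption to avoid log(0)):

```python
import numpy as np

def mean_kl(human_probs, model_probs, eps=1e-12):
    """Both arrays have shape (n_triplets, 3) and rows sum to 1."""
    p = np.clip(human_probs, eps, 1.0)
    q = np.clip(model_probs, eps, 1.0)
    return np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1))
```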
3.3.2 EXPERIMENT RESULTS
Full dataset We compared pruned median models of VICE and SPoSE, where the median model was identified by the median cross-entropy error on the tuning set. For VICE, we set the number of MC samples to m = 50 (see Equation 9) . On the independent test set, VICE and SPoSE achieved a similarly high prediction accuracy of 0.6380 and 0.6378, respectively, versus a chance-level accuracy of 0.333(3). Likewise, VICE and SPoSE achieved similar KL-divergences of 0.103 and 0.105, respectively, versus a chance-level KL-divergence of 0.366. The differences between the median model predictions across individual triplets in the test set were not statistically significant, under the null hypothesis according to a two-sided paired t-test, for either accuracy or KL-divergence. Hence, VICE and SPoSE predicted triplets equally well when they were both trained on the full dataset. This is not surprising, as Bayesian methods based on MC sampling become more like deterministic Maximum Likelihood Estimation (MLE) the more training data is available, per the Bernstein-von Mises theorem (Doob, 1949). As a result, the effects of the prior are most prominent when models are trained on datasets where ntrain is not particularly large, as we will see in the next section.
Efficiency on smaller datasets Performance on small datasets is especially important in cognitive science, where behavioral experiments often have low sample sizes (e.g. tens to hundreds of volunteer
in-lab subjects ) or can be costly to scale in AMT. To test whether VICE can model the data better than SPoSE when data are scarce, we created non-overlapping subsets of the training dataset. Specifically, we did this for subsets with sizes equal to 5%, 10%, 20%, and 50% of the dataset, yielding 20, 10, 5, and 2 subsets, respectively. Validation and test sets were unchanged. In Figure 3, we show the average prediction accuracy and KL divergence across random seeds, for models trained on every dataset size, including the full training set. Averages were computed across both random seeds and training subsets, where the average over random seeds was identified first to get a per-subset estimate; performance across subsets was then used to compute the confidence intervals (CIs). Figure 3 shows that the difference in prediction accuracy and KL divergence between VICE and SPoSE became more pronounced the fewer triplet samples were used for training. The difference was striking for the 5% and 10% data subsets, with ≈ 67, 500 and ≈ 135, 000 triplets, respectively. In the former, SPoSE predicted at chance-level; in the latter, it showed a large variation between random seeds and data splits, as can be seen in the 95% CIs in Figure 3. In both low-resource scenarios, VICE showed a compellingly small variation in the two performance metrics across random seeds, and predicted much better than chance-level. The differences between VICE and SPoSE for the 5% and 10% subsample scenarios were statistically significant according to a two-sided paired t-test (p < 0.001), comparing individual triplet predictions between the pruned median models.
3.4 REPRODUCIBILITY EXPERIMENTS
Beyond predictive performance, a key criterion for learning concept representations is reproducibility, i.e., learning similar representations when using different random initializations on the same training data. To assess this, we compare 20 differently initialized VICE and SPoSE models.
The first aspect of reproducibility is finding similar numbers of dimensions, quantified as the standard deviation of that number across all 20 models. As shown in Table 1, VICE identified fewer dimensions than SPoSE, and this had a lower standard deviation across models (1.64 vs. 2.30). The difference in standard deviation is, however, not statistically significant according to a two-sided F-test (F = 0.516, df = 19, p = 0.918). The second aspect is the extent to which the dimensions identified are similar across initializations. Since the embedding is not an ordered set of dimensions, we will deem a dimension learned in one VICE model reproducible if it is present in another independently trained instance of VICE, perhaps in a different column or with some small perturbation to the weights. To evaluate the number of highly reproducible dimensions, we match each embedding dimension of a given initialization (after the pruning step) with the most similar embedding dimension (in terms of Pearson correlation) of a second initialization. Given 20 differently initialized models, we quantify reproducibility of a dimension as the average Pearson correlation between one dimension and its best match across the 19 remaining models. In Table 1, we report the average number of dimensions with a Pearson correlation > 0.8 across the 20 initializations. Selected dimensions are similarly reproducible between VICE and SPoSE (see Table 1). Finally, we investigated whether our uncertainty-based pruning procedure selects reproducible dimensions. We compared the average reproducibility of selected dimensions with the average reproducibility of pruned dimensions, which are discarded by the procedure. The average reproducibility of the best subset, i.e., % of dimensions with Pearson’s r > 0.8, is 79.00%, whereas that of the dimensions that were pruned was 0.00%, i.e. our procedure is highly accurate at identifying those dimensions that reproduce reliably.
3.5 INTERPRETABILITY
One of the benefits of SPoSE is the interpretability of the dimensions of its concept embeddings, induced by sparsity and positivity constraints, and empirically tested through experiments in Hebart et al. (2020). VICE constrains the embeddings to be sparse through the spike-and-slab prior, and imposes a non-negativity constraint through applying a rectifier on its latent representations. This
means that, just as in SPoSE, it is easy to sort objects within a VICE dimension by their absolute weight values in descending order, to obtain human judgments of what a dimension represents. In Figure 4, we show the top six objects for four example VICE dimensions of the pruned median model, representing categorical, functional, structural, and visual information. In Appendix A.2 we show the top ten objects for every VICE dimension. Redoing the SPoSE dimension labeling experiments is beyond the scope of this paper, but we provide dimension labels from a small survey in the Appendix.
4 DISCUSSION
In this paper, we introduced VICE, a novel approach for embedding concepts in a non-negative, sparse space, and using those embeddings to predict human behavior in an odd-one-out task. We solve the same problem as an existing method, SPoSE, but using variational inference and a spikeand-slab prior, which is more appropriate for this modeling situation. VICE yields uncertainty information about the solution, enabling a statistical procedure to automatically determine the number of embedding dimensions, as opposed to the data-dependent heuristics that were used in SPoSE. VICE performs as well as SPoSE in terms of accurately predicting human decisions in an odd-one-out task and modeling the probability distribution over those decisions, but using fewer dimensions. However, this is the case only for the large dataset that was originally used to develop SPoSE. VICE performs substantially better than SPoSE on smaller datasets. Moreover, VICE is more stable than SPoSE, as the dimensionality of the embeddings varies less across random initializations. We believe these improvements stem from the combination of the prior and the dimension selection procedure.
We developed VICE with the goal of making it easier to build interpretable embedding spaces to model any type of item, by using an odd-one-out task. We require fewer participants than SPoSE due to higher data efficiency, which makes behavioral experiments more feasible. Our procedure for determining the number of dimensions aims at removing subjectivity, as the scientific motivation often is to discover the minimum number of latent factors required to describe observations. Note that while the user does need to choose the FDR tolerance q, this can be done before looking at any data, based on their degree of conservatism with regards to controlling false discoveries. Hence, it does not cause the problems associated with data-dependent tuning parameters such as regularization parameters or absolute thresholds. Finally, the spike-and-slab prior we use in VICE has intrinsically meaningful hyperparameters, which makes it easier for researchers to specify competing hypotheses about the representation space being studied.
A APPENDIX
A.1 EXPERIMENTAL SETUP
Weight initialization We initialized the weights of the encoder for the means of the distributions, Wµ, following a Kaiming He initialization (He et al., 2015). The weights of the encoder for the logarithm of the scales of the distributions, Wlog (σ), were initialized with = − 1sW0µ , such that W 0log(σ) = 1. This initialization allowed us to avoid bias terms within the linear transformations of the encoders, and additionally ensured, through computing σ = exp (log (σ)), that σ is a small continuous number in R+ at the beginning of training.
Hyperparameter grid To find the optimal VICE hyperparameter combination, we performed a grid search over π, σspike, σslab (see Equation 8). The final grid was the Cartesian product of these parameter sets: π = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, σspike = {0.125, 0.25, 0.5, 1.0, 2.0}, σslab = {0.25, 0.5, 1.0, 2.0, 4.0, 8.0}, subject to the constraint σspike << σslab, where combinations that did not satisfy the constraint were discarded. We observed that setting σslab > 8.0 led to numerical overflow issues during optimization, which is why 2^3 = 8 was the upper bound for σslab. For SPoSE, we use the same range as Zheng et al. (2019), with a finer grid of 64 values.
Optimal hyperparameters We found the optimal VICE hyperparameter combination through a two step procedure, for which the validation set was split into equally sized pruning and tuning sets. First, among the final 180 combinations (see Cartesian product above), we applied our pruning method (see Section 2.3.4) to each model and kept the subsets of dimensions that led to the lowest cross-entropy error on one half of the validation set, which we call pruning set. Second, we evaluated each model with pruned parameters on the other half of the validation set, which we refer to as tuning set. We defined the optimal hyperparameter combination as that with the lowest average cross-entropy error on the tuning set across twenty different random initializations. The optimal hyperparameter combinations for VICE were σspike = 0.125, σslab = 1.0, π = 0.4; for SPoSE, it was λ = 5.75.
A.2 OBJECT DIMENSIONS
Here, we display the top 10 objects for each of the 47 VICE dimensions, according to their absolute weight value. As we have done for every other experiment, we used the pruned median model to guarantee the extraction of a representative sample of object dimensions without being over-optimistic with respect to their interpretability (see Section 3.3.1 for how the median model was identified). For each dimension we collected human responses in a small survey with a sample of convenience (n = 9). The labels that are shown below each object dimension represent the most common answer across human responses, when they were asked to name the respective dimension. More than one label is displayed, whenever there was a tie in the most common response. Labels were edited for coherence across similar answers (e.g. "metallic" and "made of metal" were deemed to be the same answer).
While the illustrations for each dimension display the top 10 items, for our survey, in order to avoid biasing our results, we actually show a continuum of items selected from bins centered around the 25, 50, and 75 percentiles in addition to top items, and a random set of items with close to zero weight in the dimension.
METALLIC FOOD
PLANTS ANIMAL
HOME CLOTHES
OUTDOOR WOOD; MADE OF WOOD
POINTY; ELONGATED BODY PARTS
VEHICLE; TRANSPORTATION EXQUISITE; TRADITIONAL
ELECTRONIC COLORFUL
ROUND; CIRCULAR MANY OBJECTS; COLLECTION
STATIONERY; OFFICE SPORTS; GAMES
DECORATIVE; BEAUTIFUL CONTAINER; DRINKS
MARINE; WATER RED
BATHROOM; HYGIENE WAR; WEAPON
BLACK DUST; GRAINY TEXTURE
SPHERICAL; ROUND GREEN
WHITE SKY; FLYING
FLOOR; PATTERN LINES; GRATING PATTERN
MUSIC; SOUND SKY; TALL
INSECTS; PESTS N/A
FIRE; SMOKE FOOT; FOOTWEAR
CHAIN; ROPE; STRAND YELLOW; ORANGE
EYEWEAR; EYES; FACE SPIKY; HAIRY
CYLINDRICAL; ELONGATED STRINGY; FIBROUS
BABY; CHILDREN MEDICAL; HEALTHCARE
ICE; COLD | 1. What is the main contribution of the paper regarding image embeddings?
2. What are the strengths of the proposed method compared to previous works like SpOSE?
3. How does the reviewer assess the significance and impact of the contributions made by the paper?
4. What are the concerns or limitations of the paper, particularly regarding its motivation and applications?
5. Are there any questions or suggestions regarding the experimental design, analysis, or comparisons with other methods? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a technique for learning image embeddings from similarity data provided as odd-one-out judgments over triplets of images (i.e., ball is more similar to apple than car). The authors build on an earlier technique called SPoSE that learns sparse, non-negative embeddings for images by maximizing the probability of choosing the right pair (where this similarity is calculated using the dot-product of image embeddings) with an L1 penalty on embeddings. In this paper, the authors argue that 1) a spike and slab prior is more suited for this setting and 2) a more principled approach to choosing the number of dimensions in learned embeddings is possible.
They present an improved version of SpoSE (called VICE) that uses variational inference to learn embeddings with uncertainties associated with them and place a spike-and-slab prior on the embeddings to encourage sparsity. They then present a way to prune the set of learned dimensions. This calculates the importance of each dimension first (using the learned uncertainties and false discovery rate), then clusters these, and finds the subset of clusters that give the best validation performance.
They argue that their technique is more principled than SpOSE and show that it performs similarly to it in terms of prediction accuracy. However, in the small-data regime, their model outperforms SpOSE.
Review
Overall, I think it is an interesting paper. I like the spike-and-slab prior and the variational treatment. I agree with the authors that these are better suited to this setting.
My main concern with the paper is that the motivation is not really clear. What would these embeddings be used for after learning? It looks like the authors have some applications in cognitive science in mind but I wasn't sure what these are exactly. How will these embeddings be useful in cognitive science experiments? I think a case study of where these embeddings were used would add significantly to the paper. Also, it would be nice to motivate this technique for the larger ICLR audience as well (not only cognitive scientists).
My other concern with the paper is that it's not very clear if the contributions are significant enough.
VICE performs similarly to SpOSE in general, except that it outperforms SpOSE on smaller datasets. However, it wasn't clear to me how important this is. Mainly because the smallest dataset in the experiments, which has 67,500 triplets, doesn't seem very small to me from a psychology experiment perspective. (Assuming an experiment with 100 subjects, which is pretty large, perhaps you could collect 10,000 triplets). It would be very valuable to see the performance of VICE on even smaller datasets.
I agree the pruning strategy is more principled than SpOSE's but it still seemed unusual to me in a couple of respects. First, it wasn't clear to me why you would cluster the dimension importance scores (and assume a max of 6 dimensions). I think more details on this in the paper would be useful. For example, are you clustering a single score per dimension, or large vectors of importance scores (one for each sample) per dimension? And why would you prune by cluster? If a cluster has features that are similar to each other, wouldn't you want to keep at least some from each cluster and prune the rest (because they are redundant)? And finally, how about just sorting the dimensions by importance and taking the top N? How well does this work compared to the clustering strategy?
Finally, I'd also have liked to see comparisons to other techniques. I understand that the technique is built on SpOSE but there are many techniques that would be applicable to this problem. One class of methods that are especially suited to this setting are non-parametric Bayesian models (like Indian Buffet Process)[1] that can determine the optimal number of dimensions (without any explicit pruning strategy). One can also use a large multi-modal model like CLIP (OpenAI) to get embeddings for images and it might be interesting to see how well these do. Or if the authors think these models are not applicable, then a brief discussion of why they aren't would be valuable I think.
Below are some other minor points
In 3.5, the authors mention that VICE embeddings are passed through a rectifier to make them non-negative. This should be mentioned earlier in the paper. Also, if the embeddings are non-negative, why not use a prior that also makes this assumption (rather than a gaussian)?
In 3.2, they mention variances are initialized to be small. Why? Generally, people initialize these to be close to 1.
A minor organization suggestion. At the end of page 6, they mention separate pruning and tuning validation sets. It would be nice to mention this earlier (when talking about pruning strategy for example).
[1] Navarro DJ, Griffiths TL (2008) Latent features in similarity judgments: A nonparametric Bayesian approach. Neural Computation. |
ICLR | Title
VICE: Variational Inference for Concept Embeddings
Abstract
In this paper we introduce Variational Inference for Concept Embeddings (VICE), a novel method for learning object concept embeddings from human behavior in an odd-one-out task. We use variational inference to obtain a sparse, non-negative solution, with uncertainty information about each embedding value. We leverage this information in a statistical procedure for selecting the dimensionality of the model, based on hypothesis-testing over a validation set. VICE performs as well or better than previous methods on a variety of criteria: accuracy of predicting human behavior in an odd-one-out task, calibration to (empirical) human choice probabilities, reproducibility of object representations across different random initializations, and superior performance on small datasets. The latter is particularly important in cognitive science, where data collection is expensive. Finally, VICE yields highly interpretable object representations, allowing humans to describe the characteristics being represented by each latent dimension.
1 INTRODUCTION AND RELATED WORK
Human knowledge about object concepts encompasses many types of information, ranging from function to visual appearance, as well as encyclopedic facts or taxonomic characteristics. This knowledge supports the identification of objects, inferences about what interactions they support, or what the effects of such interactions in the environment will be. Key questions for cognitive scientists modelling human performance in experiments are 1) which of this information is accessible to participants and 2) how is it used across different tasks. Several studies (McRae et al., 2005; Devereux et al., 2013; Buchanan et al., 2019; Hovhannisyan et al., 2021) have asked subjects to list properties for hundreds to thousands of objects, yielding thousands of answers about the types of information above. Properties exist at many levels, ranging from categorization (e.g. "is an animal") to very specific facts (e.g. "is eaten in France"). Objects are implicitly represented as a vector of binary properties. This approach is agnostic to downstream prediction tasks, but does not provide an indication of which properties are more important – other than frequency of listing – and does not allow for graded property values. An alternative approach is for researchers to postulate dimensions of interest, and then ask human subjects to place each object in each dimension. An example is Binder et al. (2016), who collected ratings for hundreds of objects, as well as verbs and adjectives, in 65 dimensions reflecting sensory, motor, spatial, temporal, affective, social, and cognitive experiences.
The overall problem is then one of discovering a representation for objects that is not biased by a particular task, and is interpretable without requiring researchers to postulate the types of information represented. Several researchers have tried to develop interpretable concept representation spaces from text corpora, via word embeddings with positivity and sparsity constraints (Murphy et al., 2012), topic model representations of Wikipedia articles about objects (Pereira et al., 2013), transformations of word embeddings into sparse, positive spaces (Subramanian et al., 2018; Panigrahi et al., 2019) or predictions of properties (Devereux et al., 2013) or dimensions (Utsumi, 2020), or text corpora combined with imaging data (Fyshe et al., 2014) or with object images (Derby et al., 2018). Finally, Derby et al. (2019) introduced a neural network mapping the sparse feature space of a semantic property norm to the dense space of a word embedding, identifying informative combinations of properties or allowing ranking of candidate properties for arbitrary words.
Recently, Zheng et al. (2019) and Hebart et al. (2020) introduced SPoSE, a model of the mental representations of 1,854 objects in a 49-dimensional space. The model was derived from a dataset of
1.5M Amazon Mechanical Turk (AMT) judgments of object similarity, where subjects were asked which of a random triplet of objects was the odd one out. The model embedded each object as a vector in a space where each dimension was constrained to be sparse and positive. Triplet judgments were predicted as a function of the similarity between embedding vectors of the three objects considered. The authors showed that these dimensions were predictable as a combination of elementary properties in the Devereux et al. (2013) norm, which often co-occur across many objects. Hebart et al. (2020) further showed that 1) human subjects could coherently label what the dimensions were “about”, ranging from categorical (e.g. is animate, food, drink, building) to functional (e.g. container, tool) or structural (e.g. made of metal or wood, has inner structure). Subjects could also predict what dimension values new objects would have, based on knowing the dimension value for a few other objects. These results suggest that SPoSE captures core object knowledge that subjects use. Navarro & Griffiths (2008) introduced a related method for learning semantic concept embeddings from similarity data, which infers the number of latent dimensions using the Indian Buffet Process (IBP, Griffiths & Ghahramani (2011)), but their approach is not directly applicable to our setting due to reliance on continuous-valued similarity ratings instead of forced-choice behavior. Furthermore, it is known to be challenging to scale the IBP to the number of features and observations considered in our work (Ghahramani, 2013). Roads & Love (2021) introduced a related method for deriving an object embedding from behavior in a 8-rank-2 task. Their method aimed to predict behavior from the embeddings, using active sampling to query subjects with the most informative stimuli. The method was not meant to produce interpretable dimensions, but rather construct object similarity matrix as efficiently as possible.
There is growing interest by cognitive scientists in using SPoSE, as it makes it possible to discover an item representation for any kind of item amenable to an odd-one-out comparison in a triplet task. Furthermore, the combination of positivity and sparsity constraints in each dimension of the representation leads to interpretability by human subjects: no item is represented by every dimension, and most dimensions are present for only a few items. That item representation can then be used within other behavioral prediction models, to make predictions about neuroimaging data, etc.
For this potential to be realized, however, we believe a number of issues with SPoSE should be addressed. The first is the use of an l1 sparsity penalty to promote interpretability of dimensions. l1 achieves sparsity at the cost of unnecessarily shrinking larger values (Belloni & Chernozhukov, 2013). In SPoSE, 6-11 dominant dimensions for an object account for most of the prediction performance; the cost of removing irrelevant dimensions is to potentially make dominant dimensions smaller than they should be, and affect performance. Second, the l1 penalty is analogous to having a Laplace prior over those values. If we consider the distribution of values across objects for the two most important SPoSE dimensions, in Figure 1, we can see that they have a bimodal distribution, with a spike around 0 and a much smaller, wide slab of probability for non-zero values, which is not Laplace. Overcoming this "wrong" prior requires more data than strictly necessary to learn the representation. SPoSE was developed with a dataset that was orders of magnitude larger than what a typical experiment might collect, but it was never tested on smaller datasets. Finally, SPoSE uses a heuristic, subjective criterion for determining how many dimensions the solution should have.
In this paper we introduce VICE, an approach for variational inference of object concept embeddings in a space with interpretable sparse, positive dimensions, which addresses the SPoSE issues identified above. Specifically, we encourage sparsity and small weights by using a spike-and-slab prior. This is more appropriate than a Laplace prior, because importance – the value an object takes in a dimension – is different from relevance – whether the dimension matters for that object – and they can be
controlled separately with a spike-and-slab prior. The prior hyperparameters are meant to be intuitive to a user, and to make it easier to specify hypotheses about dimensional structure. We use variational Bayes both because it is a Bayesian approach, and also because it assumes a unimodal posterior for the loading of each object in each dimension. It also allows a more principled procedure for determining how many dimensions the model should have, by taking into account uncertainty about their values. We compare our model with SPoSE over different subsets of the dataset used to develop it, and verify that it performs as well or better by various criteria: prediction of behavior, calibration of the prediction of decision probabilities, and reproducibility of solutions across seeds. Importantly, it has significantly better performance on smaller datasets (5− 10% of the original SPoSE dataset). Our implementation of VICE is available on GitHub1, and will be de-anonymized upon acceptance.
2 METHODS
2.1 ODD-ONE-OUT TASK
The odd-one-out task is motivated by the problem of discovering object embeddings based on similarity judgments involving a set of m different object concepts, which we will denote by c1, . . . , cm (e.g. c1 = ‘aardvark’, . . . , c1854 = ‘zucchini’). These similarity judgments are collected from human participants, who are given queries consisting of a ‘triplet’ of three concepts {ci1, ci2, ci3}, for instance, {c268, c609, c1581} = {‘suit’, ‘flamingo’, ‘car’}. Participants are asked to consider the three pairs within the triplet {(ci1, ci2), (ci1, ci3), (ci2, ci3)}, and to decide which item has the smallest similarity to the other two (the "odd-one-out"). This is equivalent to choosing the pair with the greatest similarity. Let (y1, y2) denote the indices in this pair, e.g. for ‘suit’ and ‘flamingo’ they would be (y1, y2) = (268, 609). A dataset D is a set of N pairs of concept triplets and one-hot vectors that encode the indices of the two most similar concepts, i.e. ({ci1, ci2, ci3}, (yi1, yi2)).
2.2 SPOSE
Sparse Positive object Similarity Embedding (SPoSE) (Zheng et al., 2019) is an approach for finding interpretable item dimensions from an odd-one-out task. It does so by finding an embedding vector xi = (xi1, . . . , xip) for every item ci. The similarity Sij of two items (e.g. ci and cj) is computed as the dot product of the corresponding embeddings (i.e. xi and xj), $S_{ij} = \langle x_i, x_j \rangle$. From these similarities, the probability of choosing (yi1, yi2) as the most similar pair of items given the item triplet {ci1, ci2, ci3} and given embedding vectors {xi1, xi2, xi3} is computed as:
$$p((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}, \{x_{i_1}, x_{i_2}, x_{i_3}\}) = \frac{\exp(S_{y_{i_1}, y_{i_2}})}{\exp(S_{i_1, i_2}) + \exp(S_{i_1, i_3}) + \exp(S_{i_2, i_3})} \quad (1)$$
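In code, Equation 1 is a softmax over the three pairwise dot products; a minimal PyTorch sketch (illustrative, not the released implementation):

```python
import torch

def triplet_probs(x1, x2, x3):
    """x1, x2, x3: (batch, p) non-negative embeddings of the three items in each triplet.
    Returns (batch, 3) probabilities over the pairs (1,2), (1,3), (2,3)."""
    s12 = (x1 * x2).sum(dim=1)
    s13 = (x1 * x3).sum(dim=1)
    s23 = (x2 * x3).sum(dim=1)
    return torch.softmax(torch.stack([s12, s13, s23], dim=1), dim=1)
```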
SPoSE uses maximum a posteriori (MAP) estimation to find the most likely embedding given the training data and a prior:
$$\underset{X}{\arg\max}\; \log p(X \mid D_{\text{train}}) = \underset{X}{\arg\max}\; \log p(D_{\text{train}} \mid X) + \log p(X), \quad (2)$$
1Link to anonymous GitHub repository: https://anonymous.4open.science/r/VICE-59F0
where X is a matrix containing the embedding vectors for all of the items and p(X) is a prior for the embeddings, and p(Dtrain,j |X) is defined in (7). To induce sparsity in the embeddings, SPoSE uses a mean-field Laplace prior, leading to this objective:
$$\underset{X}{\arg\max}\; \sum_{j=1}^{n_{\text{train}}} \log p(D_{\text{train},j} \mid X) \;-\; \lambda \sum_{i=1}^{m} \|x_i\|_1 \quad (3)$$
Here, $\|\cdot\|_1$ is the $\ell_1$ norm, so $\|x\|_1 = \sum_{f=1}^{p} |x_f|$, and $x_f \geq 0$ for $f = 1, \ldots, p$. The regularization parameter, λ, is selected out of a grid of candidate values by choosing the one that achieves the lowest (average) cross-entropy on the validation set (across twenty random seeds). The final dimensionality of the embedding, p, is determined heuristically from the data. If p is set to be larger than the number of dimensions supported by the data, the SPoSE algorithm will shrink entire dimensions towards zero by removing weights with a magnitude less than a given absolute threshold. While a threshold of 0.1 is suggested (Zheng et al., 2019), no justification is given for that particular value, which is problematic given that the number of dimensions removed is quite sensitive to that choice.
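Reusing the triplet_probs sketch above, the SPoSE objective in Equation 3 can be written roughly as follows (a sketch; the exact scaling of the penalty relative to the batch is an assumption here):

```python
import torch
import torch.nn.functional as F

def spose_loss(X, idx, targets, lam):
    """X: (n_items, p) embedding matrix clamped to be non-negative;
    idx: (b, 3) item indices per triplet; targets: (b,) index of the chosen pair."""
    probs = triplet_probs(X[idx[:, 0]], X[idx[:, 1]], X[idx[:, 2]])
    nll = F.nll_loss(torch.log(probs), targets)     # negative log-likelihood of choices
    return nll + lam * X.abs().sum() / X.shape[0]   # l1 penalty on the embeddings
```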
2.3 VICE
2.3.1 VARIATIONAL BAYESIAN INFERENCE
Given the goal of better approximating p(X|Dtrain), we use variational inference. We approximate p(X|Dtrain) with a variational distribution, qθ(X), where q is our chosen family of distributions, and θ is a parameter that is learned in order to optimize the Kullback–Leibler (KL) divergence to the true posterior, p(X|Dtrain). In variational inference, the KL divergence objective function is:
$$\underset{\theta}{\arg\min}\;\; \mathbb{E}_{q_\theta(X)}\!\left[ \frac{1}{n_{\text{train}}}\bigl(\log q_\theta(X) - \log p(X)\bigr) \;-\; \frac{1}{n_{\text{train}}}\sum_{i=1}^{n_{\text{train}}} \log p(D_{\text{train},i} \mid X) \right] \quad (4)$$
In order to use variational inference, a parametric variational distribution must be chosen. For VICE, we use a Gaussian variational distribution with a diagonal covariance matrix qθ(X) = N (µ, diag(σ2)), where the learnable parameters θ are µ and σ. This means that each embedding dimension has a mean, the most likely value for that dimension, and a standard deviation, the propensity of the embedding value to be close to the mean.
Similarly to Blundell et al. (2015), we use a Monte Carlo (MC) approximation of the above objective function by sampling a limited number of Xs from qµ,σ(X) during training. We generate X by means of the reparameterization trick (Kingma & Welling, 2013), $X_{\theta,\epsilon} = \mu + \sigma \cdot \epsilon$, where $\epsilon$ is an N × p matrix of standard normal variates, leading to the objective:
$$\underset{\theta}{\arg\min}\;\; \frac{1}{m}\sum_{j=1}^{m}\left[ \frac{1}{n_{\text{train}}}\bigl(\log q_\theta(X_{\theta,\epsilon_j}) - \log p(X_{\theta,\epsilon_j})\bigr) \;-\; \frac{1}{n_{\text{train}}}\sum_{i=1}^{n_{\text{train}}} \log p\bigl(D_{\text{train},i} \mid [X_{\theta,\epsilon_j}]_{+}\bigr) \right] \quad (5)$$
where $\epsilon_j \in \mathbb{R}^{N \times p}$ is entrywise $\mathcal{N}(0, 1)$ and where $[\cdot]_+$ is the ReLU function. As commonly done in the dropout and Bayesian neural network literature (Srivastava et al., 2014; Blundell et al., 2015; Gal & Ghahramani, 2016; McClure & Kriegeskorte, 2016), we set m to 1 for computational efficiency.
In Equation 5, the expected log-likelihood of the entire training data is computed. However, using the entire training data set to compute the gradient update often works poorly for non-convex objective functions. This is due to the expensive computational cost of each update and to the convergence to poorly generalizing solutions (Smith et al., 2020). As a result, we stochastically approximate (Robbins & Monro, 1951) the training log-likelihood using random subsets (i.e. mini-batches) of the training dataset, with each mini-batch consisting of b triplets. This leads to the final objective
$$\underset{\theta}{\arg\min}\;\; \frac{1}{n_{\text{train}}}\bigl(\log q_\theta(X_{\theta,\epsilon}) - \log p(X_{\theta,\epsilon})\bigr) \;-\; \frac{1}{b}\sum_{i=1}^{b} \log p\bigl(D_{\text{train},i} \mid [X_{\theta,\epsilon}]_{+}\bigr) \quad (6)$$
recalling that
$$p(D_{\text{train},i} \mid X) = \frac{\exp(x_{y_{1,i}}^{\top} x_{y_{2,i}})}{\exp(x_{i_{1,i}}^{\top} x_{i_{2,i}}) + \exp(x_{i_{1,i}}^{\top} x_{i_{3,i}}) + \exp(x_{i_{2,i}}^{\top} x_{i_{3,i}})}. \quad (7)$$
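Putting Equations 5–7 together, one VICE training step can be sketched as follows (PyTorch; reuses the triplet_probs sketch from Section 2.2; the function signature is illustrative rather than the authors' code):

```python
import torch
import torch.nn.functional as F

def vice_step(mu, log_sigma, idx, targets, log_prior, n_train):
    """mu, log_sigma: (n_items, p) variational parameters; idx: (b, 3) item indices;
    targets: (b,) index of the chosen pair; log_prior(X) returns the log prior of a sample."""
    sigma = log_sigma.exp()
    eps = torch.randn_like(mu)
    X = mu + sigma * eps                      # reparameterization trick (m = 1 sample)
    Xp = F.relu(X)                            # non-negativity via [.]_+
    probs = triplet_probs(Xp[idx[:, 0]], Xp[idx[:, 1]], Xp[idx[:, 2]])
    nll = F.nll_loss(torch.log(probs), targets)                # (1/b) sum of -log p
    log_q = torch.distributions.Normal(mu, sigma).log_prob(X).sum()
    complexity = (log_q - log_prior(X)) / n_train              # (1/n_train)(log q - log p)
    return nll + complexity
```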
2.3.2 SPIKE-AND-SLAB PRIOR
A key feature of SPoSE is sparsity. As discussed above, SPoSE induced sparsity using a zero-mean Laplace prior. We can empirically examine whether the Laplace prior is a realistic assumption, given the distribution of values in SPoSE dimensions. As Figure 1 depicts, these histograms do not resemble a Laplace distribution. Instead, it looks like there is a "spike" of probability at zero and a much smaller, but wide, "slab" of probability for the non-zero values of a SPoSE dimension. To model this, we use a spike-and-slab Gaussian mixture prior, as introduced in Blundell et al. (2015):
$$p(X) = \prod_{i=1}^{N}\prod_{f=1}^{p}\Bigl(\pi\,\mathcal{N}(x_{if};\, 0,\, \sigma^2_{\text{spike}}) + (1-\pi)\,\mathcal{N}(x_{if};\, 0,\, \sigma^2_{\text{slab}})\Bigr) \quad (8)$$
This prior has three parameters. π is the probability that an embedding dimension will be drawn from the spike Gaussian instead of the slab Gaussian. The standard deviations σspike and σslab control the likelihood of an embedding value being set to 0 in the spike or slab distributions, respectively. xif is the embedding weight for the ith item in the f th dimension. Since spike and slab distributions are mathematically interchangeable, by convention we require that σspike << σslab. In our experiments, these are chosen with grid search on one half of the validation set, the “tuning set”. (The other half of the validation set, the “pruning set”, is used for dimensionality reduction, as we describe in §2.3.4.)
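A sketch of the corresponding log-prior, usable as the log_prior callback in the training-step sketch above (the default values of π, σspike, σslab are the optima reported in Appendix A.1, shown here only for concreteness):

```python
import math
import torch

def spike_and_slab_log_prior(X, pi=0.4, sigma_spike=0.125, sigma_slab=1.0):
    """Entrywise log of Equation 8, summed over all items and dimensions."""
    log_spike = torch.distributions.Normal(0.0, sigma_spike).log_prob(X) + math.log(pi)
    log_slab = torch.distributions.Normal(0.0, sigma_slab).log_prob(X) + math.log(1.0 - pi)
    # log(pi * N_spike + (1 - pi) * N_slab), computed stably per entry.
    return torch.logsumexp(torch.stack([log_spike, log_slab]), dim=0).sum()
```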
2.3.3 PREDICTING THE ODD-ONE-OUT USING VICE
In this work, we consider two different prediction problems given a new triplet: (1) predicting the choice and (2) predicting the distribution of the choice. In either case, we start with computing the posterior probability distribution over the three triplet choices. If predicting a choice, then we output the choice with the maximum posterior probability. (For details on how we handle ties, see §3.3.1.) If the goal is to predict the distribution, then we return the predicted distribution.
The predicted probability distribution is computed from the variational posterior, qθ(X). When making predictions, we want to compute the probability of an odd-one-out for a given triplet. We approximate this probability by using an MC estimate fromm samplesXj = Xθ, j for j = 1, . . . ,m (Graves, 2011; Blundell et al., 2015; Kingma & Welling, 2014; McClure & Kriegeskorte, 2016; Blei et al., 2017). Mathematically, this means that we compute the predicted distribution as
$$\hat{p}((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}) \approx \frac{1}{m}\sum_{j=1}^{m} p\bigl((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}, \{[x^{j}_{i_1}]_{+}, [x^{j}_{i_2}]_{+}, [x^{j}_{i_3}]_{+}\}\bigr). \quad (9)$$
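A corresponding prediction sketch (m = 50 MC samples, as used in Section 3.3.2; again illustrative and reusing triplet_probs):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_triplet(mu, log_sigma, idx, m=50):
    """Returns (b, 3) MC-averaged probabilities over the three pairs of each triplet."""
    sigma = log_sigma.exp()
    probs = 0.0
    for _ in range(m):
        X = F.relu(mu + sigma * torch.randn_like(mu))   # one rectified posterior sample
        probs = probs + triplet_probs(X[idx[:, 0]], X[idx[:, 1]], X[idx[:, 2]])
    return probs / m
```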
2.3.4 DIMENSIONALITY REDUCTION FOR VICE
For interpretability purposes, it is crucial that the model does not use more dimensions than necessary. Zheng et al. (2019) accomplished this through the sparsity-inducing penalty that causes dimensions to shrink towards zero if they do not contribute to explaining the data. (see Section 2.2). Note, however, that these uninformative weights do not totally go to zero because of noise in the gradients. Hence, these dimensions were pruned by choosing a threshold for the L1 norm of the dimension based on looking at the “elbow plot” of the sorted L1 norms of the dimensions; this approach is subjective and highly dependent on the specific dataset. In VICE, the KL penalty we use has a similar effect of causing uninformative weights to shrink. Rather than using a user-defined threshold to prune dimensions, VICE exploits the uncertainty information obtained in training the model to select a set of informative dimensions. The pruning procedure consists of three steps: (1) assigning an importance score to each dimension; (2) clustering dimensions by importance; and (3) choosing the subset of clusters that best explains the validation set. We describe each of these three steps in detail below.
Assign an importance score to each dimension Intuitively, the importance score reflects the number of objects that we can confidently say have non-zero weight in a dimension. To compute the score, we start by using the variational embedding for each item i – location µij and scale σij parameters, to compute the posterior probability that the weight will be truncated to zero according to the left tail of a Gaussian distribution with that location and scale (as described in §2.3.1). This gives us a posterior probability of the weight taking the value zero for each item within a dimension (Graves, 2011). To calculate the overall importance of a dimension, we estimate the number of items that plausibly have non-zero weights given a user-specified False Discovery Rate target (FDR) (Benjamini & Hochberg, 1995). FDR provides a method for inferring the number of hypotheses which are non-null, based on an array of p-values, with statistical guarantees on the expected proportion of false rejections. We define dimension importance as the number of rejections given by the BH(q) algorithm, with the FDR tolerance q specified by the user, using the posterior zero-probabilities as the p-values.
Cluster dimensions by importance using a Gaussian mixture model Given the importance scores in the previous step, a reasonable approach would be to sort dimensions by importance, and then use the left-out half of the validation set, or “pruning set”, to determine the k most important dimensions to include. However, we found that this approach led to high variance, due to the existence of groups of dimensions with very similar importance scores. We hypothesized that these groups of dimensions corresponded to different feature types, as observed in Zheng et al. (2019). As McRae et al. (2005) discusses, these features can be grouped into different feature types, such as categorical, functional, encyclopedic, visual-perceptual and non-visual-perceptual. Therefore, the second step in our pruning method creates clusters of dimensions that have similar importance. We fit GMMs with a varying number of components k (e.g. k ∈ {1, 2, ..., 6}) to the importance scores for each dimension, and find the number of components/modes that shows the lowest Bayesian Information Criterion (BIC). Here, we limit the number of possible clusters to 6, as a conservative estimate of the number of distinct feature types (e.g. categorical, functional, perceptual) with possibly differing sparsity ranges; i.e., categorical features may apply to a large subset of items, while specific visual features might apply only to a handful. We then cluster dimensions into the resulting k modes.
Choosing the subset of dimension clusters that best explains the validation set We find the best non-empty subset of clusters of dimensions, in terms of cross-entropy on the validation “pruning” set, and prune all clusters of dimensions outside of this subset. (If a given feature is uninformative, then features with similar importance scores are likely to be similarly uninformative.)
3 EXPERIMENTS
3.1 DATA
We used two datasets from Zheng et al. (2019), selected after quality control. The first contained judgments on 1,450,119 randomly selected triplets. We used a random subsample of 90% of these triplets for the training set, and the remaining 10% for the validation set (tuning and pruning). The second was an independent test set of 19,968 triplets with 25 repeats for each of 1,000 randomly selected triplets; none of these were present in the training set. Having this many repeats allows us to be confident of the response probability for each triplet. Furthermore, it allows us to establish a model-free estimate of the Bayes accuracy, the best possible accuracy achievable by any model.
3.2 EXPERIMENTAL SETUP
Training We implemented both SPoSE and VICE in PyTorch (Paszke et al., 2019) using Adam (Kingma & Ba, 2015) with α = 0.001. To guarantee a fair comparison between VICE and SPoSE, each model configuration was trained using 20 different random seeds, for a fixed number of 1000 epochs. Each model was initialized with a weight matrix, W ∈ RD×N , where D was set to 100 and N refers to the number of unique items in the dataset (i.e., 1854). In preliminary experiments, we observed that, after pruning, no model was left with a latent space of more than 100 dimensions, which is why we did not consider models with higher initial dimensionality.
Other details Please see section §A.1 for weight initialization and hyperparameter tuning.
3.3 PREDICTION EXPERIMENTS
3.3.1 EVALUATION MEASURES
Prediction accuracy Since human triplet choices are represented as three-dimensional one-hot vectors, where 1 represents the odd-one-out choice for a particular triplet, it is simple to compare them with model choices. The choice of a model is computed as argmax p(ŷ|θ), where p(ŷ|θ) refers to a model’s softmax probability distribution over a triplet given the model parameters (see Equation 9). If there is a tie in the softmax output, we regard this as an incorrect choice. A model can either be correct or incorrect, and no partial credit is given, guaranteeing a conservative measure of a model’s prediction behavior. The reported prediction accuracy is the fraction of trials where the model predicted the correct odd-one-out item. We can get an estimated upper bound on the Bayes accuracy, i.e., the best possible accuracy of any model, by using the repeats in the independent test set. As the optimal model predicts the repeat majority outcome for any triplet, this accuracy ceiling – 0.673 – is the average probability of the majority outcome over the set of all triplets.
Predicting Human Uncertainty The triplet task is subjective: there is no correct answer to any given triplet, and often subjects give all three. The independent test set gives us the probability distribution over answers for each triplet, graded information about the relative similarities of the three item pairs. Predicting this distribution precisely is a more stringent test of model quality than prediction accuracy, and of even more relevance in cognitive science applications. We quantify this through the KL divergence between the softmax probabilities of a model (see Section 2.3.3) and the empirical human probability distributions, obtained by computing discrete probability distributions for triplet repeats on the independent test set (see Section 3.1). We use the KL divergence because it is a commonly used measure for assessing the similarity between two probability distributions.
3.3.2 EXPERIMENT RESULTS
Full dataset We compared pruned median models of VICE and SPoSE, where the median model was identified by the median cross-entropy error on the tuning set. For VICE, we set the number of MC samples to m = 50 (see Equation 9) . On the independent test set, VICE and SPoSE achieved a similarly high prediction accuracy of 0.6380 and 0.6378, respectively, versus a chance-level accuracy of 0.333(3). Likewise, VICE and SPoSE achieved similar KL-divergences of 0.103 and 0.105, respectively, versus a chance-level KL-divergence of 0.366. The differences between the median model predictions across individual triplets in the test set were not statistically significant, under the null hypothesis according to a two-sided paired t-test, for either accuracy or KL-divergence. Hence, VICE and SPoSE predicted triplets equally well when they were both trained on the full dataset. This is not surprising, as Bayesian methods based on MC sampling become more like deterministic Maximum Likelihood Estimation (MLE) the more training data is available, per the Bernstein-von Mises theorem (Doob, 1949). As a result, the effects of the prior are most prominent when models are trained on datasets where ntrain is not particularly large, as we will see in the next section.
Efficiency on smaller datasets Performance on small datasets is especially important in cognitive science, where behavioral experiments often have low sample sizes (e.g. tens to hundreds of volunteer
in-lab subjects ) or can be costly to scale in AMT. To test whether VICE can model the data better than SPoSE when data are scarce, we created non-overlapping subsets of the training dataset. Specifically, we did this for subsets with sizes equal to 5%, 10%, 20%, and 50% of the dataset, yielding 20, 10, 5, and 2 subsets, respectively. Validation and test sets were unchanged. In Figure 3, we show the average prediction accuracy and KL divergence across random seeds, for models trained on every dataset size, including the full training set. Averages were computed across both random seeds and training subsets, where the average over random seeds was identified first to get a per-subset estimate; performance across subsets was then used to compute the confidence intervals (CIs). Figure 3 shows that the difference in prediction accuracy and KL divergence between VICE and SPoSE became more pronounced the fewer triplet samples were used for training. The difference was striking for the 5% and 10% data subsets, with ≈ 67, 500 and ≈ 135, 000 triplets, respectively. In the former, SPoSE predicted at chance-level; in the latter, it showed a large variation between random seeds and data splits, as can be seen in the 95% CIs in Figure 3. In both low-resource scenarios, VICE showed a compellingly small variation in the two performance metrics across random seeds, and predicted much better than chance-level. The differences between VICE and SPoSE for the 5% and 10% subsample scenarios were statistically significant according to a two-sided paired t-test (p < 0.001), comparing individual triplet predictions between the pruned median models.
3.4 REPRODUCIBILITY EXPERIMENTS
Beyond predictive performance, a key criterion for learning concept representations is reproducibility, i.e., learning similar representations when using different random initializations on the same training data. To assess this, we compare 20 differently initialized VICE and SPoSE models.
The first aspect of reproducibility is finding similar numbers of dimensions, quantified as the standard deviation of that number across all 20 models. As shown in Table 1, VICE identified fewer dimensions than SPoSE, and this had a lower standard deviation across models (1.64 vs. 2.30). The difference in standard deviation is, however, not statistically significant according to a two-sided F-test (F = 0.516, df = 19, p = 0.918). The second aspect is the extent to which the dimensions identified are similar across initializations. Since the embedding is not an ordered set of dimensions, we will deem a dimension learned in one VICE model reproducible if it is present in another independently trained instance of VICE, perhaps in a different column or with some small perturbation to the weights. To evaluate the number of highly reproducible dimensions, we match each embedding dimension of a given initialization (after the pruning step) with the most similar embedding dimension (in terms of Pearson correlation) of a second initialization. Given 20 differently initialized models, we quantify reproducibility of a dimension as the average Pearson correlation between one dimension and its best match across the 19 remaining models. In Table 1, we report the average number of dimensions with a Pearson correlation > 0.8 across the 20 initializations. Selected dimensions are similarly reproducible between VICE and SPoSE (see Table 1). Finally, we investigated whether our uncertainty-based pruning procedure selects reproducible dimensions. We compared the average reproducibility of selected dimensions with the average reproducibility of pruned dimensions, which are discarded by the procedure. The average reproducibility of the best subset, i.e., % of dimensions with Pearson’s r > 0.8, is 79.00%, whereas that of the dimensions that were pruned was 0.00%, i.e. our procedure is highly accurate at identifying those dimensions that reproduce reliably.
3.5 INTERPRETABILITY
One of the benefits of SPoSE is the interpretability of the dimensions of its concept embeddings, induced by sparsity and positivity constraints, and empirically tested through experiments in Hebart et al. (2020). VICE constrains the embeddings to be sparse through the spike-and-slab prior, and imposes a non-negativity constraint through applying a rectifier on its latent representations. This
means that, just as in SPoSE, it is easy to sort objects within a VICE dimension by their absolute weight values in descending order, to obtain human judgments of what a dimension represents. In Figure 4, we show the top six objects for four example VICE dimensions of the pruned median model, representing categorical, functional, structural, and visual information. In Appendix A.2 we show the top ten objects for every VICE dimension. Redoing the SPoSE dimension labeling experiments is beyond the scope of this paper, but we provide dimension labels from a small survey in the Appendix.
4 DISCUSSION
In this paper, we introduced VICE, a novel approach for embedding concepts in a non-negative, sparse space, and using those embeddings to predict human behavior in an odd-one-out task. We solve the same problem as an existing method, SPoSE, but using variational inference and a spikeand-slab prior, which is more appropriate for this modeling situation. VICE yields uncertainty information about the solution, enabling a statistical procedure to automatically determine the number of embedding dimensions, as opposed to the data-dependent heuristics that were used in SPoSE. VICE performs as well as SPoSE in terms of accurately predicting human decisions in an odd-one-out task and modeling the probability distribution over those decisions, but using fewer dimensions. However, this is the case only for the large dataset that was originally used to develop SPoSE. VICE performs substantially better than SPoSE on smaller datasets. Moreover, VICE is more stable than SPoSE, as the dimensionality of the embeddings varies less across random initializations. We believe these improvements stem from the combination of the prior and the dimension selection procedure.
We developed VICE with the goal of making it easier to build interpretable embedding spaces to model any type of item, by using an odd-one-out task. We require fewer participants than SPoSE due to higher data efficiency, which makes behavioral experiments more feasible. Our procedure for determining the number of dimensions aims at removing subjectivity, as the scientific motivation often is to discover the minimum number of latent factors required to describe observations. Note that while the user does need to choose the FDR tolerance q, this can be done before looking at any data, based on their degree of conservatism with regards to controlling false discoveries. Hence, it does not cause the problems associated with data-dependent tuning parameters such as regularization parameters or absolute thresholds. Finally, the spike-and-slab prior we use in VICE has intrinsically meaningful hyperparameters, which makes it easier for researchers to specify competing hypotheses about the representation space being studied.
A APPENDIX
A.1 EXPERIMENTAL SETUP
Weight initialization We initialized the weights of the encoder for the means of the distributions, Wµ, following a Kaiming He initialization (He et al., 2015). The weights of the encoder for the logarithm of the scales of the distributions, Wlog (σ), were initialized with = − 1sW0µ , such that W 0log(σ) = 1. This initialization allowed us to avoid bias terms within the linear transformations of the encoders, and additionally ensured, through computing σ = exp (log (σ)), that σ is a small continuous number in R+ at the beginning of training.
Hyperparameter grid To find the optimal VICE hyperparameter combination, we performed a grid search over π, σspike, σslab (see Equation 8). The final grid was the Cartesian product of these parameter sets: π = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, σspike = {0.125, 0.25, 0.5, 1.0, 2.0}, σslab = {0.25, 0.5, 1.0, 2.0, 4.0, 8.0}, subject to the constraint σspike << σslab, where combinations that did not satisfy the constraint were discarded. We observed that setting σslab > 8.0 led to numerical overflow issues during optimization, which is why 2^3 = 8 was the upper bound for σslab. For SPoSE, we use the same range as Zheng et al. (2019), with a finer grid of 64 values.
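The constrained grid can be built in a few lines (a sketch; with the strict constraint σspike < σslab this yields exactly the 180 combinations mentioned in the next paragraph):

```python
from itertools import product

pis = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
spikes = [0.125, 0.25, 0.5, 1.0, 2.0]
slabs = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
grid = [(pi, sp, sl) for pi, sp, sl in product(pis, spikes, slabs) if sp < sl]
assert len(grid) == 180
```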
Optimal hyperparameters We found the optimal VICE hyperparameter combination through a two step procedure, for which the validation set was split into equally sized pruning and tuning sets. First, among the final 180 combinations (see Cartesian product above), we applied our pruning method (see Section 2.3.4) to each model and kept the subsets of dimensions that led to the lowest cross-entropy error on one half of the validation set, which we call pruning set. Second, we evaluated each model with pruned parameters on the other half of the validation set, which we refer to as tuning set. We defined the optimal hyperparameter combination as that with the lowest average cross-entropy error on the tuning set across twenty different random initializations. The optimal hyperparameter combinations for VICE were σspike = 0.125, σslab = 1.0, π = 0.4; for SPoSE, it was λ = 5.75.
A.2 OBJECT DIMENSIONS
Here, we display the top 10 objects for each of the 47 VICE dimensions, according to their absolute weight value. As we have done for every other experiment, we used the pruned median model to guarantee the extraction of a representative sample of object dimensions without being over-optimistic with respect to their interpretability (see Section 3.3.1 for how the median model was identified). For each dimension we collected human responses in a small survey with a sample of convenience (n = 9). The labels that are shown below each object dimension represent the most common answer across human responses, when they were asked to name the respective dimension. More than one label is displayed, whenever there was a tie in the most common response. Labels were edited for coherence across similar answers (e.g. "metallic" and "made of metal" were deemed to be the same answer).
While the illustrations for each dimension display the top 10 items, in the survey itself we showed a continuum of items selected from bins centered around the 25th, 50th, and 75th percentiles, in addition to the top items, together with a random set of items with close-to-zero weight in the dimension, in order to avoid biasing our results.
METALLIC FOOD
PLANTS ANIMAL
HOME CLOTHES
OUTDOOR WOOD; MADE OF WOOD
POINTY; ELONGATED BODY PARTS
VEHICLE; TRANSPORTATION EXQUISITE; TRADITIONAL
ELECTRONIC COLORFUL
ROUND; CIRCULAR MANY OBJECTS; COLLECTION
STATIONERY; OFFICE SPORTS; GAMES
DECORATIVE; BEAUTIFUL CONTAINER; DRINKS
MARINE; WATER RED
BATHROOM; HYGIENE WAR; WEAPON
BLACK DUST; GRAINY TEXTURE
SPHERICAL; ROUND GREEN
WHITE SKY; FLYING
FLOOR; PATTERN LINES; GRATING PATTERN
MUSIC; SOUND SKY; TALL
INSECTS; PESTS N/A
FIRE; SMOKE FOOT; FOOTWEAR
CHAIN; ROPE; STRAND YELLOW; ORANGE
EYEWEAR; EYES; FACE SPIKY; HAIRY
CYLINDRICAL; ELONGATED STRINGY; FIBROUS
BABY; CHILDREN MEDICAL; HEALTHCARE
ICE; COLD | 1. What are the strengths and weaknesses of the proposed model (VICE) compared to the previous model (SPoSE)?
2. How does the paper address the issue of L1 penalization in SPoSE?
3. Can you explain the truncated Gaussian used in the variational Bayes method and its interpretation?
4. How does the paper assess the non-zero embedding mean, and what does it mean for an item to have a high p-value for all dimensions?
5. Is the choice of pruning by cluster in the dimensionality reduction method arbitrary, and is there a theoretical rationale behind it? | Summary Of The Paper
Review | Summary Of The Paper
The paper addresses a problem in cognitive science: identifying the embedding of objects in the brain's semantic space based on subjective judgments of objects' similarity (odd-one-out task). It proposes a new model (VICE) to address a few issues of the previous model (SPoSE). The new method uses a different form of prior (spike-and-slab) for the embedding and variational Bayes with a Gaussian posterior, and appears to perform as well as or better than SPoSE on several metrics.
Review
The topic addressed by the paper is an interesting and important one for cognitive science. It is nice that the paper pointed out a few limitations of the previous model, for example, that the L1 penalization may not match the actual distribution. The experiments generally appear solid. The better performance with small training datasets, as shown in Fig 3, is quite attractive.
But I also find the performance to be comparable to SPoSE in most cases and only better in a few aspects.
Other major issues: I did not really get how 2.3.5.1 was done. Since you mentioned mu and sigma, I suppose they refer to the approximate posterior distribution in the variational Bayes. But I did not get how you actually truncate it. Where is the truncated Gaussian used? Is it only used for calculating the p-value? Because this p-value has a different interpretation than the one typically used in classical hypothesis testing, I recommend you explain a bit more what probability the p actually indicates. I think such hypothesis testing requires the samples to be independent from each other. But because of the prior introduced, the posterior mean is biased. Does it still make sense to use such a p-value to assess a non-zero embedding mean? Also, what does it mean to be "predictive" in this paragraph?
It also seems conceptually strange for any items with means close to zero in all dimensions. What does the representation mean, if the p-value is high for such an item over all dimensions? It sounds like the object would be treated as "noise"?
It was criticized that the criterion for determining how many dimensions the solution should have in SPoSE is heuristic and subjective, but I find that allowing users to choose the q value in FDR to determine dimension importance is also subjective.
2.3.5.3 and 2.3.5.4: The choice of pruning by cluster sounds quite ad hoc. The purpose is only to reduce variance. But is there any theoretical rationale to justify this? GMM assumes each component is a latent cause. But why would dimensions of similar importance be of the same latent cause and share the same fate of being either retained or pruned?
ICLR | Title
VICE: Variational Inference for Concept Embeddings
Abstract
In this paper we introduce Variational Inference for Concept Embeddings (VICE), a novel method for learning object concept embeddings from human behavior in an odd-one-out task. We use variational inference to obtain a sparse, non-negative solution, with uncertainty information about each embedding value. We leverage this information in a statistical procedure for selecting the dimensionality of the model, based on hypothesis-testing over a validation set. VICE performs as well or better than previous methods on a variety of criteria: accuracy of predicting human behavior in an odd-one-out task, calibration to (empirical) human choice probabilities, reproducibility of object representations across different random initializations, and superior performance on small datasets. The latter is particularly important in cognitive science, where data collection is expensive. Finally, VICE yields highly interpretable object representations, allowing humans to describe the characteristics being represented by each latent dimension.
1 INTRODUCTION AND RELATED WORK
Human knowledge about object concepts encompasses many types of information, ranging from function to visual appearance, as well as encyclopedic facts or taxonomic characteristics. This knowledge supports the identification of objects, inferences about what interactions they support, or what the effects of such interactions in the environment will be. Key questions for cognitive scientists modelling human performance in experiments are 1) which of this information is accessible to participants and 2) how is it used across different tasks. Several studies (McRae et al., 2005; Devereux et al., 2013; Buchanan et al., 2019; Hovhannisyan et al., 2021) have asked subjects to list properties for hundreds to thousands of objects, yielding thousands of answers about the types of information above. Properties exist at many levels, ranging from categorization (e.g. "is an animal") to very specific facts (e.g. "is eaten in France"). Objects are implicitly represented as a vector of binary properties. This approach is agnostic to downstream prediction tasks, but does not provide an indication of which properties are more important – other than frequency of listing – and does not allow for graded property values. An alternative approach is for researchers to postulate dimensions of interest, and then ask human subjects to place each object in each dimension. An example is Binder et al. (2016), who collected ratings for hundreds of objects, as well as verbs and adjectives, in 65 dimensions reflecting sensory, motor, spatial, temporal, affective, social, and cognitive experiences.
The overall problem is then one of discovering a representation for objects that is not biased by a particular task, and is interpretable without requiring researchers to postulate the types of information represented. Several researchers have tried to develop interpretable concept representation spaces from text corpora, via word embeddings with positivity and sparsity constraints (Murphy et al., 2012), topic model representations of Wikipedia articles about objects (Pereira et al., 2013), transformations of word embeddings into sparse, positive spaces (Subramanian et al., 2018; Panigrahi et al., 2019) or predictions of properties (Devereux et al., 2013) or dimensions (Utsumi, 2020), or text corpora combined with imaging data (Fyshe et al., 2014) or with object images (Derby et al., 2018). Finally, Derby et al. (2019) introduced a neural network mapping the sparse feature space of a semantic property norm to the dense space of a word embedding, identifying informative combinations of properties or allowing ranking of candidate properties for arbitrary words.
Recently, Zheng et al. (2019) and Hebart et al. (2020) introduced SPoSE, a model of the mental representations of 1,854 objects in a 49-dimensional space. The model was derived from a dataset of
1.5M Amazon Mechanical Turk (AMT) judgments of object similarity, where subjects were asked which of a random triplet of objects was the odd one out. The model embedded each object as a vector in a space where each dimension was constrained to be sparse and positive. Triplet judgments were predicted as a function of the similarity between embedding vectors of the three objects considered. The authors showed that these dimensions were predictable as a combination of elementary properties in the Devereux et al. (2013) norm, which often co-occur across many objects. Hebart et al. (2020) further showed that 1) human subjects could coherently label what the dimensions were “about”, ranging from categorical (e.g. is animate, food, drink, building) to functional (e.g. container, tool) or structural (e.g. made of metal or wood, has inner structure). Subjects could also predict what dimension values new objects would have, based on knowing the dimension value for a few other objects. These results suggest that SPoSE captures core object knowledge that subjects use. Navarro & Griffiths (2008) introduced a related method for learning semantic concept embeddings from similarity data, which infers the number of latent dimensions using the Indian Buffet Process (IBP, Griffiths & Ghahramani (2011)), but their approach is not directly applicable to our setting due to reliance on continuous-valued similarity ratings instead of forced-choice behavior. Furthermore, it is known to be challenging to scale the IBP to the number of features and observations considered in our work (Ghahramani, 2013). Roads & Love (2021) introduced a related method for deriving an object embedding from behavior in a 8-rank-2 task. Their method aimed to predict behavior from the embeddings, using active sampling to query subjects with the most informative stimuli. The method was not meant to produce interpretable dimensions, but rather construct object similarity matrix as efficiently as possible.
There is growing interest by cognitive scientists in using SPoSE, as it makes it possible to discover an item representation for any kind of item amenable to an odd-one-out comparison in a triplet task. Furthermore, the combination of positivity and sparsity constraints in each dimension of the representation leads to interpretability by human subjects: no item is represented by every dimension, and most dimensions are present for only a few items. That item representation can then be used within other behavioral prediction models, to make predictions about neuroimaging data, etc.
For this potential to be realized, however, we believe a number of issues with SPoSE should be addressed. The first is the use of an l1 sparsity penalty to promote interpretability of dimensions. l1 achieves sparsity at the cost of unnecessarily shrinking larger values (Belloni & Chernozhukov, 2013). In SPoSE, 6-11 dominant dimensions for an object account for most of the prediction performance; the cost of removing irrelevant dimensions is to potentially make dominant dimensions smaller than they should be, and affect performance. Second, the l1 penalty is analogous to having a Laplace prior over those values. If we consider the distribution of values across objects for the two most important SPoSE dimensions, in Figure 1, we can see that they have a bimodal distribution, with a spike around 0 and a much smaller, wide slab of probability for non-zero values, which is not Laplace. Overcoming this "wrong" prior requires more data than strictly necessary to learn the representation. SPoSE was developed with a dataset that was orders of magnitude larger than what a typical experiment might collect, but it was never tested on smaller datasets. Finally, SPoSE uses a heuristic, subjective criterion for determining how many dimensions the solution should have.
In this paper we introduce VICE, an approach for variational inference of object concept embeddings in a space with interpretable sparse, positive dimensions, which addresses the SPoSE issues identified above. Specifically, we encourage sparsity and small weights by using a spike-and-slab prior. This is more appropriate than a Laplace prior, because importance – the value an object takes in a dimension – is different from relevance – whether the dimension matters for that object – and they can be
controlled separately with a spike-and-slab prior. The prior hyperparameters are meant to be intuitive to a user, and to make it easier to specify hypotheses about dimensional structure. We use variational Bayes both because it is a Bayesian approach, and also because it assumes a unimodal posterior for the loading of each object in each dimension. It also allows a more principled procedure for determining how many dimensions the model should have, by taking into account uncertainty about their values. We compare our model with SPoSE over different subsets of the dataset used to develop it, and verify that it performs as well or better by various criteria: prediction of behavior, calibration of the prediction of decision probabilities, and reproducibility of solutions across seeds. Importantly, it has significantly better performance on smaller datasets (5− 10% of the original SPoSE dataset). Our implementation of VICE is available on GitHub1, and will be de-anonymized upon acceptance.
2 METHODS
2.1 ODD-ONE-OUT TASK
The odd-one-out task is motivated by the problem of discovering object embeddings based on similarity judgments involving a set of m different object concepts, which we will denote by c1, . . . , cm (e.g. c1 = ‘aardvark’, . . . , c1854 = ‘zucchini’). These similarity judgments are collected from human participants, who are given queries which consist of a ‘triplet’ of three concepts {ci1 , ci2 , ci3}, for instance, {c268, c609, c1581} = {‘suit’, ‘flamingo’, ‘car’}. Participants are asked to consider the three pairs within the triplet {(ci1 , ci2), (ci1 , ci3), (ci2 , ci3)}, and to decide which item has the smallest similarity to the other two (the "odd-one-out"). This is equivalent to choosing the pair with the greatest similarity. Let (y1, y2) denote the indices in this pair, e.g. for ‘suit’ and ‘flamingo’ they would be (y1, y2) = (268, 609). A dataset D is a set of N pairs of concept triplets and one-hot vectors that correspond to the indices of the two most similar concepts, i.e. ({ci1 , ci2 , ci3}, (yi1 , yi2)).
2.2 SPOSE
Sparse Positive object Similarity Embedding (SPoSE) (Zheng et al., 2019) is an approach for finding interpretable item dimensions from an odd-one-out task. It does so by finding an embedding vector xi = (xi1, . . . , xip) for every item ci. The similarity Sij of two items (e.g. ci and cj) is computed as the dot product of the corresponding embeddings (i.e. xi and xj), Sij = 〈xi, xj〉. From these similarities, the probability of choosing (yi1 , yi2) as the most similar pair of items given the item triplet {ci1 , ci2 , ci3} and given embedding vectors {xi1 , xi2 , xi3} is computed as:
$$p\big((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}, \{x_{i_1}, x_{i_2}, x_{i_3}\}\big) = \frac{\exp(S_{y_{i_1},y_{i_2}})}{\exp(S_{i_1,i_2}) + \exp(S_{i_1,i_3}) + \exp(S_{i_2,i_3})} \qquad (1)$$
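To illustrate Equation 1, the following sketch (our own code; names and toy values are hypothetical) computes the choice probabilities for a single triplet from three embedding vectors.

```python
import numpy as np

def triplet_choice_probabilities(x1, x2, x3):
    """Softmax over the three pairwise similarities of a triplet (Eq. 1).

    Returns the probabilities of choosing the pairs (1,2), (1,3), (2,3)
    as most similar; the remaining item is the predicted odd-one-out.
    """
    logits = np.array([x1 @ x2, x1 @ x3, x2 @ x3])
    logits -= logits.max()                      # for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy non-negative embeddings for 'suit', 'flamingo', and 'car'.
x_suit = np.array([1.0, 0.2, 0.0, 0.3])
x_flamingo = np.array([0.1, 1.2, 0.0, 0.2])
x_car = np.array([0.0, 0.1, 1.5, 0.4])
print(triplet_choice_probabilities(x_suit, x_flamingo, x_car))
```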
SPoSE uses maximum a posteriori (MAP) estimation to find the most likely embedding given the training data and a prior:
$$\operatorname*{argmax}_{X} \log p(X \mid \mathcal{D}_{\text{train}}) = \operatorname*{argmax}_{X} \big[\log p(\mathcal{D}_{\text{train}} \mid X) + \log p(X)\big], \qquad (2)$$
1Link to anonymous GitHub repository: https://anonymous.4open.science/r/VICE-59F0
where X is a matrix containing the embedding vectors for all of the items and p(X) is a prior for the embeddings, and p(Dtrain,j |X) is defined in (7). To induce sparsity in the embeddings, SPoSE uses a mean-field Laplace prior, leading to this objective:
$$\operatorname*{argmax}_{X} \sum_{j=1}^{n_{\text{train}}} \log p(\mathcal{D}_{\text{train},j} \mid X) - \lambda \sum_{i=1}^{m} \|x_i\|_1 \qquad (3)$$
Here, $\| \cdot \|_1$ is the $l_1$ norm, so $\|x\|_1 = \sum_{f=1}^{p} |x_f|$, and $x_f \geq 0$ for $f = 1, \ldots, p$. The regularization parameter, λ, is selected out of a grid of candidate values by choosing the one that achieves the lowest (average) cross-entropy on the validation set (across twenty random seeds). The final dimensionality of the embedding, p, is determined heuristically from the data. If p is set to be larger than the number of dimensions supported by the data, the SPoSE algorithm will shrink entire dimensions towards zero by removing weights with a magnitude less than a given absolute threshold. While a threshold of 0.1 is suggested (Zheng et al., 2019), no justification is given for that particular value, which is problematic given that the number of dimensions removed is quite sensitive to that choice.
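A minimal sketch of a SPoSE-style training loss, assuming triplets are stored with the chosen pair in the first two columns; this is our own simplification rather than the reference implementation, and the non-negativity constraint is assumed to be enforced separately (e.g. by clamping after each gradient step).

```python
import torch

def spose_loss(X, triplets, lam):
    """Negative log-likelihood of the chosen pairs plus an l1 penalty (cf. Eq. 3).

    X        : (n_items, p) non-negative embedding matrix with requires_grad=True.
    triplets : (b, 3) LongTensor; columns 0 and 1 hold the pair judged most
               similar, column 2 holds the odd-one-out.
    lam      : l1 regularization strength (lambda).
    """
    xi, xj, xk = X[triplets[:, 0]], X[triplets[:, 1]], X[triplets[:, 2]]
    logits = torch.stack([(xi * xj).sum(-1),        # similarity of chosen pair
                          (xi * xk).sum(-1),
                          (xj * xk).sum(-1)], dim=-1)
    nll = -torch.log_softmax(logits, dim=-1)[:, 0].mean()
    return nll + lam * X.abs().sum()
```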
2.3 VICE
2.3.1 VARIATIONAL BAYESIAN INFERENCE
Given the goal of better approximating p(X|Dtrain), we use variational inference. We approximate p(X|Dtrain) with a variational distribution, qθ(X), where q is our chosen family of distributions, and θ is a parameter that is learned in order to optimize the Kullback–Leibler (KL) divergence to the true posterior, p(X|Dtrain). In variational inference, the KL divergence objective function is:
$$\operatorname*{argmin}_{\theta} \; \mathbb{E}_{q_\theta(X)}\!\left[ \frac{1}{n_{\text{train}}}\big(\log q_\theta(X) - \log p(X)\big) - \frac{1}{n_{\text{train}}} \sum_{i=1}^{n_{\text{train}}} \log p(\mathcal{D}_{\text{train},i} \mid X) \right] \qquad (4)$$
In order to use variational inference, a parametric variational distribution must be chosen. For VICE, we use a Gaussian variational distribution with a diagonal covariance matrix qθ(X) = N (µ, diag(σ2)), where the learnable parameters θ are µ and σ. This means that each embedding dimension has a mean, the most likely value for that dimension, and a standard deviation, the propensity of the embedding value to be close to the mean.
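A minimal sketch of how these variational parameters might be stored and sampled, assuming one location and one log-scale per item and dimension; the class and initialization values are our own, not the paper's implementation.

```python
import torch
import torch.nn as nn

class VariationalEmbedding(nn.Module):
    """Mean-field Gaussian q(X) = N(mu, diag(sigma^2)) over an (n_items x p) embedding."""

    def __init__(self, n_items: int, p: int):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_items, p) * 0.01)
        # Parameterize the scale through its logarithm so that sigma stays positive.
        self.log_sigma = nn.Parameter(torch.full((n_items, p), -4.0))

    def sample(self) -> torch.Tensor:
        """One reparameterized sample X = mu + sigma * eps (cf. Eq. 5)."""
        eps = torch.randn_like(self.mu)
        return self.mu + self.log_sigma.exp() * eps
```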
Similarly to Blundell et al. (2015), we use a Monte Carlo (MC) approximation of the above objective function by sampling a limited number of Xs from qµ,σ(X) during training. We generate X by means of the reparameterization trick (Kingma & Welling, 2013), $X_{\theta,\epsilon} = \mu + \sigma \cdot \epsilon$, where $\epsilon$ is an $N \times p$ matrix of standard normal variates, leading to the objective:
$$\operatorname*{argmin}_{\theta} \; \frac{1}{m} \sum_{j=1}^{m} \left[ \frac{1}{n_{\text{train}}}\big(\log q_\theta(X_{\theta,\epsilon_j}) - \log p(X_{\theta,\epsilon_j})\big) - \frac{1}{n_{\text{train}}} \sum_{i=1}^{n_{\text{train}}} \log p\big(\mathcal{D}_{\text{train},i} \mid [X_{\theta,\epsilon_j}]_+\big) \right] \qquad (5)$$
where $\epsilon_j \in \mathbb{R}^{N \times p}$ is entrywise $\mathcal{N}(0, 1)$ and where $[\cdot]_+$ is the ReLU function. As commonly done in the dropout and Bayesian neural network literature (Srivastava et al., 2014; Blundell et al., 2015; Gal & Ghahramani, 2016; McClure & Kriegeskorte, 2016), we set m to 1 for computational efficiency.
In Equation 5, the expected log-likelihood of the entire training data is computed. However, using the entire training data set to compute the gradient update often works poorly for non-convex objective functions. This is due to the expensive computational cost of each update and to the convergence to poorly generalizing solutions (Smith et al., 2020). As a result, we stochastically approximate (Robbins & Monro, 1951) the training log-likelihood using random subsets (i.e. mini-batches) of the training dataset, with each mini-batch consisting of b triplets. This leads to the final objective
$$\operatorname*{argmin}_{\theta} \; \frac{1}{n_{\text{train}}}\big(\log q_\theta(X_{\theta,\epsilon}) - \log p(X_{\theta,\epsilon})\big) - \frac{1}{b} \sum_{i=1}^{b} \log p\big(\mathcal{D}_{\text{train},i} \mid [X_{\theta,\epsilon}]_+\big) \qquad (6)$$
recalling that
$$p(\mathcal{D}_{\text{train},i} \mid X) = \frac{\exp(x_{y_1,i}^{\top} x_{y_2,i})}{\exp(x_{i_1,i}^{\top} x_{i_2,i}) + \exp(x_{i_1,i}^{\top} x_{i_3,i}) + \exp(x_{i_2,i}^{\top} x_{i_3,i})} \qquad (7)$$
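Putting Equations 5–7 together, a single mini-batch loss could be sketched as below (our own code, with m = 1 Monte Carlo sample; `VariationalEmbedding` refers to the sketch above and `log_prior_fn` is a stub for the prior introduced in the next subsection).

```python
import torch
import torch.nn.functional as F

def vice_minibatch_loss(embedding, log_prior_fn, triplets, n_train):
    """One-sample Monte Carlo estimate of the objective in Eq. 6.

    embedding    : module exposing mu, log_sigma, and sample() (see sketch above).
    log_prior_fn : callable returning log p(X) for a sampled embedding matrix.
    triplets     : (b, 3) LongTensor; columns 0 and 1 hold the chosen pair.
    n_train      : total number of training triplets (scales the complexity term).
    """
    X = embedding.sample()                      # X = mu + sigma * eps
    Xp = F.relu(X)                              # non-negativity via [.]_+
    log_q = torch.distributions.Normal(
        embedding.mu, embedding.log_sigma.exp()).log_prob(X).sum()
    log_p = log_prior_fn(X)
    xi, xj, xk = Xp[triplets[:, 0]], Xp[triplets[:, 1]], Xp[triplets[:, 2]]
    logits = torch.stack([(xi * xj).sum(-1), (xi * xk).sum(-1),
                          (xj * xk).sum(-1)], dim=-1)
    nll = -torch.log_softmax(logits, dim=-1)[:, 0].mean()   # likelihood of Eq. 7
    return (log_q - log_p) / n_train + nll
```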
2.3.2 SPIKE-AND-SLAB PRIOR
A key feature of SPoSE is sparsity. As discussed above, SPoSE induced sparsity using a zero-mean Laplace prior. We can empirically examine whether the Laplace prior is a realistic assumption, given the distribution of values in SPoSE dimensions. As Figure 1 depicts, these histograms do not resemble a Laplace distribution. Instead, it looks like there is a "spike" of probability at zero and a much smaller, but wide, "slab" of probability for the non-zero values of a SPoSE dimension. To model this, we use a spike-and-slab Gaussian mixture prior, as introduced in Blundell et al. (2015):
$$p(X) = \prod_{i=1}^{N} \prod_{f=1}^{p} \Big(\pi\,\mathcal{N}(x_{if};\, 0, \sigma^2_{\text{spike}}) + (1-\pi)\,\mathcal{N}(x_{if};\, 0, \sigma^2_{\text{slab}})\Big) \qquad (8)$$
This prior has three parameters. π is the probability that an embedding dimension will be drawn from the spike Gaussian instead of the slab Gaussian. The standard deviations σspike and σslab control the likelihood of an embedding value being set to 0 in the spike or slab distributions, respectively. xif is the embedding weight for the ith item in the f th dimension. Since spike and slab distributions are mathematically interchangeable, by convention we require that σspike << σslab. In our experiments, these are chosen with grid search on one half of the validation set, the “tuning set”. (The other half of the validation set, the “pruning set”, is used for dimensionality reduction, as we describe in §2.3.4.)
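A sketch of the log-density of the prior in Equation 8, written with log-sum-exp for numerical stability (our own code; the default hyperparameter values are the optimal ones reported in Appendix A.1). It can serve as the `log_prior_fn` stub in the loss sketch above.

```python
import math
import torch

def spike_and_slab_log_prior(X, pi=0.4, sigma_spike=0.125, sigma_slab=1.0):
    """log p(X) under the Gaussian spike-and-slab mixture of Eq. 8, summed over entries."""
    def log_normal(x, sigma):
        return -0.5 * (x / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)
    log_spike = math.log(pi) + log_normal(X, sigma_spike)
    log_slab = math.log(1.0 - pi) + log_normal(X, sigma_slab)
    return torch.logsumexp(torch.stack([log_spike, log_slab]), dim=0).sum()
```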
2.3.3 PREDICTING THE ODD-ONE-OUT USING VICE
In this work, we consider two different prediction problems given a new triplet: (1) predicting the choice and (2) predicting the distribution of the choice. In either case, we start with computing the posterior probability distribution over the three triplet choices. If predicting a choice, then we output the choice with the maximum posterior probability. (For details on how we handle ties, see §3.3.1.) If the goal is to predict the distribution, then we return the predicted distribution.
The predicted probability distribution is computed from the variational posterior, qθ(X). When making predictions, we want to compute the probability of an odd-one-out for a given triplet. We approximate this probability by using an MC estimate from $m$ samples $X_j = X_{\theta,\epsilon_j}$ for $j = 1, \ldots, m$ (Graves, 2011; Blundell et al., 2015; Kingma & Welling, 2014; McClure & Kriegeskorte, 2016; Blei et al., 2017). Mathematically, this means that we compute the predicted distribution as
$$\hat{p}\big((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}\big) \approx \frac{1}{m} \sum_{j=1}^{m} p\big((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}, \{[x^{j}_{i_1}]_+, [x^{j}_{i_2}]_+, [x^{j}_{i_3}]_+\}\big) \qquad (9)$$
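The Monte Carlo estimate of Equation 9 can be sketched as follows (our own code; `embedding` refers to the variational-embedding sketch above, and the returned probabilities are ordered as (i,j), (i,k), (j,k)).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_triplet(embedding, triplet, m=50):
    """Average the softmax of Eq. 1 over m rectified posterior samples (Eq. 9)."""
    i, j, k = triplet
    probs = torch.zeros(3)
    for _ in range(m):
        X = F.relu(embedding.sample())          # non-negative posterior sample
        logits = torch.stack([X[i] @ X[j], X[i] @ X[k], X[j] @ X[k]])
        probs += torch.softmax(logits, dim=0)
    return probs / m
```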
2.3.4 DIMENSIONALITY REDUCTION FOR VICE
For interpretability purposes, it is crucial that the model does not use more dimensions than necessary. Zheng et al. (2019) accomplished this through the sparsity-inducing penalty that causes dimensions to shrink towards zero if they do not contribute to explaining the data. (see Section 2.2). Note, however, that these uninformative weights do not totally go to zero because of noise in the gradients. Hence, these dimensions were pruned by choosing a threshold for the L1 norm of the dimension based on looking at the “elbow plot” of the sorted L1 norms of the dimensions; this approach is subjective and highly dependent on the specific dataset. In VICE, the KL penalty we use has a similar effect of causing uninformative weights to shrink. Rather than using a user-defined threshold to prune dimensions, VICE exploits the uncertainty information obtained in training the model to select a set of informative dimensions. The pruning procedure consists of three steps: (1) assigning an importance score to each dimension; (2) clustering dimensions by importance; and (3) choosing the subset of clusters that best explains the validation set. We describe each of these three steps in detail below.
Assign an importance score to each dimension Intuitively, the importance score reflects the number of objects that we can confidently say have non-zero weight in a dimension. To compute the score, we start by using the variational embedding for each item i – location µij and scale σij parameters, to compute the posterior probability that the weight will be truncated to zero according to the left tail of a Gaussian distribution with that location and scale (as described in §2.3.1). This gives us a posterior probability of the weight taking the value zero for each item within a dimension (Graves, 2011). To calculate the overall importance of a dimension, we estimate the number of items that plausibly have non-zero weights given a user-specified False Discovery Rate target (FDR) (Benjamini & Hochberg, 1995). FDR provides a method for inferring the number of hypotheses which are non-null, based on an array of p-values, with statistical guarantees on the expected proportion of false rejections. We define dimension importance as the number of rejections given by the BH(q) algorithm, with the FDR tolerance q specified by the user, using the posterior zero-probabilities as the p-values.
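A sketch of the importance score, under the assumption that the posterior zero-probability of each weight is the left tail of its Gaussian at zero and that these probabilities are fed to the Benjamini–Hochberg procedure as p-values (our own simplification of the procedure described above).

```python
import numpy as np
from scipy.stats import norm

def dimension_importance(mu, sigma, q=0.05):
    """Per-dimension count of items with plausibly non-zero weight.

    mu, sigma : (n_items, p) arrays of variational locations and scales.
    q         : user-specified FDR tolerance for the BH(q) procedure.
    """
    p_zero = norm.cdf(0.0, loc=mu, scale=sigma)   # P(weight truncated to zero)
    n_items, p = mu.shape
    importance = np.zeros(p, dtype=int)
    for f in range(p):
        pvals = np.sort(p_zero[:, f])
        thresholds = q * np.arange(1, n_items + 1) / n_items
        below = np.nonzero(pvals <= thresholds)[0]
        importance[f] = 0 if below.size == 0 else int(below[-1]) + 1  # BH rejections
    return importance
```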
Cluster dimensions by importance using a Gaussian mixture model Given the importance scores in the previous step, a reasonable approach would be to sort dimensions by importance, and then use the left-out half of the validation set, or “pruning set”, to determine the k most important dimensions to include. However, we found that this approach led to high variance, due to the existence of groups of dimensions with very similar importance scores. We hypothesized that these groups of dimension corresponded to different feature types, as observed in Zheng et al. (2019). As McRae et al. (2005) discusses, these features can be grouped into different feature types, such as categorical, functional, encyclopedic, visual-perceptual and non-visual-perceptual. Therefore, the second step in our pruning method creates clusters of dimensions that have similar importance. We fit GMMs with varied number of components k (e.g. k ∈ {1, 2, ..., 6}) to the importance scores for each dimension, and find the number of components/modes that show the lowest Bayesian Information Criterion (BIC). Here, we limit the number of possible clusters to 6, as a conservative estimate on the number of distinct feature types (e.g. categorical, functional, perceptual) with possibly differing sparsity ranges–i.e., categorical features may apply to a large subset of items, while specific visual features might apply only to a handful. We cluster dimensions into k modes.
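The clustering step might look as follows, with the number of mixture components selected by BIC (our own sketch using scikit-learn).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_dimensions(importance, max_components=6, seed=0):
    """Group dimensions by importance score with a 1-D GMM whose k minimizes BIC."""
    scores = np.asarray(importance, dtype=float).reshape(-1, 1)
    best_gmm, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(scores)
        bic = gmm.bic(scores)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    return best_gmm.predict(scores)   # cluster label for each dimension
```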
Choosing the subset of dimension clusters that best explains the validation set We find the best non-empty subset of clusters of dimensions, in terms of cross-entropy on the validation “pruning” set, and prune all clusters of dimensions outside of this subset. (If a given feature is uninformative, then features with similar importance scores are likely to be similarly uninformative.)
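The final selection step can then score every non-empty subset of clusters on the pruning set; with at most six clusters there are at most 63 subsets, so exhaustive search is cheap. In the sketch below, `pruning_set_cross_entropy` is a hypothetical helper that evaluates the model restricted to a given set of dimensions.

```python
from itertools import combinations
import numpy as np

def select_clusters(labels, pruning_set_cross_entropy):
    """Return the dimension indices in the best-scoring non-empty subset of clusters.

    labels : (p,) array of cluster labels, one per dimension.
    pruning_set_cross_entropy : callable mapping an array of dimension indices
        to cross-entropy on the validation "pruning" set.
    """
    clusters = np.unique(labels)
    best_dims, best_ce = None, np.inf
    for r in range(1, len(clusters) + 1):
        for subset in combinations(clusters, r):
            dims = np.nonzero(np.isin(labels, subset))[0]
            ce = pruning_set_cross_entropy(dims)
            if ce < best_ce:
                best_dims, best_ce = dims, ce
    return best_dims
```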
3 EXPERIMENTS
3.1 DATA
We used two datasets from Zheng et al. (2019), selected after quality control. The first contained judgments on 1,450,119 randomly selected triplets. We used a random subsample of 90% of these triplets for the training set, and the remaining 10% for the validation set (tuning and pruning). The second was an independent test set of 19,968 triplets with 25 repeats for each of 1,000 randomly selected triplets; none of these were present in the training set. Having this many repeats allows us to be confident of the response probability for each triplet. Furthermore, it allows us to establish a model-free estimate of the Bayes accuracy, the best possible accuracy achievable by any model.
3.2 EXPERIMENTAL SETUP
Training We implemented both SPoSE and VICE in PyTorch (Paszke et al., 2019) using Adam (Kingma & Ba, 2015) with α = 0.001. To guarantee a fair comparison between VICE and SPoSE, each model configuration was trained using 20 different random seeds, for a fixed number of 1000 epochs. Each model was initialized with a weight matrix, W ∈ RD×N , where D was set to 100 and N refers to the number of unique items in the dataset (i.e., 1854). In preliminary experiments, we observed that, after pruning, no model was left with a latent space of more than 100 dimensions, which is why we did not consider models with higher initial dimensionality.
Other details Please see section §A.1 for weight initialization and hyperparameter tuning.
3.3 PREDICTION EXPERIMENTS
3.3.1 EVALUATION MEASURES
Prediction accuracy Since human triplet choices are represented as three-dimensional one-hotvectors, where 1 represents the odd-one-out choice for a particular triplet, it is simple to compare them with model choices. The choice of a model is computed as argmax p(ŷ|θ), where p(ŷ|θ) refers to a model’s softmax probability distribution over a triplet given the model parameters (see Equation 9). If there is a tie in the softmax output, we regard this as an incorrect choice. A model can either be correct or incorrect, and no partial credit is given, guaranteeing a conservative measure of a model’s prediction behavior. The reported prediction accuracy is the fraction of trials where the model predicted the correct odd-one-out item. We can get an estimated upper bound on the Bayes accuracy, i.e., the best possible accuracy of any model, by using the repeats in the independent test set. As the optimal model predicts the repeat majority outcome for any triplet, this accuracy ceiling – 0.673 – is the average probability of the majority outcome over the set of all triplets.
Predicting Human Uncertainty The triplet task is subjective: there is no correct answer to any given triplet, and often subjects give all three. The independent test set gives us the probability distribution over answers for each triplet, graded information about the relative similarities of the three item pairs. Predicting this distribution precisely is a more stringent test of model quality than prediction accuracy, and of even more relevance in cognitive science applications. We quantify this through the KL divergence between the softmax probabilities of a model (see Section 2.3.3) and the empirical human probability distributions, obtained by computing discrete probability distributions for triplet repeats on the independent test set (see Section 3.1). We use the KL divergence because it is a commonly used measure for assessing the similarity between two probability distributions.
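Both evaluation measures are straightforward to compute; the sketch below is our own code, scores ties as incorrect as described above, and assumes the KL divergence is taken from the empirical human distribution to the model distribution.

```python
import numpy as np

def accuracy(pred_probs, choices):
    """Fraction of triplets whose single most probable pair matches the human choice.

    pred_probs : (n, 3) model probabilities over the three pairs.
    choices    : (n,) index of the pair chosen by the participants.
    """
    top = pred_probs.argmax(axis=1)
    is_unique = (pred_probs == pred_probs.max(axis=1, keepdims=True)).sum(axis=1) == 1
    return float(np.mean(is_unique & (top == choices)))

def mean_kl(human_probs, pred_probs, eps=1e-12):
    """Average KL(human || model) over the triplets of the independent test set."""
    h = np.clip(human_probs, eps, 1.0)
    p = np.clip(pred_probs, eps, 1.0)
    return float(np.mean(np.sum(h * np.log(h / p), axis=1)))
```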
3.3.2 EXPERIMENT RESULTS
Full dataset We compared pruned median models of VICE and SPoSE, where the median model was identified by the median cross-entropy error on the tuning set. For VICE, we set the number of MC samples to m = 50 (see Equation 9) . On the independent test set, VICE and SPoSE achieved a similarly high prediction accuracy of 0.6380 and 0.6378, respectively, versus a chance-level accuracy of 0.333(3). Likewise, VICE and SPoSE achieved similar KL-divergences of 0.103 and 0.105, respectively, versus a chance-level KL-divergence of 0.366. The differences between the median model predictions across individual triplets in the test set were not statistically significant, under the null hypothesis according to a two-sided paired t-test, for either accuracy or KL-divergence. Hence, VICE and SPoSE predicted triplets equally well when they were both trained on the full dataset. This is not surprising, as Bayesian methods based on MC sampling become more like deterministic Maximum Likelihood Estimation (MLE) the more training data is available, per the Bernstein-von Mises theorem (Doob, 1949). As a result, the effects of the prior are most prominent when models are trained on datasets where ntrain is not particularly large, as we will see in the next section.
Efficiency on smaller datasets Performance on small datasets is especially important in cognitive science, where behavioral experiments often have low sample sizes (e.g. tens to hundreds of volunteer
in-lab subjects ) or can be costly to scale in AMT. To test whether VICE can model the data better than SPoSE when data are scarce, we created non-overlapping subsets of the training dataset. Specifically, we did this for subsets with sizes equal to 5%, 10%, 20%, and 50% of the dataset, yielding 20, 10, 5, and 2 subsets, respectively. Validation and test sets were unchanged. In Figure 3, we show the average prediction accuracy and KL divergence across random seeds, for models trained on every dataset size, including the full training set. Averages were computed across both random seeds and training subsets, where the average over random seeds was identified first to get a per-subset estimate; performance across subsets was then used to compute the confidence intervals (CIs). Figure 3 shows that the difference in prediction accuracy and KL divergence between VICE and SPoSE became more pronounced the fewer triplet samples were used for training. The difference was striking for the 5% and 10% data subsets, with ≈ 67, 500 and ≈ 135, 000 triplets, respectively. In the former, SPoSE predicted at chance-level; in the latter, it showed a large variation between random seeds and data splits, as can be seen in the 95% CIs in Figure 3. In both low-resource scenarios, VICE showed a compellingly small variation in the two performance metrics across random seeds, and predicted much better than chance-level. The differences between VICE and SPoSE for the 5% and 10% subsample scenarios were statistically significant according to a two-sided paired t-test (p < 0.001), comparing individual triplet predictions between the pruned median models.
3.4 REPRODUCIBILITY EXPERIMENTS
Beyond predictive performance, a key criterion for learning concept representations is reproducibility, i.e., learning similar representations when using different random initializations on the same training data. To assess this, we compare 20 differently initialized VICE and SPoSE models.
The first aspect of reproducibility is finding similar numbers of dimensions, quantified as the standard deviation of that number across all 20 models. As shown in Table 1, VICE identified fewer dimensions than SPoSE, and this had a lower standard deviation across models (1.64 vs. 2.30). The difference in standard deviation is, however, not statistically significant according to a two-sided F-test (F = 0.516, df = 19, p = 0.918). The second aspect is the extent to which the dimensions identified are similar across initializations. Since the embedding is not an ordered set of dimensions, we will deem a dimension learned in one VICE model reproducible if it is present in another independently trained instance of VICE, perhaps in a different column or with some small perturbation to the weights. To evaluate the number of highly reproducible dimensions, we match each embedding dimension of a given initialization (after the pruning step) with the most similar embedding dimension (in terms of Pearson correlation) of a second initialization. Given 20 differently initialized models, we quantify reproducibility of a dimension as the average Pearson correlation between one dimension and its best match across the 19 remaining models. In Table 1, we report the average number of dimensions with a Pearson correlation > 0.8 across the 20 initializations. Selected dimensions are similarly reproducible between VICE and SPoSE (see Table 1). Finally, we investigated whether our uncertainty-based pruning procedure selects reproducible dimensions. We compared the average reproducibility of selected dimensions with the average reproducibility of pruned dimensions, which are discarded by the procedure. The average reproducibility of the best subset, i.e., the percentage of dimensions with Pearson’s r > 0.8, is 79.00%, whereas that of the dimensions that were pruned was 0.00%; i.e., our procedure is highly accurate at identifying those dimensions that reproduce reliably.
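The best-match correlation used above might be computed as in the following sketch (our own code); a dimension would then be deemed reproducible if its average best-match correlation across the other 19 models exceeds 0.8.

```python
import numpy as np

def best_match_correlations(A, B, eps=1e-12):
    """For each dimension (column) of embedding A, return the highest Pearson
    correlation with any dimension of an independently trained embedding B."""
    Az = (A - A.mean(axis=0)) / (A.std(axis=0) + eps)
    Bz = (B - B.mean(axis=0)) / (B.std(axis=0) + eps)
    corr = Az.T @ Bz / A.shape[0]      # (p_A, p_B) matrix of Pearson correlations
    return corr.max(axis=1)
```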
3.5 INTERPRETABILITY
One of the benefits of SPoSE is the interpretability of the dimensions of its concept embeddings, induced by sparsity and positivity constraints, and empirically tested through experiments in Hebart et al. (2020). VICE constrains the embeddings to be sparse through the spike-and-slab prior, and imposes a non-negativity constraint through applying a rectifier on its latent representations. This
means that, just as in SPoSE, it is easy to sort objects within a VICE dimension by their absolute weight values in descending order, to obtain human judgments of what a dimension represents. In Figure 4, we show the top six objects for four example VICE dimensions of the pruned median model, representing categorical, functional, structural, and visual information. In Appendix A.2 we show the top ten objects for every VICE dimension. Redoing the SPoSE dimension labeling experiments is beyond the scope of this paper, but we provide dimension labels from a small survey in the Appendix.
4 DISCUSSION
In this paper, we introduced VICE, a novel approach for embedding concepts in a non-negative, sparse space, and using those embeddings to predict human behavior in an odd-one-out task. We solve the same problem as an existing method, SPoSE, but using variational inference and a spike-and-slab prior, which is more appropriate for this modeling situation. VICE yields uncertainty information about the solution, enabling a statistical procedure to automatically determine the number of embedding dimensions, as opposed to the data-dependent heuristics that were used in SPoSE. VICE performs as well as SPoSE in terms of accurately predicting human decisions in an odd-one-out task and modeling the probability distribution over those decisions, but using fewer dimensions. However, this is the case only for the large dataset that was originally used to develop SPoSE. VICE performs substantially better than SPoSE on smaller datasets. Moreover, VICE is more stable than SPoSE, as the dimensionality of the embeddings varies less across random initializations. We believe these improvements stem from the combination of the prior and the dimension selection procedure.
We developed VICE with the goal of making it easier to build interpretable embedding spaces to model any type of item, by using an odd-one-out task. We require fewer participants than SPoSE due to higher data efficiency, which makes behavioral experiments more feasible. Our procedure for determining the number of dimensions aims at removing subjectivity, as the scientific motivation often is to discover the minimum number of latent factors required to describe observations. Note that while the user does need to choose the FDR tolerance q, this can be done before looking at any data, based on their degree of conservatism with regards to controlling false discoveries. Hence, it does not cause the problems associated with data-dependent tuning parameters such as regularization parameters or absolute thresholds. Finally, the spike-and-slab prior we use in VICE has intrinsically meaningful hyperparameters, which makes it easier for researchers to specify competing hypotheses about the representation space being studied.
A APPENDIX
A.1 EXPERIMENTAL SETUP
Weight initialization We initialized the weights of the encoder for the means of the distributions, Wµ, following a Kaiming He initialization (He et al., 2015). The weights of the encoder for the logarithm of the scales of the distributions, Wlog (σ), were initialized with = − 1sW0µ , such that W 0log(σ) = 1. This initialization allowed us to avoid bias terms within the linear transformations of the encoders, and additionally ensured, through computing σ = exp (log (σ)), that σ is a small continuous number in R+ at the beginning of training.
Hyperparameter grid To find the optimal VICE hyperparameter combination, we performed a grid search over π, σspike, σslab (see Equation 8). The final grid was the Cartesian product of these parameter sets: π = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, σspike = {0.125, 0.25, 0.5, 1.0, 2.0}, σslab = {0.25, 0.5, 1.0, 2.0, 4.0, 8.0}, subject to the constraint σspike << σslab, where combinations that did not satisfy the constraint were discarded. We observed that setting σslab > 8.0 led to numerical overflow issues during optimization, which is why 8.0 was the upper bound for σslab. For SPoSE, we used the same range as Zheng et al. (2019), with a finer grid of 64 values.
Optimal hyperparameters We found the optimal VICE hyperparameter combination through a two step procedure, for which the validation set was split into equally sized pruning and tuning sets. First, among the final 180 combinations (see Cartesian product above), we applied our pruning method (see Section 2.3.4) to each model and kept the subsets of dimensions that led to the lowest cross-entropy error on one half of the validation set, which we call pruning set. Second, we evaluated each model with pruned parameters on the other half of the validation set, which we refer to as tuning set. We defined the optimal hyperparameter combination as that with the lowest average cross-entropy error on the tuning set across twenty different random initializations. The optimal hyperparameter combinations for VICE were σspike = 0.125, σslab = 1.0, π = 0.4; for SPoSE, it was λ = 5.75.
A.2 OBJECT DIMENSIONS
Here, we display the top 10 objects for each of the 47 VICE dimensions, according to their absolute weight value. As we have done for every other experiment, we used the pruned median model to guarantee the extraction of a representative sample of object dimensions without being over-optimistic with respect to their interpretability (see Section 3.3.1 for how the median model was identified). For each dimension we collected human responses in a small survey with a sample of convenience (n = 9). The labels that are shown below each object dimension represent the most common answer across human responses, when they were asked to name the respective dimension. More than one label is displayed, whenever there was a tie in the most common response. Labels were edited for coherence across similar answers (e.g. "metallic" and "made of metal" were deemed to be the same answer).
While the illustrations for each dimension display the top 10 items, in the survey itself we showed a continuum of items selected from bins centered around the 25th, 50th, and 75th percentiles, in addition to the top items, together with a random set of items with close-to-zero weight in the dimension, in order to avoid biasing our results.
METALLIC FOOD
PLANTS ANIMAL
HOME CLOTHES
OUTDOOR WOOD; MADE OF WOOD
POINTY; ELONGATED BODY PARTS
VEHICLE; TRANSPORTATION EXQUISITE; TRADITIONAL
ELECTRONIC COLORFUL
ROUND; CIRCULAR MANY OBJECTS; COLLECTION
STATIONERY; OFFICE SPORTS; GAMES
DECORATIVE; BEAUTIFUL CONTAINER; DRINKS
MARINE; WATER RED
BATHROOM; HYGIENE WAR; WEAPON
BLACK DUST; GRAINY TEXTURE
SPHERICAL; ROUND GREEN
WHITE SKY; FLYING
FLOOR; PATTERN LINES; GRATING PATTERN
MUSIC; SOUND SKY; TALL
INSECTS; PESTS N/A
FIRE; SMOKE FOOT; FOOTWEAR
CHAIN; ROPE; STRAND YELLOW; ORANGE
EYEWEAR; EYES; FACE SPIKY; HAIRY
CYLINDRICAL; ELONGATED STRINGY; FIBROUS
BABY; CHILDREN MEDICAL; HEALTHCARE
ICE; COLD | 1. What is the focus and contribution of the paper on the odd-one-out task?
2. What are the strengths of the proposed approach, particularly in its application of variational models?
3. What are the weaknesses of the paper regarding its limited significance and lack of generalization verification?
4. How does the reviewer assess the relevance and novelty of the paper's content compared to other works in the field?
5. Are there any concerns about the representation learning approaches used in the paper, and how do they compare to other methods in natural language processing? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a prediction model for the odd-one-out task [Zheng 2019], where the goal is to identify which pair people find to be the most similar within the triplet of pictures. The proposed approach learns a variational model to approximate the distribution over the triplet. The model is compared to SPoSE [Zheng 2019] and shown to outperform on small dataset sizes.
Review
Strengths
The paper clearly describes the main idea of applying a variational approach to the concerned odd-one-out task. Compared to SPoSE, the proposed approach introduces a stochastic process to explicitly consider uncertainty, and switches the prior distribution to a Gaussian mixture to better fit the data. The motivation seems clear and the approach looks reasonable.
Weaknesses
The paper has limited significance due to the narrow focus on improvement to SPoSE [Zheng 2019]. The paper seems to only concern the benchmark performance on the single dataset of [Zheng 2019], which might simply be overfitting the model to a specific dataset. Even though the results indicate success in improving on smaller datasets (Fig 3), I do not understand how significant the result is without any generalization verification. In this sense, the paper fails to convey whether the proposed approach has a significant technical contribution.
There seems a recent work on proposing alternative benchmark.
B Roads and B Love, Enriching ImageNet With Human Similarity Judgments and Psychological Embeddings, CVPR 2021
It might be the case that there is a scientific value in improving the benchmark performance of the odd-one-out dataset [Zheng 2019], but as a reviewer, I do not have any background in judging the statement in Sec 1 saying “growing interest by cognitive scientists using SPoSE”.
On learning embedding representations, there are different attempts other than Gaussian.
L Vilnis et al, Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures, ACL 2018
O Ganea et al, Hyperbolic Entailment Cones for Learning Hierarchical Embeddings, ICML 2018
Although I do not deeply understand the cognitive science context, I feel this paper misses a formal discussion in terms of representation learning in the natural language processing community. Even the simplest comparison done in [Zheng 2019] to NLP baselines (synset or NNSE) is missing in this work.
There seems somewhat recent relevant work.
A Laverghetta et al, Can Transformer Language Models Predict Psychometric Properties?, STARSEM2021
S Derby et al, Feature2Vec: Distributional semantic modelling of human property knowledge, EMNLP 2019 |
ICLR | Title
VICE: Variational Inference for Concept Embeddings
Abstract
In this paper we introduce Variational Inference for Concept Embeddings (VICE), a novel method for learning object concept embeddings from human behavior in an odd-one-out task. We use variational inference to obtain a sparse, non-negative solution, with uncertainty information about each embedding value. We leverage this information in a statistical procedure for selecting the dimensionality of the model, based on hypothesis-testing over a validation set. VICE performs as well or better than previous methods on a variety of criteria: accuracy of predicting human behavior in an odd-one-out task, calibration to (empirical) human choice probabilities, reproducibility of object representations across different random initializations, and superior performance on small datasets. The latter is particularly important in cognitive science, where data collection is expensive. Finally, VICE yields highly interpretable object representations, allowing humans to describe the characteristics being represented by each latent dimension.
1 INTRODUCTION AND RELATED WORK
Human knowledge about object concepts encompasses many types of information, ranging from function to visual appearance, as well as encyclopedic facts or taxonomic characteristics. This knowledge supports the identification of objects, inferences about what interactions they support, or what the effects of such interactions in the environment will be. Key questions for cognitive scientists modelling human performance in experiments are 1) which of this information is accessible to participants and 2) how is it used across different tasks. Several studies (McRae et al., 2005; Devereux et al., 2013; Buchanan et al., 2019; Hovhannisyan et al., 2021) have asked subjects to list properties for hundreds to thousands of objects, yielding thousands of answers about the types of information above. Properties exist at many levels, ranging from categorization (e.g. "is an animal") to very specific facts (e.g. "is eaten in France"). Objects are implicitly represented as a vector of binary properties. This approach is agnostic to downstream prediction tasks, but does not provide an indication of which properties are more important – other than frequency of listing – and does not allow for graded property values. An alternative approach is for researchers to postulate dimensions of interest, and then ask human subjects to place each object in each dimension. An example is Binder et al. (2016), who collected ratings for hundreds of objects, as well as verbs and adjectives, in 65 dimensions reflecting sensory, motor, spatial, temporal, affective, social, and cognitive experiences.
The overall problem is then one of discovering a representation for objects that is not biased by a particular task, and is interpretable without requiring researchers to postulate the types of information represented. Several researchers have tried to develop interpretable concept representation spaces from text corpora, via word embeddings with positivity and sparsity constraints (Murphy et al., 2012), topic model representations of Wikipedia articles about objects (Pereira et al., 2013), transformations of word embeddings into sparse, positive spaces (Subramanian et al., 2018; Panigrahi et al., 2019) or predictions of properties (Devereux et al., 2013) or dimensions (Utsumi, 2020), or text corpora combined with imaging data (Fyshe et al., 2014) or with object images (Derby et al., 2018). Finally, Derby et al. (2019) introduced a neural network mapping the sparse feature space of a semantic property norm to the dense space of a word embedding, identifying informative combinations of properties or allowing ranking of candidate properties for arbitrary words.
Recently, Zheng et al. (2019) and Hebart et al. (2020) introduced SPoSE, a model of the mental representations of 1,854 objects in a 49-dimensional space. The model was derived from a dataset of
1.5M Amazon Mechanical Turk (AMT) judgments of object similarity, where subjects were asked which of a random triplet of objects was the odd one out. The model embedded each object as a vector in a space where each dimension was constrained to be sparse and positive. Triplet judgments were predicted as a function of the similarity between embedding vectors of the three objects considered. The authors showed that these dimensions were predictable as a combination of elementary properties in the Devereux et al. (2013) norm, which often co-occur across many objects. Hebart et al. (2020) further showed that 1) human subjects could coherently label what the dimensions were “about”, ranging from categorical (e.g. is animate, food, drink, building) to functional (e.g. container, tool) or structural (e.g. made of metal or wood, has inner structure). Subjects could also predict what dimension values new objects would have, based on knowing the dimension value for a few other objects. These results suggest that SPoSE captures core object knowledge that subjects use. Navarro & Griffiths (2008) introduced a related method for learning semantic concept embeddings from similarity data, which infers the number of latent dimensions using the Indian Buffet Process (IBP, Griffiths & Ghahramani (2011)), but their approach is not directly applicable to our setting due to reliance on continuous-valued similarity ratings instead of forced-choice behavior. Furthermore, it is known to be challenging to scale the IBP to the number of features and observations considered in our work (Ghahramani, 2013). Roads & Love (2021) introduced a related method for deriving an object embedding from behavior in a 8-rank-2 task. Their method aimed to predict behavior from the embeddings, using active sampling to query subjects with the most informative stimuli. The method was not meant to produce interpretable dimensions, but rather construct object similarity matrix as efficiently as possible.
There is growing interest by cognitive scientists in using SPoSE, as it makes it possible to discover an item representation for any kind of item amenable to an odd-one-out comparison in a triplet task. Furthermore, the combination of positivity and sparsity constraints in each dimension of the representation leads to interpretability by human subjects: no item is represented by every dimension, and most dimensions are present for only a few items. That item representation can then be used within other behavioral prediction models, to make predictions about neuroimaging data, etc.
For this potential to be realized, however, we believe a number of issues with SPoSE should be addressed. The first is the use of an l1 sparsity penalty to promote interpretability of dimensions. l1 achieves sparsity at the cost of unnecessarily shrinking larger values (Belloni & Chernozhukov, 2013). In SPoSE, 6-11 dominant dimensions for an object account for most of the prediction performance; the cost of removing irrelevant dimensions is to potentially make dominant dimensions smaller than they should be, and affect performance. Second, the l1 penalty is analogous to having a Laplace prior over those values. If we consider the distribution of values across objects for the two most important SPoSE dimensions, in Figure 1, we can see that they have a bimodal distribution, with a spike around 0 and a much smaller, wide slab of probability for non-zero values, which is not Laplace. Overcoming this "wrong" prior requires more data than strictly necessary to learn the representation. SPoSE was developed with a dataset that was orders of magnitude larger than what a typical experiment might collect, but it was never tested on smaller datasets. Finally, SPoSE uses a heuristic, subjective criterion for determining how many dimensions the solution should have.
In this paper we introduce VICE, an approach for variational inference of object concept embeddings in a space with interpretable sparse, positive dimensions, which addresses the SPoSE issues identified above. Specifically, we encourage sparsity and small weights by using a spike-and-slab prior. This is more appropriate than a Laplace prior, because importance – the value an object takes in a dimension – is different from relevance – whether the dimension matters for that object – and they can be
controlled separately with a spike-and-slab prior. The prior hyperparameters are meant to be intuitive to a user, and to make it easier to specify hypotheses about dimensional structure. We use variational Bayes both because it is a Bayesian approach, and also because it assumes a unimodal posterior for the loading of each object in each dimension. It also allows a more principled procedure for determining how many dimensions the model should have, by taking into account uncertainty about their values. We compare our model with SPoSE over different subsets of the dataset used to develop it, and verify that it performs as well or better by various criteria: prediction of behavior, calibration of the prediction of decision probabilities, and reproducibility of solutions across seeds. Importantly, it has significantly better performance on smaller datasets (5− 10% of the original SPoSE dataset). Our implementation of VICE is available on GitHub1, and will be de-anonymized upon acceptance.
2 METHODS
2.1 ODD-ONE-OUT TASK
The odd-one-out task is motivated by the problem of discovering object embeddings based on similarity judgments involving a set of m different object concepts, which we will denote by c1, . . . , cm (e.g. c1 = ‘aardvark’, . . . , c1854 = ‘zucchini’). These similarity judgments are collected from human participants, who are given queries which consist of a ‘triplet’ of three concepts {ci1 , ci2 , ci3}, for instance, {c268, c609, c1581} = {‘suit’, ‘flamingo’, ‘car’}. Participants are asked to consider the three pairs within the triplet {(ci1 , ci2), (ci1 , ci3), (ci2 , ci3)}, and to decide which item has the smallest similarity to the other two (the "odd-one-out"). This is equivalent to choosing the pair with the greatest similarity. Let (y1, y2) denote the indices in this pair, e.g. for ‘suit’ and ‘flamingo’ they would be (y1, y2) = (268, 609). A dataset D is a set of N pairs of concept triplets and one-hot vectors that correspond to the indices of the two most similar concepts, i.e. ({ci1 , ci2 , ci3}, (yi1 , yi2)).
2.2 SPOSE
Sparse Positive object Similarity Embedding (SPoSE) (Zheng et al., 2019) is an approach for finding interpretable item dimensions from an odd-one-out task. It does so by finding an embedding vector xi = (xi1, . . . , xip) for every item ci. The similarity Sij of two items (e.g. ci and cj) is computed as the dot product of the corresponding embeddings (i.e. xi and xj), Sij = 〈xi, xj〉. From these similarities, the probability of choosing (yi1 , yi2) as the most similar pair of items, given the item triplet {ci1 , ci2 , ci3} and the embedding vectors {xi1 , xi2 , xi3}, is computed as:
p((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}, \{x_{i_1}, x_{i_2}, x_{i_3}\}) = \frac{\exp(S_{y_{i_1}, y_{i_2}})}{\exp(S_{i_1, i_2}) + \exp(S_{i_1, i_3}) + \exp(S_{i_2, i_3})} \quad (1)
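To make the choice model concrete, the following minimal NumPy sketch computes the pairwise dot-product similarities and the softmax choice probability of Equation (1) for a single query. It is our own illustration rather than the authors' released code, and names such as `X` and `triplet` are assumptions.

```python
import numpy as np

def choice_probabilities(X, triplet):
    """Probability of each pair in `triplet` being chosen as the most similar.

    X       : (m, p) non-negative embedding matrix, one row per concept.
    triplet : indices (i1, i2, i3) of the three concepts in the query.
    """
    i1, i2, i3 = triplet
    pairs = [(i1, i2), (i1, i3), (i2, i3)]
    # Dot-product similarity S_ij = <x_i, x_j> for each candidate pair.
    sims = np.array([X[a] @ X[b] for a, b in pairs])
    probs = np.exp(sims - sims.max())          # numerically stabilized softmax over the 3 pairs
    return pairs, probs / probs.sum()
```

The pair with the highest probability is the predicted "most similar" pair; the remaining item is the predicted odd-one-out.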
SPoSE uses maximum a posteriori (MAP) estimation to find the most likely embedding given the training data and a prior:
\arg\max_X \; \log p(X \mid D_{\text{train}}) = \arg\max_X \; \big[\log p(D_{\text{train}} \mid X) + \log p(X)\big] \quad (2)
1Link to anonymous GitHub repository: https://anonymous.4open.science/r/VICE-59F0
where X is a matrix containing the embedding vectors for all of the items, p(X) is a prior over the embeddings, and the per-triplet likelihood p(D_{train,j} | X) is defined in (7). To induce sparsity in the embeddings, SPoSE uses a mean-field Laplace prior, leading to the objective:
\arg\max_X \; \sum_{j=1}^{n_{\text{train}}} \log p(D_{\text{train},j} \mid X) \;-\; \lambda \sum_{i=1}^{m} \|x_i\|_1 \quad (3)
Here, \| \cdot \|_1 is the l1 norm, so \|x\|_1 = \sum_{f=1}^{p} |x_f|, and x_f \geq 0 for f = 1, . . . , p. The regularization parameter, λ, is selected out of a grid of candidate values by choosing the one that achieves the lowest (average) cross-entropy on the validation set (across twenty random seeds). The final dimensionality of the embedding, p, is determined heuristically from the data. If p is set to be larger than the number of dimensions supported by the data, the SPoSE algorithm will shrink entire dimensions towards zero by removing weights with a magnitude less than a given absolute threshold. While a threshold of 0.1 is suggested (Zheng et al., 2019), no justification is given for that particular value, which is problematic given that the number of dimensions removed is quite sensitive to that choice.
2.3 VICE
2.3.1 VARIATIONAL BAYESIAN INFERENCE
Given the goal of better approximating p(X|Dtrain), we use variational inference. We approximate p(X|Dtrain) with a variational distribution, qθ(X), where q is our chosen family of distributions and θ is a parameter that is learned so as to minimize the Kullback–Leibler (KL) divergence to the true posterior, p(X|Dtrain). In variational inference, the resulting objective function is:
\arg\min_\theta \; \mathbb{E}_{q_\theta(X)}\!\left[ \frac{1}{n_{\text{train}}}\big(\log q_\theta(X) - \log p(X)\big) - \frac{1}{n_{\text{train}}}\sum_{i=1}^{n_{\text{train}}} \log p(D_{\text{train},i} \mid X) \right] \quad (4)
In order to use variational inference, a parametric variational distribution must be chosen. For VICE, we use a Gaussian variational distribution with a diagonal covariance matrix qθ(X) = N (µ, diag(σ2)), where the learnable parameters θ are µ and σ. This means that each embedding dimension has a mean, the most likely value for that dimension, and a standard deviation, the propensity of the embedding value to be close to the mean.
Similarly to Blundell et al. (2015), we use a Monte Carlo (MC) approximation of the above objective function by sampling a limited number of Xs from qµ,σ(X) during training. We generate X by means of the reparameterization trick (Kingma & Welling, 2013), X_{\theta,\epsilon} = \mu + \sigma \cdot \epsilon, where \epsilon is an N × p matrix of standard normal variates, leading to the objective:
\arg\min_\theta \; \frac{1}{m}\sum_{j=1}^{m}\left[ \frac{1}{n_{\text{train}}}\big(\log q_\theta(X_{\theta,\epsilon_j}) - \log p(X_{\theta,\epsilon_j})\big) - \frac{1}{n_{\text{train}}}\sum_{i=1}^{n_{\text{train}}} \log p(D_{\text{train},i} \mid [X_{\theta,\epsilon_j}]_+) \right] \quad (5)
where \epsilon_j \in \mathbb{R}^{N \times p} is entrywise \mathcal{N}(0, 1) and [\cdot]_+ is the ReLU function. As commonly done in the dropout and Bayesian neural network literature (Srivastava et al., 2014; Blundell et al., 2015; Gal & Ghahramani, 2016; McClure & Kriegeskorte, 2016), we set m to 1 for computational efficiency.
In Equation 5, the expected log-likelihood of the entire training data is computed. However, using the entire training data set to compute the gradient update often works poorly for non-convex objective functions. This is due to the expensive computational cost of each update and to the convergence to poorly generalizing solutions (Smith et al., 2020). As a result, we stochastically approximate (Robbins & Monro, 1951) the training log-likelihood using random subsets (i.e. mini-batches) of the training dataset, with each mini-batch consisting of b triplets. This leads to the final objective
\arg\min_\theta \; \frac{1}{n_{\text{train}}}\big(\log q_\theta(X_{\theta,\epsilon}) - \log p(X_{\theta,\epsilon})\big) - \frac{1}{b}\sum_{i=1}^{b} \log p(D_{\text{train},i} \mid [X_{\theta,\epsilon}]_+) \quad (6)
recalling that
p(D_{\text{train},i} \mid X) = \frac{\exp(x_{y_{1,i}}^{\top} x_{y_{2,i}})}{\exp(x_{i_{1,i}}^{\top} x_{i_{2,i}}) + \exp(x_{i_{1,i}}^{\top} x_{i_{3,i}}) + \exp(x_{i_{2,i}}^{\top} x_{i_{3,i}})} \quad (7)
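The mini-batch objective in (6) is straightforward to implement with automatic differentiation. Below is a minimal PyTorch sketch of one VICE-style training step, written for illustration rather than as the authors' exact implementation; the tensor shapes, the `log_prior` argument (for example, the spike-and-slab log-density sketched in §2.3.2 below), and the learning setup are assumptions.

```python
import torch

def vice_step(mu, log_sigma, triplets, choices, n_train, log_prior, optimizer):
    """One stochastic update of the objective in (6).

    mu, log_sigma : (N, p) variational parameters (requires_grad=True).
    triplets      : (b, 3) long tensor of concept indices per query.
    choices       : (b,) index in {0, 1, 2} of the chosen (most similar) pair.
    log_prior     : callable mapping an (N, p) sample to a scalar log p(X).
    """
    sigma = log_sigma.exp()
    eps = torch.randn_like(mu)
    sample = mu + sigma * eps                   # reparameterized sample X_{theta,eps}
    X = torch.relu(sample)                      # rectified embeddings [X]_+

    xa, xb, xc = X[triplets[:, 0]], X[triplets[:, 1]], X[triplets[:, 2]]
    # Logits of the three candidate pairs (i1,i2), (i1,i3), (i2,i3), as in (7).
    logits = torch.stack([(xa * xb).sum(-1), (xa * xc).sum(-1), (xb * xc).sum(-1)], dim=-1)
    nll = torch.nn.functional.cross_entropy(logits, choices)   # (1/b) sum of -log p

    # Complexity term of (6), scaled by 1/n_train; q is a diagonal Gaussian.
    q = torch.distributions.Normal(mu, sigma)
    complexity = (q.log_prob(sample).sum() - log_prior(sample)) / n_train

    loss = nll + complexity
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```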
2.3.2 SPIKE-AND-SLAB PRIOR
A key feature of SPoSE is sparsity. As discussed above, SPoSE induced sparsity using a zero-mean Laplace prior. We can empirically examine whether the Laplace prior is a realistic assumption, given the distribution of values in SPoSE dimensions. As Figure 1 depicts, these histograms do not resemble a Laplace distribution. Instead, it looks like there is a "spike" of probability at zero and a much smaller, but wide, "slab" of probability for the non-zero values of a SPoSE dimension. To model this, we use a spike-and-slab Gaussian mixture prior, as introduced in Blundell et al. (2015):
p(X) = \prod_{i=1}^{N}\prod_{f=1}^{p}\Big(\pi\,\mathcal{N}(x_{if};\, 0, \sigma^2_{\text{spike}}) + (1-\pi)\,\mathcal{N}(x_{if};\, 0, \sigma^2_{\text{slab}})\Big) \quad (8)
This prior has three parameters. π is the probability that an embedding weight is drawn from the spike Gaussian rather than the slab Gaussian. The standard deviations σspike and σslab control how likely an embedding value is to be close to 0 under the spike and slab components, respectively. xif is the embedding weight for the ith item in the f th dimension. Since the spike and slab components are mathematically interchangeable, by convention we require that σspike ≪ σslab. In our experiments, these hyperparameters are chosen with a grid search on one half of the validation set, the “tuning set”. (The other half of the validation set, the “pruning set”, is used for dimensionality reduction, as we describe in §2.3.4.)
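For completeness, the hedged sketch below shows one way to implement the log-density of the prior in (8) in PyTorch; it can serve as the `log_prior` callable assumed in the training-step sketch above. The `logsumexp` formulation is our choice for numerical stability, and the default hyperparameters are the optimal values reported in Appendix A.1.

```python
import math
import torch

def spike_and_slab_log_prior(X, pi=0.4, sigma_spike=0.125, sigma_slab=1.0):
    """Log-density of the Gaussian spike-and-slab mixture prior in (8).

    X : (N, p) tensor of embedding values (the unrectified sample X_{theta,eps}).
    Returns the scalar sum over all entries of
    log(pi * N(x; 0, s_spike^2) + (1 - pi) * N(x; 0, s_slab^2)).
    """
    def log_normal(x, sigma):
        return -0.5 * (x / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

    log_spike = math.log(pi) + log_normal(X, sigma_spike)
    log_slab = math.log(1.0 - pi) + log_normal(X, sigma_slab)
    # Entrywise log of the two-component mixture, then summed over items and dimensions.
    return torch.logsumexp(torch.stack([log_spike, log_slab]), dim=0).sum()
```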
2.3.3 PREDICTING THE ODD-ONE-OUT USING VICE
In this work, we consider two different prediction problems given a new triplet: (1) predicting the choice and (2) predicting the distribution of the choice. In either case, we start with computing the posterior probability distribution over the three triplet choices. If predicting a choice, then we output the choice with the maximum posterior probability. (For details on how we handle ties, see §3.3.1.) If the goal is to predict the distribution, then we return the predicted distribution.
The predicted probability distribution is computed from the variational posterior, qθ(X). When making predictions, we want to compute the probability of an odd-one-out for a given triplet. We approximate this probability by using an MC estimate from m samples X^j = X_{\theta,\epsilon_j} for j = 1, . . . , m (Graves, 2011; Blundell et al., 2015; Kingma & Welling, 2014; McClure & Kriegeskorte, 2016; Blei et al., 2017). Mathematically, this means that we compute the predicted distribution as
\hat{p}\big((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}\big) \approx \frac{1}{m}\sum_{j=1}^{m} p\big((y_{i_1}, y_{i_2}) \mid \{c_{i_1}, c_{i_2}, c_{i_3}\}, \{[x^j_{i_1}]_+, [x^j_{i_2}]_+, [x^j_{i_3}]_+\}\big) \quad (9)
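A direct way to realize this MC estimate is sketched below in PyTorch (our own paraphrase; the function and argument names are assumptions). The number of samples m = 50 matches the value used in §3.3.2.

```python
import torch

@torch.no_grad()
def predict_choice_distribution(mu, log_sigma, triplet, m=50):
    """MC estimate of the choice distribution in (9) for one triplet.

    mu, log_sigma : (N, p) variational parameters of q_theta(X).
    triplet       : (i1, i2, i3) concept indices.
    m             : number of posterior samples.
    """
    sigma = log_sigma.exp()
    i1, i2, i3 = triplet
    probs = torch.zeros(3)
    for _ in range(m):
        X = torch.relu(mu + sigma * torch.randn_like(mu))   # rectified posterior sample
        a, b, c = X[i1], X[i2], X[i3]
        logits = torch.stack([a @ b, a @ c, b @ c])          # pairs (i1,i2), (i1,i3), (i2,i3)
        probs += torch.softmax(logits, dim=0)
    return probs / m   # predicted distribution; its argmax is the predicted pair
```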
2.3.4 DIMENSIONALITY REDUCTION FOR VICE
For interpretability purposes, it is crucial that the model does not use more dimensions than necessary. Zheng et al. (2019) accomplished this through the sparsity-inducing penalty that causes dimensions to shrink towards zero if they do not contribute to explaining the data. (see Section 2.2). Note, however, that these uninformative weights do not totally go to zero because of noise in the gradients. Hence, these dimensions were pruned by choosing a threshold for the L1 norm of the dimension based on looking at the “elbow plot” of the sorted L1 norms of the dimensions; this approach is subjective and highly dependent on the specific dataset. In VICE, the KL penalty we use has a similar effect of causing uninformative weights to shrink. Rather than using a user-defined threshold to prune dimensions, VICE exploits the uncertainty information obtained in training the model to select a set of informative dimensions. The pruning procedure consists of three steps: (1) assigning an importance score to each dimension; (2) clustering dimensions by importance; and (3) choosing the subset of clusters that best explains the validation set. We describe each of these three steps in detail below.
Assign an importance score to each dimension Intuitively, the importance score reflects the number of objects that we can confidently say have non-zero weight in a dimension. To compute the score, we start by using the variational embedding for each item i – location µij and scale σij parameters, to compute the posterior probability that the weight will be truncated to zero according to the left tail of a Gaussian distribution with that location and scale (as described in §2.3.1). This gives us a posterior probability of the weight taking the value zero for each item within a dimension (Graves, 2011). To calculate the overall importance of a dimension, we estimate the number of items that plausibly have non-zero weights given a user-specified False Discovery Rate target (FDR) (Benjamini & Hochberg, 1995). FDR provides a method for inferring the number of hypotheses which are non-null, based on an array of p-values, with statistical guarantees on the expected proportion of false rejections. We define dimension importance as the number of rejections given by the BH(q) algorithm, with the FDR tolerance q specified by the user, using the posterior zero-probabilities as the p-values.
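As a rough illustration of this scoring step, the sketch below computes, for a single dimension, the posterior probability that each item's weight is truncated to zero (the Gaussian mass below zero) and then counts Benjamini-Hochberg rejections at a user-chosen FDR level q. It is our own paraphrase of the procedure, and the helper names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def dimension_importance(mu_f, sigma_f, q=0.05):
    """Importance of one dimension f: number of items confidently non-zero.

    mu_f, sigma_f : (N,) posterior location and scale of every item's weight in f.
    q             : FDR tolerance for the Benjamini-Hochberg procedure.
    """
    # Posterior probability that the rectified weight is zero, i.e. P(w_if <= 0).
    p_zero = norm.cdf(0.0, loc=mu_f, scale=sigma_f)

    # Benjamini-Hochberg: treat p_zero as p-values and count rejections at level q.
    p_sorted = np.sort(p_zero)
    n = len(p_sorted)
    below = p_sorted <= q * (np.arange(1, n + 1) / n)
    return int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
```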
Cluster dimensions by importance using a Gaussian mixture model Given the importance scores from the previous step, a reasonable approach would be to sort dimensions by importance and then use the left-out half of the validation set, or “pruning set”, to determine the k most important dimensions to include. However, we found that this approach led to high variance, due to the existence of groups of dimensions with very similar importance scores. We hypothesized that these groups of dimensions corresponded to different feature types, as observed in Zheng et al. (2019). As McRae et al. (2005) discusses, these features can be grouped into different feature types, such as categorical, functional, encyclopedic, visual-perceptual and non-visual-perceptual. Therefore, the second step in our pruning method creates clusters of dimensions that have similar importance. We fit GMMs with a varied number of components k (e.g. k ∈ {1, 2, ..., 6}) to the importance scores of the dimensions, and pick the number of components/modes that yields the lowest Bayesian Information Criterion (BIC). Here, we limit the number of possible clusters to 6, as a conservative estimate of the number of distinct feature types (e.g. categorical, functional, perceptual) with possibly differing sparsity ranges; categorical features may apply to a large subset of items, while specific visual features might apply only to a handful. We then cluster dimensions into the k modes.
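A compact way to carry out this model selection with scikit-learn is sketched below; assuming `scores` holds one importance value per remaining dimension, we fit one-dimensional Gaussian mixtures for k = 1..6 and keep the cluster labels from the lowest-BIC fit. This is our reading of the procedure, not the authors' exact code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_by_importance(scores, max_k=6, seed=0):
    """Cluster dimension-importance scores with the lowest-BIC Gaussian mixture."""
    scores = np.asarray(scores, dtype=float).reshape(-1, 1)   # (n_dims, 1)
    best_labels, best_bic = None, np.inf
    for k in range(1, max_k + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(scores)
        bic = gmm.bic(scores)
        if bic < best_bic:
            best_bic, best_labels = bic, gmm.predict(scores)
    return best_labels   # cluster assignment for every dimension
```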
Choosing the subset of dimension clusters that best explains the validation set We find the best non-empty subset of clusters of dimensions, in terms of cross-entropy on the validation “pruning” set, and prune all clusters of dimensions outside of this subset. (If a given feature is uninformative, then features with similar importance scores are likely to be similarly uninformative.)
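The final selection step can be written as an exhaustive search over non-empty subsets of clusters, as in the hedged sketch below; `val_loss_fn` stands in for evaluating the model restricted to the kept dimensions on the pruning set, and all names are our own assumptions.

```python
from itertools import combinations

def best_cluster_subset(cluster_ids, labels, val_loss_fn):
    """Pick the non-empty subset of importance clusters minimizing pruning-set loss.

    cluster_ids : iterable of distinct cluster labels (from the GMM step).
    labels      : array mapping each dimension to its cluster label.
    val_loss_fn : callable taking a list of kept dimension indices and returning
                  the cross-entropy of the pruned model on the pruning set.
    """
    best_dims, best_loss = None, float("inf")
    clusters = list(cluster_ids)
    for r in range(1, len(clusters) + 1):
        for subset in combinations(clusters, r):
            dims = [d for d, lab in enumerate(labels) if lab in subset]
            loss = val_loss_fn(dims)
            if loss < best_loss:
                best_loss, best_dims = loss, dims
    return best_dims
```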
3 EXPERIMENTS
3.1 DATA
We used two datasets from Zheng et al. (2019), selected after quality control. The first contained judgments on 1,450,119 randomly selected triplets. We used a random subsample of 90% of these triplets for the training set, and the remaining 10% for the validation set (tuning and pruning). The second was an independent test set of 19,968 triplets with 25 repeats for each of 1,000 randomly selected triplets; none of these were present in the training set. Having this many repeats allows us to be confident of the response probability for each triplet. Furthermore, it allows us to establish a model-free estimate of the Bayes accuracy, the best possible accuracy achievable by any model.
3.2 EXPERIMENTAL SETUP
Training We implemented both SPoSE and VICE in PyTorch (Paszke et al., 2019) using Adam (Kingma & Ba, 2015) with α = 0.001. To guarantee a fair comparison between VICE and SPoSE, each model configuration was trained using 20 different random seeds, for a fixed number of 1000 epochs. Each model was initialized with a weight matrix, W ∈ RD×N , where D was set to 100 and N refers to the number of unique items in the dataset (i.e., 1854). In preliminary experiments, we observed that, after pruning, no model was left with a latent space of more than 100 dimensions, which is why we did not consider models with higher initial dimensionality.
Other details Please see section §A.1 for weight initialization and hyperparameter tuning.
3.3 PREDICTION EXPERIMENTS
3.3.1 EVALUATION MEASURES
Prediction accuracy Since human triplet choices are represented as three-dimensional one-hot vectors, where 1 represents the odd-one-out choice for a particular triplet, it is simple to compare them with model choices. The choice of a model is computed as argmax p(ŷ|θ), where p(ŷ|θ) refers to a model’s softmax probability distribution over a triplet given the model parameters (see Equation 9). If there is a tie in the softmax output, we regard this as an incorrect choice. A model can either be correct or incorrect, and no partial credit is given, guaranteeing a conservative measure of a model’s prediction behavior. The reported prediction accuracy is the fraction of trials where the model predicted the correct odd-one-out item. We can get an estimated upper bound on the Bayes accuracy, i.e., the best possible accuracy of any model, by using the repeats in the independent test set. As the optimal model predicts the repeat majority outcome for any triplet, this accuracy ceiling – 0.673 – is the average probability of the majority outcome over the set of all triplets.
Predicting Human Uncertainty The triplet task is subjective: there is no single correct answer to any given triplet, and across repeats subjects often give all three possible answers. The independent test set gives us the probability distribution over answers for each triplet, which provides graded information about the relative similarities of the three item pairs. Predicting this distribution precisely is a more stringent test of model quality than prediction accuracy, and of even more relevance in cognitive science applications. We quantify this through the KL divergence between the softmax probabilities of a model (see Section 2.3.3) and the empirical human probability distributions, obtained by computing discrete probability distributions for triplet repeats on the independent test set (see Section 3.1). We use the KL divergence because it is a commonly used measure for assessing the similarity between two probability distributions.
3.3.2 EXPERIMENT RESULTS
Full dataset We compared pruned median models of VICE and SPoSE, where the median model was identified by the median cross-entropy error on the tuning set. For VICE, we set the number of MC samples to m = 50 (see Equation 9) . On the independent test set, VICE and SPoSE achieved a similarly high prediction accuracy of 0.6380 and 0.6378, respectively, versus a chance-level accuracy of 0.333(3). Likewise, VICE and SPoSE achieved similar KL-divergences of 0.103 and 0.105, respectively, versus a chance-level KL-divergence of 0.366. The differences between the median model predictions across individual triplets in the test set were not statistically significant, under the null hypothesis according to a two-sided paired t-test, for either accuracy or KL-divergence. Hence, VICE and SPoSE predicted triplets equally well when they were both trained on the full dataset. This is not surprising, as Bayesian methods based on MC sampling become more like deterministic Maximum Likelihood Estimation (MLE) the more training data is available, per the Bernstein-von Mises theorem (Doob, 1949). As a result, the effects of the prior are most prominent when models are trained on datasets where ntrain is not particularly large, as we will see in the next section.
Efficiency on smaller datasets Performance on small datasets is especially important in cognitive science, where behavioral experiments often have low sample sizes (e.g. tens to hundreds of volunteer
in-lab subjects) or can be costly to scale on AMT (Amazon Mechanical Turk). To test whether VICE can model the data better than SPoSE when data are scarce, we created non-overlapping subsets of the training dataset. Specifically, we did this for subsets with sizes equal to 5%, 10%, 20%, and 50% of the dataset, yielding 20, 10, 5, and 2 subsets, respectively. Validation and test sets were unchanged. In Figure 3, we show the average prediction accuracy and KL divergence across random seeds, for models trained on every dataset size, including the full training set. Averages were computed across both random seeds and training subsets, where the average over random seeds was computed first to get a per-subset estimate; performance across subsets was then used to compute the confidence intervals (CIs). Figure 3 shows that the difference in prediction accuracy and KL divergence between VICE and SPoSE became more pronounced the fewer triplet samples were used for training. The difference was striking for the 5% and 10% data subsets, with ≈67,500 and ≈135,000 triplets, respectively. In the former, SPoSE predicted at chance-level; in the latter, it showed a large variation between random seeds and data splits, as can be seen in the 95% CIs in Figure 3. In both low-resource scenarios, VICE showed a compellingly small variation in the two performance metrics across random seeds, and predicted much better than chance-level. The differences between VICE and SPoSE for the 5% and 10% subsample scenarios were statistically significant according to a two-sided paired t-test (p < 0.001), comparing individual triplet predictions between the pruned median models.
3.4 REPRODUCIBILITY EXPERIMENTS
Beyond predictive performance, a key criterion for learning concept representations is reproducibility, i.e., learning similar representations when using different random initializations on the same training data. To assess this, we compare 20 differently initialized VICE and SPoSE models.
The first aspect of reproducibility is finding similar numbers of dimensions, quantified as the standard deviation of that number across all 20 models. As shown in Table 1, VICE identified fewer dimensions than SPoSE, and this number had a lower standard deviation across models (1.64 vs 2.30). The difference in standard deviation is, however, not statistically significant according to a two-sided F-test (F = 0.516, df = 19, p = 0.918). The second aspect is the extent to which the dimensions identified are similar across initializations. Since the embedding is not an ordered set of dimensions, we deem a dimension learned in one VICE model reproducible if it is present in another independently trained instance of VICE, perhaps in a different column or with some small perturbation to the weights. To evaluate the number of highly reproducible dimensions we match each embedding dimension of a given initialization (after the pruning step) with the most similar embedding dimension (in terms of Pearson correlation) of a second initialization. Given 20 differently initialized models, we quantify the reproducibility of a dimension as the average Pearson correlation between that dimension and its best match across the 19 remaining models. In Table 1, we report the average number of dimensions with a Pearson correlation > 0.8 across the 20 initializations. Selected dimensions are similarly reproducible between VICE and SPoSE (see Table 1). Finally, we investigated whether our uncertainty-based pruning procedure selects reproducible dimensions. We compared the average reproducibility of selected dimensions with the average reproducibility of pruned dimensions, which are discarded by the procedure. The average reproducibility of the best subset, i.e., the fraction of dimensions with Pearson’s r > 0.8, is 79.00%, whereas that of the dimensions that were pruned was 0.00%; i.e., our procedure is highly accurate at identifying the dimensions that reproduce reliably.
3.5 INTERPRETABILITY
One of the benefits of SPoSE is the interpretability of the dimensions of its concept embeddings, induced by sparsity and positivity constraints, and empirically tested through experiments in Hebart et al. (2020). VICE constrains the embeddings to be sparse through the spike-and-slab prior, and imposes a non-negativity constraint through applying a rectifier on its latent representations. This
means that, just as in SPoSE, it is easy to sort objects within a VICE dimension by their absolute weight values in descending order, to obtain human judgments of what a dimension represents. In Figure 4, we show the top six objects for four example VICE dimensions of the pruned median model, representing categorical, functional, structural, and visual information. In Appendix A.2 we show the top ten objects for every VICE dimension. Redoing the SPoSE dimension labeling experiments is beyond the scope of this paper, but we provide dimension labels from a small survey in the Appendix.
4 DISCUSSION
In this paper, we introduced VICE, a novel approach for embedding concepts in a non-negative, sparse space, and using those embeddings to predict human behavior in an odd-one-out task. We solve the same problem as an existing method, SPoSE, but using variational inference and a spike-and-slab prior, which is more appropriate for this modeling situation. VICE yields uncertainty information about the solution, enabling a statistical procedure to automatically determine the number of embedding dimensions, as opposed to the data-dependent heuristics that were used in SPoSE. VICE performs as well as SPoSE in terms of accurately predicting human decisions in an odd-one-out task and modeling the probability distribution over those decisions, but using fewer dimensions. However, this is the case only for the large dataset that was originally used to develop SPoSE. VICE performs substantially better than SPoSE on smaller datasets. Moreover, VICE is more stable than SPoSE, as the dimensionality of the embeddings varies less across random initializations. We believe these improvements stem from the combination of the prior and the dimension selection procedure.
We developed VICE with the goal of making it easier to build interpretable embedding spaces to model any type of item, by using an odd-one-out task. We require fewer participants than SPoSE due to higher data efficiency, which makes behavioral experiments more feasible. Our procedure for determining the number of dimensions aims at removing subjectivity, as the scientific motivation often is to discover the minimum number of latent factors required to describe observations. Note that while the user does need to choose the FDR tolerance q, this can be done before looking at any data, based on their degree of conservatism with regards to controlling false discoveries. Hence, it does not cause the problems associated with data-dependent tuning parameters such as regularization parameters or absolute thresholds. Finally, the spike-and-slab prior we use in VICE has intrinsically meaningful hyperparameters, which makes it easier for researchers to specify competing hypotheses about the representation space being studied.
A APPENDIX
A.1 EXPERIMENTAL SETUP
Weight initialization We initialized the weights of the encoder for the means of the distributions, Wµ, following a Kaiming He initialization (He et al., 2015). The weights of the encoder for the logarithm of the scales of the distributions, Wlog (σ), were initialized with = − 1sW0µ , such that W 0log(σ) = 1. This initialization allowed us to avoid bias terms within the linear transformations of the encoders, and additionally ensured, through computing σ = exp (log (σ)), that σ is a small continuous number in R+ at the beginning of training.
Hyperparameter grid To find the optimal VICE hyperparameter combination, we performed a grid search over π, σspike, σslab (see Equation 8). The final grid was the Cartesian product of these parameter sets: π = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, σspike = {0.125, 0.25, 0.5, 1.0, 2.0}, σslab = {0.25, 0.5, 1.0, 2.0, 4.0, 8.0}, subject to the constraint σspike ≪ σslab, where combinations that did not satisfy the constraint were discarded. We observed that setting σslab > 8.0 led to numerical overflow issues during optimization, which is why 2^3 = 8 was the upper bound for σslab. For SPoSE, we use the same range as Zheng et al. (2019), with a finer grid of 64 values.
Optimal hyperparameters We found the optimal VICE hyperparameter combination through a two step procedure, for which the validation set was split into equally sized pruning and tuning sets. First, among the final 180 combinations (see Cartesian product above), we applied our pruning method (see Section 2.3.4) to each model and kept the subsets of dimensions that led to the lowest cross-entropy error on one half of the validation set, which we call pruning set. Second, we evaluated each model with pruned parameters on the other half of the validation set, which we refer to as tuning set. We defined the optimal hyperparameter combination as that with the lowest average cross-entropy error on the tuning set across twenty different random initializations. The optimal hyperparameter combinations for VICE were σspike = 0.125, σslab = 1.0, π = 0.4; for SPoSE, it was λ = 5.75.
A.2 OBJECT DIMENSIONS
Here, we display the top 10 objects for each of the 47 VICE dimensions, according to their absolute weight value. As we have done for every other experiment, we used the pruned median model to guarantee the extraction of a representative sample of object dimensions without being over-optimistic with respect to their interpretability (see Section 3.3.1 for how the median model was identified). For each dimension we collected human responses in a small survey with a sample of convenience (n = 9). The labels that are shown below each object dimension represent the most common answer across human responses, when they were asked to name the respective dimension. More than one label is displayed, whenever there was a tie in the most common response. Labels were edited for coherence across similar answers (e.g. "metallic" and "made of metal" were deemed to be the same answer).
While the illustrations for each dimension display the top 10 items, for our survey, in order to avoid biasing our results, we actually show a continuum of items selected from bins centered around the 25th, 50th, and 75th percentiles in addition to top items, and a random set of items with weights close to zero in the dimension.
The survey labels for the 47 dimensions, listed in order (the per-dimension image grids are omitted here):
METALLIC · FOOD · PLANTS · ANIMAL · HOME · CLOTHES · OUTDOOR · WOOD; MADE OF WOOD · POINTY; ELONGATED · BODY PARTS · VEHICLE; TRANSPORTATION · EXQUISITE; TRADITIONAL · ELECTRONIC · COLORFUL · ROUND; CIRCULAR · MANY OBJECTS; COLLECTION · STATIONERY; OFFICE · SPORTS; GAMES · DECORATIVE; BEAUTIFUL · CONTAINER; DRINKS · MARINE; WATER · RED · BATHROOM; HYGIENE · WAR; WEAPON · BLACK · DUST; GRAINY TEXTURE · SPHERICAL; ROUND · GREEN · WHITE · SKY; FLYING · FLOOR; PATTERN · LINES; GRATING PATTERN · MUSIC; SOUND · SKY; TALL · INSECTS; PESTS · N/A · FIRE; SMOKE · FOOT; FOOTWEAR · CHAIN; ROPE; STRAND · YELLOW; ORANGE · EYEWEAR; EYES; FACE · SPIKY; HAIRY · CYLINDRICAL; ELONGATED · STRINGY; FIBROUS · BABY; CHILDREN · MEDICAL; HEALTHCARE · ICE; COLD | 1. What is the focus of the paper regarding concept embedding in the odd-one-out task?
2. What are the strengths and weaknesses of the proposed method, particularly in using variational inference and Gaussian variational family?
3. Do you have any concerns about the selection of the variational family and its impact on the prior?
4. How does the proposed method differ from SPoSE, and what are the improvements shown in the experiments?
5. Can you provide more clarification on the training objective, pieces of the model, and the definition of the encoder?
6. How does the model handle the prediction of the odd-one-out task, and what is the purpose of the truncation of dimensions?
7. Are there any biases in the method, such as looking only at clumps of samples from high values in the dimension?
8. Would an ablation study be helpful to understand the contributions of the different steps in the dimensionality reduction process to the final representations? | Summary Of The Paper
Review | Summary Of The Paper
The paper introduces a variational inference method for concept embedding in the odd-one-out task. The objective is to learn representations that allow predicting the odd object from a triplet. A variational inference problem is set up to learn the representations through a Gaussian variational family with a mixture of two Gaussians as the prior (spike-and-slab). It is not clear how the selection of the variational family fits the prior. Simultaneously, there is a dimensionality reduction procedure to improve the representations. The experiments show improvement over the Sparse Positive object Similarity Embedding (SPoSE), and the method is also evaluated with different random initializations and shown to be stable.
Review
Strengths:
The paper addresses an interesting problem: odd-one-out classification.
The idea of using variational inference to understand the problem is straightforward and interesting.
Weaknesses:
In (8), it is not clear whether p(D_train | X) is the same normalized similarity used in SPoSE (2).
What is the impact of using a Gaussian variational family and having a mixture of Gaussians as a prior? My guess is that your variational distribution q will end up wide and will try to encompass the mixture, or if the spike is too dominant it may collapse to it. Why not use a mixture for the variational family as well?
What is the model you are training? What is the definition of your encoder that produces the embeddings X? Does it receive a single image or a triplet?
What is the training objective that you are using? And what are the pieces of your model? It is not clear whether you are training an encoder q that receives a single data point and minimizes a batch of triplets w.r.t. the average of the normalized similarities (8), or if you are also using a prediction head (or classifier) for the odd-one-out task as well (12) to regularize the model.
In other words, are you training your encoder with only the similarities from the triplets, or are you using the odd-one-out classification task as well?
In Section 2.3.4, the prediction of the odd-one-out seems to be the normalized similarity pair-wise. So, the reader will assume that the prediction is through doing the 3 pairs from the triplet and selecting the one with minimum similarity. However, in Section 3.3.1, it seems that the prediction occurs as a one-hot-vector prediction. Which one is it? The usage of your model for the prediction task must be clearly stated and defined.
Why is the truncation of the dimensions needed? One could still traverse the dimensions and see clusters of points through them.
Similar to the previous comment, the interpretability of the dimensions explores the maximum values to check for clusters in this neighborhood. However, wouldn't it be interesting to see the traversals through the dimension to see how the samples change or not?
Isn't it biased to look only at the clump of samples with high values in the dimension (assuming that is the weight)? One would expect to see similar clusters since the embeddings are close together for the maximum values. Similar to other parameters, why not do an ablation study over the truncation too?
I found it hard to follow the process of the dimensionality reduction. It seems to be only summarized, and I think its incorporation into the model impacts the final performance. I suggest improving the description and details of how this process works. If space is needed, instead of reproducing SPoSE (Section 2.2), consider removing it.
How can one know whether the improvement of the representations comes from the variational representation or from the different steps within the dimensionality reduction process? I would recommend an ablation study over the different steps of the dimensionality reduction to understand their impact on the final representations.
ICLR | Title
Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative
Abstract
In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks. However, widely used methods for leveraging auxiliary data like pre-training and its continued pre-training variant are end-task agnostic: they rarely, if ever, exploit knowledge of the target task. We study replacing end-task agnostic continued training of pre-trained language models with end-task aware training of said models. We argue that for sufficiently important end-tasks, the benefits of leveraging auxiliary data in a task-aware fashion can justify forgoing the traditional approach of obtaining generic, end-task agnostic representations as with (continued) pre-training. On three different low-resource NLP tasks from two domains, we demonstrate that multi-tasking the end-task and auxiliary objectives results in significantly better downstream task performance than the widely-used task-agnostic continued pre-training paradigm of Gururangan et al. (2020). We next introduce an online meta-learning algorithm that learns a set of multi-task weights to better balance among our multiple auxiliary objectives, achieving further improvements on end-task performance and data efficiency.
1 INTRODUCTION
The increasingly popular pre-training paradigm (Dai & Le, 2015; Devlin et al., 2018; Gururangan et al., 2020) involves first training a generalist model on copious amounts of easy-to-obtain data, e.g. raw text data in NLP, and then using this model to initialize training on a wide swath of downstream tasks. Generalist models like BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and GPT-3 (Brown et al., 2020) have a strong appeal; a few institutions with significant resources incur the cost of training these large models whilst the rest of the research community enjoys a significant performance improvement at minimal computational overhead. However, the advantages of initializing a downstream task from a generalist model are not guaranteed. Previous work has shown that the benefits of pre-training depend heavily on the degree of domain overlap between the end-task data and the massive, heterogenous data on
which the generalist model was trained (Beltagy et al., 2019; Gururangan et al., 2020).
Notably, Gururangan et al. (2020) have demonstrated the benefits of continued pre-training of generalist models using data that is similar to that of the end-task. Their approach is formalized into two classes: Domain Adaptive Pre-training (DAPT) and Task Adaptive Pretraining (TAPT) where further stages of pre-training of generalist models are conducted on domain- and task-specific data,
respectively. DAPT and TAPT exploit the fact that we often know the end-task beforehand, and so we can make specific choices about our pre-training regimen to improve end-task performance.
However, in both pre-training for generalist models and continued pre-training, the training procedure itself does not explicitly incorporate the end-task objective function. Because of this, practitioners have to be careful with their choice of auxiliary tasks, the order in which they are trained on, and the early-stopping criteria for each pre-training stage so as to actually achieve good downstream end-task performance (Gururangan et al., 2020; Dery et al., 2021). In the absence of principled criteria to make these difficult design choices, it is common to instead resort to the computationally demanding heuristic of pre-training on as much data as possible for as long as possible.
In this paper, we raise the following question: “In settings where we have a particular end-task in mind, should we be pre-training at all?”. We define pre-training as any form of task-agnostic training that a model undergoes before it is finally fine-tuned on the end-task of interest. As a first milestone in addressing the larger question posed above, we explore the ubiquitous continued pretraining setting (Gururangan et al., 2020; Aghajanyan et al., 2021). Specifically, our paper questions the wisdom of having disjoint further pre-training then fine-tuning steps on a generalist model. In response, we advocate for an alternative approach in which we directly introduce the end-task objective of interest into the learning process. This results in a suite of end-task aware methods called TARTAN (end-Task AwaRe TrAiniNg). Our formulations incorporate both unsupervised auxiliary objectives traditionally used in NLP pre-training (such as masked language modeling as in Devlin et al. (2018)) and the end-task objective, followed by an optional fine-tuning step on the end-task. We motivate TARTAN experimentally in the continued pre-training setting and based on this, we make the following contributions to the literature on leveraging auxiliary tasks and data:
• In lieu of standard end-task agnostic continued pre-training, we suggest introducing the end-task objective into the training process via multi-task learning (Caruana, 1997; Ruder, 2017). We call this procedure Multi-Tasking end-Task AwaRe TrAiniNg (MT-TARTAN) (Section 3.1). MTTARTAN is a simple yet surprisingly effective alternative to task-agnostic pre-training. In Section 5, we demonstrate that MT-TARTAN significantly improves performance and data efficiency over Gururangan et al. (2020)’s results. It also obviates the need for fickle hyper-parameter tuning through direct optimization of validation performance. • To allow more fine-grained control of the end-task over the auxiliary tasks, in Section 3.2, we present an online meta-learning algorithm that learns adaptive multi-task weights with the aim of improving final end-task performance. Our META-learning end-Task AwaRe TrAiniNg (METATARTAN) allows us to robustly modulate between multiple objectives and further improves performance over MT-TARTAN . • A naive implementation of META-TARTAN based on first-order meta-learning analysis results in a sub-optimal algorithm that ignores all tasks except the end-task. We trace this problem to the use of a single model training head for computing both the end-task training loss and meta-objective (end-task validation loss). To guard against this pathological solution, we introduce a separate model head for computing the meta-objective. In Section 3.3, we justify this simple-to-implement fix and validate its practical efficacy in Section 5.
Our results suggest that TARTAN may be an attractive alternative to the continued pre-training paradigm, and further research into the place of pre-training in end-task aware settings is warranted.
2 FORMALIZING PRE-TRAINING AND CONTINUED PRE-TRAINING
Consider a dataset D = {(xi, yi)i∈[m]} consisting of m labelled examples. We define a task as an objective function and dataset pair: T = {L(·), D}. Mθ is a model parameterized by θ. The objective function L(yi,Mθ(xi)) evaluates how well a model prediction Mθ(xi) fits the true label yi, such as cross-entropy loss in the case of classification. Note that the task dataset, D, is typically decomposed into the sets (Dtrain, Dval, Dtest). Dtrain is the set of examples used for model training whilst Dtest is used for final task evaluation. The validation set, Dval, is typically used for model selection but it is also frequently used in meta-learning to define the meta-objective – Lval. Given a specific end-task T ∗, our aim is to improve performance on T ∗ (as measured by the model loss on DtestT∗ ) by leveraging auxiliary tasks Taux = {T1, . . . , Tn}. Note that we do not particularly
care about the performance of any of the tasks in Taux. We are willing to sacrifice performance on Taux if it improves performance on T ∗.
From the perspective of model architecture, there are several ways to leverage Taux. We focus on the simple but widely-used parameter sharing setting. Here, all tasks share a model body θbody but each task Ti has its own head φi for prediction. We denote the head belonging to T ∗ as φ′. Thus θ = [ θbody; ( φ1, . . . , φn, φ′ )] and θbody is reusable across new tasks.
2.1 PRE-TRAINING
Pre-training is when a model is first trained on Taux before performing a final fine-tuning phase on T ∗. The motivation behind pre-training is that learning Taux first hopefully captures relevant information that can be utilized during training of T ∗. This desire has led to the proliferation of generalist pre-trained models like BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019) and GPT-3 (Brown et al., 2020) that have been trained on copious amounts of data. Generalist models have been widely successful at improving downstream task performance when used as initialization.
We can formalize the pre-training procedure as follows:
\theta_0 = \arg\min_{\theta} \left( \sum_{T_i \in T_{\text{aux}}} L_{T_i}(\theta) \right) \quad (1)
In Equation 1, we seek a point θ0 that achieves minimal loss on the tasks in Taux. We hope that θ0 will be a good starting point for gradient descent on T ∗. Let g(θ0) represent the set of end-points of stochastic gradient descent on an initialization, θ0. Stochastic gradient descent from the same initialization can produce different end-points due to differences in hyper-parameters like learning rate, batch size and order, as well as regularization strength. We can write the fine-tuning phase as:
\theta^* = \arg\min_{\theta \in g(\theta_0)} L_{T^*}(\theta) \quad (2)
Note that pre-training is end-task agnostic: the pre-training Equation 1 occurs entirely before training on the end-task Equation 2, and does not explicitly incorporate the end-task objective, T ∗. Since there is no awareness of the end-task during pre-training it is important to carefully choose Taux so that pre-training actually results in improved performance on T ∗ (Wang et al., 2018a). For text data, past work has found left-to-right language modeling (Peters et al., 2017) and masked language modeling (MLM) (Devlin et al., 2018) to be good choices to include in Taux.
2.2 CONTINUED PRE-TRAINING
Recent work (Beltagy et al., 2019; Gururangan et al., 2020; Lee et al., 2020) showed that downstream performance on T ∗ can be improved by further adapting generalist models via continued pre-training on a more relevant set of auxiliary tasks. This is equivalent to sequentially performing multiple steps of Equation 1, with different Taux, before finally performing Equation 2 on T ∗. Domain and Task Adaptive Pre-training Gururangan et al. (2020) present Domain Adaptive PreTraining (DAPT) and Task Adaptive Pre-Training (TAPT) as methods for continued pre-training. During DAPT, a generalist model is further pre-trained on an unsupervised objective with large amounts of data from the same domain as the end-task. TAPT also pre-trains with the same unsupervised objective as DAPT, but on the actual dataset of the end-task. Gururangan et al. (2020) find that performance can be further improved by chaining objectives, DAPT first, followed by TAPT.
Though TAPT and DAPT do not directly incorporate the end-task objective during training, it still indirectly informs both the choice of pre-training data and the order in which the pre-training tasks are trained on. Below, we explore stronger versions of this influence.
3 END-TASK AWARE TRAINING (TARTAN)
In this section, we argue for the end-task to be added directly into the training process to create explicit interactions between T ∗ and Taux.
3.1 END-TASK AWARE TRAINING VIA MULTI-TASKING (MT-TARTAN)
We propose to directly incorporate knowledge of the end-task by multi-tasking T ∗ together with Taux, before optionally fine-tuning on T ∗ exclusively. To this end, we introduce a set of task weights w = (w∗, w1, · · · , w|Taux|) satisfying w∗ + ∑ i wi = 1, to modulate between the different losses. Our new formulation is:
\theta_0 = \arg\min_{\theta} \; L_{\text{total}}(\theta, \mathbf{w}) = \arg\min_{\theta} \left( w^* L_{T^*}(\theta) + \sum_{i} w_i L_{T_i}(\theta) \right) \quad (3)
Here, Equation 3 replaces Equation 1 and can be followed by the optional fine-tuning stage of Equation 2. Note that this formulation fixes the tasks weights w throughout the training process. We call this formulation End-task Aware Training via Multi-tasking (MT-TARTAN) because we introduce the end-task directly into the training procedure, and do so by multi-tasking it with Taux.
MT-TARTAN allows us to prioritize performance on T ∗ in several ways. First, we can weight the end-task higher than all the other auxiliary tasks. Also, during training, we can monitor LT∗ on the end-task validation set and early stop when it plateaus; even if the auxiliary tasks have not yet converged. This is not possible during standard pre-training because we do not train T ∗ and so it performs at random before we actually start fine-tuning. Early stopping on T ∗ can represent significant computational savings over end-task agnostic pre-training when the savings in data-efficiency supercede the extra overhead of end-task aware gradient descent steps.
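As a concrete illustration, the following PyTorch-style sketch shows one way to implement the fixed-weight multi-task objective of Equation 3 with a shared body and per-task heads. The module and variable names (e.g. `body`, `heads`) are our own assumptions rather than the released TARTAN code.

```python
import torch

def mt_tartan_step(body, heads, batches, weights, optimizer):
    """One MT-TARTAN update: weighted sum of end-task and auxiliary losses (Eq. 3).

    body    : shared encoder (e.g. a RoBERTa model body).
    heads   : dict mapping task name -> (head module, loss function).
    batches : dict mapping task name -> (inputs, labels) mini-batch.
    weights : dict mapping task name -> fixed scalar weight (summing to 1).
    """
    total_loss = 0.0
    for task, (inputs, labels) in batches.items():
        head, loss_fn = heads[task]
        features = body(inputs)                # shared representation
        total_loss = total_loss + weights[task] * loss_fn(head(features), labels)

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return float(total_loss)
```

Early stopping on the end-task validation loss can then be applied around this loop, as described above.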
3.2 END-TASK AWARE TRAINING VIA META-LEARNING (META-TARTAN)
MT-TARTAN, DAPT and TAPT all share the same drawback: they implicitly assume that the auxiliary tasks have static importance to the end-task over the lifetime of its training, either by being end-task agnostic (DAPT and TAPT) or by having static task weights (MT-TARTAN). With MT-TARTAN, an additional drawback noted by Wang et al. (2019); Yu et al. (2020) is that multi-tasking can negatively impact task performance compared to isolated training. These shortcomings motivate the formulation of an adaptive algorithm that can mitigate the negative influence of some tasks whilst responding to the changing relevance of auxiliary tasks over the lifetime of end-task training.
As they stand, the pre-training equation pair (Equations 1, 2) and the MT-TARTAN pair (Equations 2, 3) are decoupled. The inner-level variables of the pre-training phase do not depend on the outerlevel variables of the fine-tuning phase. Thus the equation pairs are typically solved sequentially. We propose to tightly couple Equations 2 and 3 by formulating jointly learning w and θ0 as a bi-level optimization problem. A bi-level formulation allows us to leverage meta-learning (Schmidhuber, 1995) techniques to learn adaptive task weights which capture variable auxiliary task importance whilst mitigating the contribution of harmful tasks. We propose a meta-learning algorithm in the mold of Model Agnostic Meta-Learning (MAML) (Finn et al., 2017) to learn task weights. As a bi-level problem, this can be formulated as :
\theta^*, \mathbf{w}^* = \arg\min_{\theta \in g(\theta_0),\, \mathbf{w}} L_{T^*}(\theta) \quad (4)
where
\theta_0 = \arg\min_{\theta} \; L_{\text{total}}(\theta, \mathbf{w}) = \arg\min_{\theta} \left( w^* L_{T^*}(\theta) + \sum_{T_i \in T_{\text{aux}}} w_i L_{T_i}(\theta) \right) \quad (5)
We want to jointly learn w, with θ0, such that taking a gradient descent step modulated by w leads to improvement in end-task generalization. We use performance on the end-task validation set (DvalT∗ ) as a meta-objective to train w. Performance on DvalT∗ serves as a stand-in for end-task generalization performance whilst also naturally capturing the asymmetrical importance of T ∗.
Our joint descent algorithm proceeds as follows. At each timestep t, we hold the task weights fixed and update θt based on \nabla_\theta L_{\text{total}}(\theta_t, \mathbf{w}). We then proceed to update w via gradient descent on the end-task validation loss at θt+1. For this, we derive an approximation for \nabla_{\mathbf{w}} L^{\text{val}}_{T^*}(\theta_{t+1}, \mathbf{w}) below:
L^{\text{val}}_{T^*}(\theta_{t+1}(\mathbf{w})) = L^{\text{val}}_{T^*}\!\left( \theta_t - \beta \Big( w^* \nabla L_{T^*} + \sum_i w_i \nabla L_{T_i} \Big) \right) \approx L^{\text{val}}_{T^*}(\theta_t) - \beta \Big( w^* \nabla L_{T^*} + \sum_i w_i \nabla L_{T_i} \Big)^{\top} \nabla L^{\text{val}}_{T^*}(\theta_t)
We can take the gradient of the above first-order approximation w.r.t an individual weight wi. This tells us how to update wi to improve the meta-objective.
\frac{\partial L^{\text{val}}_{T^*}(\theta_{t+1}(\mathbf{w}))}{\partial w_i} \approx -\beta \big( \nabla L_{T_i} \big)^{\top} \big( \nabla L^{\text{val}}_{T^*}(\theta_t) \big) = -\beta \big( \nabla L_{T_i} \big)^{\top} \big( \nabla L^{\text{val}}_{T^*}([\theta_{\text{body}}, \phi']_t) \big) \quad (6)
In Equation 6, we explicitly specify [\theta_{\text{body}}, \phi']_t because computing losses on T ∗ depends on only these parameters. L_{T_i} depends solely on [\theta_{\text{body}}, \phi^i]_t but we leave this out to avoid notation clutter.
Our analysis above is similar to that of Lin et al. (2019) with one key difference: we learn a weighting for the main task w∗ too. This ability to directly modulate T ∗ allows us to capture the fact that at certain stages in training, auxiliary tasks may have greater impact on end-task generalization than the end-task’s own training data. This choice also allows us to control for over-fitting and the influence of bad (mislabelled or noisy) training data.
3.3 INTRODUCING A SEPARATE CLASSIFICATION HEAD FOR META-LEARNING
Observe that from Equation 6, updates for weights w ≠ w∗ involve gradients computed from different model heads φi and φ′, whilst for w∗ we are taking the dot product of gradients from the same end-task head φ′. As we will show empirically in Section 5.4, computing weight updates this way creates a strong bias towards the primary task, causing w∗ to rail towards 1 whilst the other weights dampen to 0, which may be sub-optimal in the long run.
Intuitively, this short-horizon (greedy) (Wu et al., 2018) behavior makes sense: the quickest way to make short-term progress (improve LvalT∗ (θt+1)) is to descend solely on T ∗. More formally, the greedy approach arises because we derive ∇wiLvalT∗ (θt+1) in Equation 6 as a proxy for the gradient at θ∗, the outer-loop end-point in Equation 4. Variations of this substitution are common in the meta-learning literature (Finn et al., 2017; Liu et al., 2018; Nichol et al., 2018) because it is computationally infeasible to train a model to convergence every time we wish to compute∇wiLvalT∗ (θ∗).
To remedy the greedy solution, instead of estimating ∇θLT∗ and ∇θLvalT∗ from the same classification head (Equation 6), we introduce a special head φ∗ for computing the meta-objective. Specifically, instead of trying to compute θ∗, we approximate it by fixing the body of the network θbody and training the randomly initialized head φ∗ to convergence on a subset of the end-task training data. We do this every time we wish to estimate∇wiLvalT∗ (θ∗). Introducing φ∗ eliminates the strong positive bias on w∗ and enables us to compute a better proxy for the meta-gradient at θ∗:
\frac{\partial L^{\text{val}}_{T^*}(\theta^*(\mathbf{w}))}{\partial w_i} \approx \big( \nabla_\theta L_{T_i} \big)^{\top} \big( \nabla_\theta L^{\text{val}}_{T^*}([\theta_{\text{body}}; \phi^*]_t) \big) \quad (7)
Equation 7 represents a simple-to-implement alternative to Equation 6. We provide a more detailed justification for Equation 7 in Appendix A.1. In Section 5.4, we empirically validate that the transition from Equation 6 to 7 improves performance whilst mitigating pathological solutions. Our approach of creating φ∗ for approximating the meta-objective (down-stream validation performance) is inspired by Metz et al. (2018), who use a similar technique to construct a meta-objective for evaluating the quality of unsupervised representations.
Please see Algorithm 1 in Appendix A.3 for details about META-TARTAN.
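To make the weight-update rule concrete, here is a hedged PyTorch sketch of how the per-task weights could be adjusted from the dot products in Equation 7, i.e. the alignment between each task's training gradient and the validation gradient computed through the separate meta head φ∗. The softmax parameterization of the weights (which keeps them summing to one) and all function names are our own assumptions, not the exact procedure of Algorithm 1.

```python
import torch

def update_task_weights(task_grads, val_grad, weight_logits, lr_w=0.01):
    """Adjust task weights using the meta-gradient dot products of Eq. 7.

    task_grads    : dict task -> flattened gradient of that task's training loss
                    w.r.t. the shared body parameters.
    val_grad      : flattened gradient of the end-task validation loss, computed
                    through the separate meta head (phi*), w.r.t. the body.
    weight_logits : dict task -> scalar tensor logit; softmax of these gives w.
    """
    names = list(weight_logits.keys())
    logits = torch.stack([weight_logits[n] for n in names])
    weights = torch.softmax(logits, dim=0)

    # Per Eq. 6/7, dL_val/dw_i is (up to a positive constant) the negative
    # alignment between the task gradient and the validation gradient, so
    # tasks whose gradients align with the end-task get upweighted.
    meta_grads = torch.stack([-(task_grads[n] @ val_grad) for n in names])

    # Chain rule through the softmax, then a plain gradient step on the logits.
    jac = torch.diag(weights) - torch.outer(weights, weights)
    logit_grads = jac @ meta_grads
    for n, g in zip(names, logit_grads):
        weight_logits[n] = weight_logits[n] - lr_w * g
    return {n: float(w) for n, w in zip(names, weights)}
```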
4 EXPERIMENTAL SETUP
Setting1 Though our algorithms and methodology can be directly applied to both continued pretraining (Section 2.2) and pre-training from scratch (Section 2.1) of generalist models, we focus on the former scenario. This is because the continued pre-training setting is more common amongst everyday practitioners as it is less computationally demanding. It thus lends itself more easily to exploration under a realistic computational budget. In Appendix A.4, we show that end-task aware training from scratch is viable by studying a simple computer vision setting. Concurrent work by Yao et al. (2021) shows that from-scratch end-task aware training for NLP problems is viable.
1Code will be released at https://github.com/ldery/TARTAN
In keeping with previous work (Devlin et al., 2018; Gururangan et al., 2020), we focus on Taux as a set of MLM tasks on varied datasets. In the case of DAPT and our end-task aware variants of it, Taux is an MLM task with data from the domain of the end-task. For TAPT, Taux is an MLM task with data from the end-task itself. DAPT, TAPT and DAPT+TAPT (chained pre-training with DAPT followed by TAPT) will serve as our baseline continued pre-training approaches. We will compare these baselines to their end-task aware variants that use MT-TARTAN and META-TARTAN.
Datasets Our experiments focus on two domains: computer science (CS) papers and biomedical (BIOMED) papers. We follow Gururangan et al. (2020) and build our CS and BIOMED domain data from the S2ORC dataset (Lo et al., 2019). We extract 1.49M full text articles to construct our CS corpus and 2.71M for our BIOMED corpus. Under both domains, our end-tasks are low-resource classification tasks. Using low-resource tasks allows us to explore a setting where pre-training can have a significant impact. Under the CS domain, we consider two tasks: ACL-ARC (Jurgens et al., 2018) and SCIERC (Luan et al., 2018). ACL-ARC is a 6-way citation intent classification task with 1688 labelled training examples. For SCIERC, the task is to classify the relations between entities in scientific articles. This task has 3219 labelled examples as training data. We choose CHEMPROT (Kringelum et al., 2016) as the classification task from the BIOMED domain. This task has 4169 labelled training examples and the goal is to classify chemical-protein interactions. More details of these datasets can be found in Table 2 of Gururangan et al. (2020). Gururangan et al. (2020) evaluate against all 3 tasks and their available code served as a basis on which we built MT-TARTAN and META-TARTAN.
Model Details We use a pre-trained RoBERTabase (Liu et al., 2019) as the shared model base and implement each task as a separate multi-layer perceptron (MLP) head on top of this pre-trained base. As in Devlin et al. (2018), we pass the [CLS] token embedding from RoBERTabase to the MLP for classification.
Training Details For DAPT and TAPT, we download the available pre-trained model bases provided by Gururangan et al. (2020). To train their corresponding classification heads, we follow the experimental setup described in Appendix B of Gururangan et al. (2020).
Performing end-task aware training introduces a few extra hyper-parameters. We fix the other hyperparameters to those used in Gururangan et al. (2020). MT-TARTAN and META-TARTAN introduce joint training of a classification head for the end-task T ∗. We experiment with batch sizes of 128, 256 and 512 for training this head. We try out learning rates in the set {10−3, 10−4, 10−5} and dropout rates of {0.1, 0.3}. For META-TARTAN, since we are now learning the task weights w, we test out task weight learning rates in {10−1, 5 × 10−2, 3 × 10−2, 10−2}. Note that for all MT-TARTAN experiments we use equalized task weights 1/(|Taux| + 1). A small grid-search over a handful of weight configurations did not yield significant improvement over the uniform task weighting. We use the Adam optimizer (Kingma & Ba, 2014) for all experiments.
As mentioned in Section 3.3, we train a separate meta-classification head, φ∗, to estimate the validation meta-gradients. To estimate φ∗, we use batch sizes of {16, 32} samples from T ∗’s train set. We regularize the meta-head with l2 weight decay and set the decay constant to 0.1. We use a learning rate of 10−3 to learn the meta-head. We stop training φ∗ after 10 gradient descent steps.
5 RESULTS AND DISCUSSION
In this section, we will discuss the results of comparing our models against DAPT and TAPT baselines.2 Broadly, we demonstrate the effectiveness of end-task awareness as improving both performance and data-efficiency.
2Our results are slightly different from those presented in Table 5 of Gururangan et al. (2020) in terms of absolute values but the trends observed there still hold here. We attribute these differences to (1) minor implementation differences, and (2) averaging performance over ten seeds instead of five as used in the original paper in order to more strongly establish statistical significance. We observe slightly lower performance on ACL-ARC and SCIERC tasks due to these changes and higher performance on CHEMPROT.
[Table 1 — columns: Domain, Task, RoBERTa, TAPT, MT-TARTAN, META-TARTAN]
5.1 END-TASK AWARENESS IMPROVES OVER TASK-AGNOSTIC PRE-TRAINING
Table 1 compares TAPT to its end-task aware variants. As in Gururangan et al. (2020), we observe that performing task adaptive pre-training improves upon just fine-tuning RoBERTa. However, note that introducing the end-task by multi-tasking with the TAPT MLM objective leads to a significant improvement in performance. This improvement is consistent across the 3 tasks we evaluate against. We find that both MT-TARTAN and META-TARTAN achieve similar results in this setting.
5.2 END-TASK AWARENESS IMPROVES DATA-EFFICIENCY
Gururangan et al. (2020) train DAPT on large amounts of in-domain data to achieve results competitive with TAPT. They use 7.55 billion tokens for the BIOMED domain and 8.10 billion for the CS domain. This is on average over 10^4× the size of the training data of our end-tasks of interest. The large amount of data required to train a competitive DAPT model represents a significant computational burden to the everyday practitioner. This begets the question: are such large amounts of auxiliary data necessary for achieving good downstream performance? To answer this, we train DAPT and its TARTAN version on variable amounts of data for both SCIERC and ACL-ARC tasks.
TARTAN is more data-efficient than DAPT In Figure 2, we focus on training on a small fraction of available domain data, n = {10^0, 10^1} × |Train|, for the DAPT auxiliary task. Full domain data is n′ ≈ 10^4 × |Train|. This relatively low auxiliary data regime represents a realistic setting that is akin to those encountered by everyday practitioners who are likely to be computationally constrained. As can be seen in Figure 2, on the ACL-ARC task, META-TARTAN matches the performance of DAPT when the sizes of the domain data and end-task data are of the same order (10^0). At this data size, META-TARTAN supersedes DAPT on the SCIERC task. When trained on 10× more auxiliary data, META-TARTAN supersedes DAPT in performance on both tasks. On the ACL-ARC task, META-TARTAN achieves 71.19 ± 4.88, which is close to DAPT’s performance of 72.49 ± 3.28 obtained using more than 10^3× auxiliary data. These results indicate that end-task awareness can improve data-efficiency and, in this case, the improvements are on the order of 1000×.
[Table 2 — columns: Domain, Task, DAPT, DAPT+TAPT, MT-TARTAN, META-TARTAN]
TARTAN is more data-efficient than DAPT+TAPT Table 2 compares DAPT and DAPT+TAPT (DAPT followed by TAPT) to the *-TARTAN variants, which multi-task DAPT, TAPT and the end-task. MT-TARTAN and META-TARTAN significantly outperform DAPT and DAPT+TAPT on 2 of the tasks whilst giving higher average performance on the ACL-ARC task. We thus conclude that end-task awareness allows us to get a greater performance boost out of the same amount of data.
We explore the data efficiency of TARTAN methods even further by comparing the relatively data-poor versions of MT-TARTAN and META-TARTAN above (n = 10 × |Train|) to the DAPT and DAPT+TAPT variants trained on all the available domain data (n′ ≈ 10^4 × |Train|). We can see from Table 3 that for the CS domain, our end-task aware variants come close to (ACL-ARC) and even supersede (SCIERC) the end-task agnostic variants though trained with ≈ 1000× less data. For the BIOMED domain (CHEMPROT task), increasing the amount of data drastically improves the performance of end-task agnostic variants compared to MT-TARTAN and META-TARTAN trained on much less data.
Zhang et al. (2020) show that different tasks exhibit sigmoid-like curves in terms of how much pretraining data is required to achieve good results before performance levels off. We contextualize Tables 2 and 3 within said work and posit that the CHEMPROT task intrinsically requires much more data (compared to our other tasks) before performance begins to improve appreciably.
5.3 META-TARTAN MORE EFFECTIVELY UTILIZES OUT-OF-DISTRIBUTION AUXILIARY DATA OVER MT-TARTAN
Table 4: MT-TARTAN vs. META-TARTAN when only domain data is used for the auxiliary task (mean ± standard deviation).

TARTAN    ACL-ARC        SCIERC         CHEMPROT
MT        69.27 ± 0.96   81.53 ± 0.99   80.26 ± 3.79
META      71.19 ± 4.88   82.08 ± 1.19   82.31 ± 0.75
Unlike TAPT, which draws its auxiliary data from the end-task itself, DAPT relies on heterogeneous domain data whose impact on the end-task performance is less clear. Notice from Table 4 that when required to rely solely on domain data for auxiliary tasking, META-TARTAN improves performance over MT-TARTAN. We attribute META-TARTAN’s improvement over MT-TARTAN to its ability to more flexibly adapt to incoming data of variable utility to the end-task.
5.4 TASK WEIGHTING STRATEGIES DISCOVERED BY META-LEARNING
To illustrate the importance of the separate classification head φ∗ for computing the meta-signal for the task weights (described in Section 3.3), we run META-TARTAN experiments with ACL-ARC as the end-task and DAPT as the auxiliary task. We compare using either a separate (φ∗) or the same (φ′) classification head for calculating the meta-gradient. Figure 3 plots the task weightings learned in each setting during training. We can clearly see that using a separate head counteracts the pathological solution of down-weighting all tasks that are not T ∗ and as a result, improves performance: a delta of 1.7 F1 points in this case. The strategy discovered by META-TARTAN presents an interesting contrast to classical pre-training: whilst the initial phase of classical pre-training involves
solely the auxiliary task, META-TARTAN up-weights the auxiliary task early in training but does not fully zero out the end-task. Later in training, the weights level off instead of the end-task weight railing to 1 as it would in classical pre-training.
Next, we plot a similar graph for using both DAPT and TAPT across our three tasks in Figure 4. From the figure, it is apparent that META-TARTAN discovers similar task-weighting strategies across different end-tasks. This suggests that the MLM objective and META-TARTAN’s strategy for learning task weights are generic enough to induce similar behaviours across tasks. In general, DAPT is significantly up-weighted compared to the end-task and TAPT. Note that the TAPT + ACL-ARC task weights (Figure 4) follow approximately the same trajectory as the ACL-ARC task weight in Figure 3. It seems important to assign high weight to the task data (Figure 3), but not all of that weight needs to go to the actual task loss (Figure 4). We hypothesize that the diversity in the domain data counteracts overfitting to the end-task data and results in DAPT being up-weighted.
6 RELATED WORK
Multi-task learning can be traced back to seminal work by Caruana (1995), Caruana (1997), and has since been the subject of a flourishing literature, recent surveys of which can be found in Ruder (2017) or Zhang & Yang (2021). In NLP, while initial work from Collobert & Weston (2008) already showed the benefits of multi-task learning, it has only recently become a central topic in the field, with the advent of multi-task benchmarks (Wang et al., 2018b; McCann et al., 2018).
Pre-training is where a machine learning model is first trained on a generic, data-rich task before being fine-tuned on an end-task. In NLP this practice dates back to the use of pre-trained word embeddings (Turian et al., 2010; Mikolov et al., 2013) and later pre-trained encoders (Kiros et al., 2015; Dai & Le, 2015). Peters et al. (2018) and Howard & Ruder (2018) heralded a renaissance of pre-training before BERT (Devlin et al., 2018) and its many offshoots (Liu et al., 2019; Yang et al., 2019; Lewis et al., 2019) cemented it as the de facto standard for modern NLP.
Meta-learning dates back to early work from Schmidhuber (1995); Thrun (1998). More relevant to our work is gradient-based meta-learning for solving bi-level optimization problems, first popularized by Finn et al. (2017) and followup work (Nichol et al., 2018; Rajeswaran et al., 2019) for few-shot learning. This method has transferred to a variety of applications such as architecture search (Liu et al., 2018) and model poisoning (Kurita et al., 2020).
7 CONCLUSION
We have advocated for a paradigm shift in the way we approach pre-training. We have motivated making pre-training more end-task aware when the end-task is known in advance. Our work introduced two novel end-task aware training algorithms: End-task Aware Training via Multitasking (MT-TARTAN) and End-task Aware Training via Meta-learning (META-TARTAN). In Section 5, we demonstrated the ability of our proposed algorithms to improve performance and data-efficiency over their end-task agnostic counterparts.
This work suggests several promising directions for future work. Instead of learning coarse task level weights, can further performance improvements be achieved via finer-grained example level weighting as in Wang et al. (2020)? Can meta-learning algorithms like META-TARTAN enable more effective utilization of previously discarded (Aroca-Ouellette & Rudzicz, 2020) pre-training auxiliary tasks like Next Sentence Prediction (NSP) (Devlin et al., 2018)? We hope this work spurs conversation around these questions and many more.
8 ACKNOWLEDGEMENTS
This work was supported in part by DSO National Laboratories, an ENS-CFM Data Science Chair, DARPA FA875017C0141, the National Science Foundation grants IIS1705121, IIS1838017, IIS2046613 and IIS-2112471, an Amazon Web Services Award, a Facebook Faculty Research Award, funding from Booz Allen Hamilton Inc., and a Block Center Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of these funding agencies.
9 ETHICS STATEMENT
Our work introduces new algorithms but leverages pre-existing datasets and models. Overall, this work inherits some of the risks of the original work upon which it is built. Algorithms for continued training such as TAPT and DAPT necessitate per-task training of unsupervised objectives, which results in corresponding green-house emissions due to energy consumption (Strubell et al., 2019). However, as shown in Sections 3 and 5, our new compute-efficient algorithms greatly increase the data efficiency of these algorithms, reducing these harms as well as the various harms associated with labor for data-collection (Jo & Gebru, 2020). Also, since our work is set in the context of pre-existing datasets and models (Section 4), we recognize that any ethical issues that have been revealed in these (such as bias (Bender et al., 2021) or privacy leakage (Carlini et al., 2021)) may also propagate to models trained using our work, and mitigation strategies such as Schick et al. (2021); Liang et al. (2021) may be necessary. Finally, there is a potential risk in META-TARTAN that leveraging a validation set for defining the meta-objective could amplify bias that exists in this data split, although this is done indirectly through task weighting and hence we believe that this risk is small.
10 REPRODUCIBILITY STATEMENT
We pledge to release the source-code for this project to improve the ease of reproducibility of our results by the NLP and machine learning communities. In Section 4, we have specified details about datasets, training regimes and models to allow anyone who wishes to reproduce our results without our original source code to do so. Our discussion of the algorithmic and evaluation details can be found in Appendices A.1, A.3 and A.2. As we noted in Section 4, we build off of Gururangan et al. (2020)’s implementations, which can be found at https://github.com/allenai/dont-stop-pretraining.
A APPENDIX
A.1 JUSTIFYING THE INTRODUCTION OF A META-HEAD
Proof. To arrive at Equation 7, we start with the closed-form expression for ∇wiLvalT∗(θ∗) and then introduce approximations in order to produce Equation 7. First, note that:
$$\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} = \Big(\nabla_{\theta} L^{val}_{T^*}(\theta^*(w))\Big)^{T}\Big(\nabla_{w_i}\theta^*(w)\Big) \qquad \text{[Chain rule]} \qquad (8)$$
To get ∇wiθ∗(w) we invoke the Cauchy Implicit Function Theorem (IFT) as with Lorraine et al. (2020); Navon et al. (2020); Liao et al. (2018):
$$\begin{aligned}
\nabla_{w_i}\theta^*(w) &= \Big[\nabla^2_{\theta} L_{total}(\theta^*(w))\Big]^{-1}\Big[\nabla_{w_i}\nabla_{\theta} L_{total}(\theta^*(w))\Big] \qquad \text{[IFT]}\\
&= \Big[\nabla^2_{\theta} L_{total}(\theta^*(w))\Big]^{-1}\Big[\nabla_{w_i}\nabla_{\theta}\Big(w^* L_{T^*}(\theta^*(w)) + \sum_{T_i \in T_{aux}} w_i L_{T_i}(\theta^*(w))\Big)\Big]\\
&= \Big[\nabla^2_{\theta} L_{total}(\theta^*(w))\Big]^{-1}\Big[\nabla_{\theta} L_{T_i}(\theta^*(w))\Big] \qquad \text{[Only terms with } w_i \text{ survive]}
\end{aligned}$$
Bringing it all together, we get :
$$\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} = \Big(\nabla_{\theta} L^{val}_{T^*}(\theta^*(w))\Big)^{T}\Big(\Big[\nabla^2_{\theta} L_{total}(\theta^*(w))\Big]^{-1}\Big[\nabla_{\theta} L_{T_i}(\theta^*(w))\Big]\Big) \qquad (9)$$
Computing ∇wiLvalT∗ (θ∗) from Equation 9 is computationally unwieldy since we would not only have to optimize θ to convergence for every step of wi but we would also have to invert the Hessian of a typically large model. Our middle ground between Equations 9 and 6 (Equation 7) makes use of the following approximations:
• We approximate the inverse Hessian with the identity. This approximation is not new; we follow previous work like Lorraine et al. (2020) (Table 3), who explore this approximation because of its computational efficiency (a small numeric illustration of this truncation follows Equation 7 below):
$$\Big[\nabla^2_{\theta} L_{total}(\theta^*(w))\Big]^{-1} = \lim_{i\to\infty}\sum_{j=0}^{i}\Big(I - \nabla^2_{\theta} L_{total}(\theta^*(w))\Big)^{j} \approx I$$
We are assuming that the contribution of the terms with j > 0 is negligible.
• Instead of training the whole network to convergence, at each time-step, we fix the body of the network and train a special head φ∗ to convergence on a small batch of end-task training data. We then use [θbody;φ∗] as a proxy for θ∗. This is a computationally feasible workaround to training all of θ to convergence to get a single step gradient estimate. Especially in the continued pre-training setting where a pre-trained generalist model like BERT is used as θbody, this approximation is reasonable. To our knowledge, we are the first to suggest this approximation.
$$\nabla_{\theta} L^{val}_{T^*}(\theta^*) \rightarrow \nabla_{\theta} L^{val}_{T^*}([\theta_{body};\phi^*])$$
• Above, we have approximated θ∗ = [θbody; φ∗]. Since φ∗ is only used to evaluate end-task (T ∗) validation data, it means θ remains unchanged with respect to the training data for task Ti. Thus
$$\nabla_{\theta} L_{T_i}([\theta_{body};(\phi^*,\ldots,\phi^i)]) = \nabla_{\theta} L_{T_i}([\theta_{body};\phi^i]) = \nabla_{\theta} L_{T_i}(\theta)$$
Bringing it all together, we get Equation 7, repeated here:
$$\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} \approx \Big(\nabla_{\theta} L_{T_i}\Big)^{T}\Big(\nabla_{\theta} L^{val}_{T^*}([\theta_{body};\phi^*]_t)\Big)$$
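The small numeric check below is illustrative only (it is not from the paper): it verifies that, for a matrix whose eigenvalues lie in (0, 2), the Neumann series above converges to the true inverse, and shows the gap incurred by keeping only the j = 0 (identity) term.

```python
# Illustrative check of the truncated Neumann series used to justify the identity approximation.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = A @ A.T / 30 + 0.5 * np.eye(4)         # SPD stand-in for the Hessian, eigenvalues inside (0, 2)

partial, term = np.zeros((4, 4)), np.eye(4)
for j in range(200):
    partial += term                         # running sum of (I - H)^j
    term = term @ (np.eye(4) - H)

print(np.allclose(partial, np.linalg.inv(H), atol=1e-6))   # True: the series converges to H^{-1}
print(np.linalg.norm(np.eye(4) - np.linalg.inv(H)))        # error incurred by approximating H^{-1} with I
```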
A.2 CALCULATING P-VALUES FROM PERMUTATION TEST
We used the permutation test (Good, 2005; Dror et al., 2018) to test for statistical significance. For each test, we generate 10000 permutations to calculate the significance level; this is sufficient to converge to a stable p-value without being a computational burden (a minimal sketch of the test follows the list below). We chose this over the common Student’s t-test because:
1. We have only 10 runs per algorithm, and permutation tests are more robust at low sample sizes.

2. The permutation test is assumption-free, whereas Student’s t-tests assume that the samples are normally distributed.

3. The permutation test is robust to variance in the samples, so even though error bars can overlap, we can still establish significant differences between the samples. Variance in our results is expected due to the small dataset sizes of the end-tasks.
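A minimal sketch of the two-sided permutation test on the difference in mean scores is given below; the implementation details (pooling, add-one smoothing) are our assumptions rather than a description of the exact code we ran.

```python
# Sketch of a two-sided permutation test comparing two algorithms' per-seed scores.
import numpy as np

def permutation_test(scores_a, scores_b, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([scores_a, scores_b])
    n_a = len(scores_a)
    observed = abs(np.mean(scores_a) - np.mean(scores_b))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)                       # shuffle group assignments
        stat = abs(np.mean(perm[:n_a]) - np.mean(perm[n_a:]))
        count += stat >= observed
    return (count + 1) / (n_perm + 1)                        # add-one smoothing for a valid p-value

# e.g. p = permutation_test(tartan_runs, tapt_runs)  # 10 runs per algorithm
```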
A.3 ALGORITHM FOR META-TARTAN
Algorithm 1: End-task Aware Training via Meta-learning (META-TARTAN)
Require: T∗, Taux: end-task, set of auxiliary pre-training tasks
Require: η, β1, β2: step-size hyper-parameters
Initialize: pre-trained RoBERTa as shared network body θbody; task weightings w∗ = wi = 1/(|Taux| + 1)
Randomly initialize: end-task head φ′; meta head for the end-task φ∗; task head φi for each Ti ∈ Taux
while not done do
    B∗tr ∼ T∗train                                      // Sample a batch from the end-task
    g∗θ, g∗φ ← [∇θ, ∇φ′](LT∗(θ, φ′, B∗tr))               // Get end-task grads
    giθ, giφ ← [∇θ, ∇φi](LTi(θ, φi, Bi))                 // Get task grads. ∀i ∈ [n], Bi ∼ Ti
    φ∗ ← estimate_meta_head(B∗tr, β2, θ, φ∗)             // Learn a new meta head, B∗tr ∼ T∗train
    g∗meta ← ∇θLT∗(θ, φ∗, B∗val)                         // B∗val ∼ T∗val
    w∗ ← w∗ + η cos(g∗meta, g∗θ)                         // Update task weightings
    wi ← wi + η cos(g∗meta, giθ)
    α∗, α1, . . . , α|Taux| = softmax(w∗, w1, . . . , w|Taux|)
    θbody ← θbody − β1 (α∗ g∗θ + Σi αi giθ)              // Update task parameters
    φi ← φi − β2 giφ;  φ′ ← φ′ − β2 g∗φ
end while
Result: θ, φ′
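For readers who prefer code, the following compressed PyTorch sketch renders one step of Algorithm 1. It is our own illustrative rendering, not the released implementation: `body` stands for any module mapping a batch to pooled features (e.g. the shared RoBERTa [CLS] embedding), each `head` maps features to logits, the per-task head updates and the meta-head re-estimation (cf. the estimate_meta_head sketch in Section 4) are assumed to happen elsewhere.

```python
# Illustrative sketch of one META-TARTAN step: cosine-similarity task-weight update + shared-body update.
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def meta_tartan_step(body, heads, meta_head, batches, meta_batch, w, eta=1e-2, beta1=1e-4):
    """heads[0]/batches[0] correspond to the end-task T*; w is a 1-D tensor with w[0] = w*."""
    params = [p for p in body.parameters() if p.requires_grad]

    # Per-task gradients of each training loss w.r.t. the shared body.
    task_grads = []
    for head, (x, y) in zip(heads, batches):
        task_grads.append(flat_grad(F.cross_entropy(head(body(x)), y), params))

    # Meta-gradient: end-task *validation* loss computed through the separate meta head phi*.
    xv, yv = meta_batch
    g_meta = flat_grad(F.cross_entropy(meta_head(body(xv)), yv), params)

    with torch.no_grad():
        # w_i <- w_i + eta * cos(g_meta, g_i); softmax of w gives the mixing weights alpha.
        for i, g in enumerate(task_grads):
            w[i] += eta * F.cosine_similarity(g_meta, g, dim=0)
        alpha = torch.softmax(w, dim=0)

        # theta_body <- theta_body - beta1 * sum_i alpha_i g_i
        combined, offset = sum(a * g for a, g in zip(alpha, task_grads)), 0
        for p in params:
            n = p.numel()
            p -= beta1 * combined[offset:offset + n].view_as(p)
            offset += n
    return w
```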
A.4 VISION EXPERIMENTS
We validate that the gains from end-task aware training are not siloed to learning from text alone. We conduct an experiment comparing end-task aware training on images to its end-task agnostic variant. We use the CIFAR-100 dataset (Krizhevsky et al., 2009), taking the Medium-Sized Mammals superclass (one of the 20 coarse labels) as our main task whilst the other 19 superclasses are used as auxiliary data. Our primary task is thus a 5-way classification task over images of different types of medium-sized mammals, whilst the remaining 95 classes are grouped into a single auxiliary task.
As can be seen from Table 5, being end-task aware improves over task-agnostic pre-training. We find that, again, when our auxiliary task consists solely of domain data and no task data, META-TARTAN performs better than MT-TARTAN (as measured by averaged performance).
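The split used in this experiment can be constructed as sketched below; the exact preprocessing and augmentation are not specified above, so this partition (using the five fine-grained classes that make up the "medium-sized mammals" superclass) is illustrative.

```python
# Illustrative CIFAR-100 partition: 5-way medium-sized-mammals end-task vs. a single 95-class auxiliary task.
from torch.utils.data import Subset
from torchvision import datasets, transforms

MEDIUM_MAMMALS = ["fox", "porcupine", "possum", "raccoon", "skunk"]  # the superclass's fine labels

train = datasets.CIFAR100(root="data", train=True, download=True,
                          transform=transforms.ToTensor())
mammal_ids = {train.class_to_idx[c] for c in MEDIUM_MAMMALS}

end_task_idx = [i for i, y in enumerate(train.targets) if y in mammal_ids]
aux_idx      = [i for i, y in enumerate(train.targets) if y not in mammal_ids]

end_task_set = Subset(train, end_task_idx)   # 5-way end-task T* (labels would be remapped to 0-4)
aux_set      = Subset(train, aux_idx)        # remaining 95 classes grouped as one auxiliary task
print(len(end_task_set), len(aux_set))       # 2500 / 47500 training images
```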
A.5 FULL TAPT TABLE WITH SIGNIFICANCE LEVELS
We repeat Table 1 and provide details about levels of statistical significance.
[Table — columns: Task, TAPT, MT-TARTAN, p-values, META-TARTAN, p-values]
[Table — columns: Task, TAPT, META-TARTAN, p-values]
A.6 FULL DAPT/DAPT+TAPT TABLE
We repeat Table 3 and provide details about levels of statistical significance.
A.7 FAQ
1. What settings are TARTAN algorithms designed for? TARTAN algorithms specialize auxiliary objectives to a particular end-task. This comes at a risk of losing the generic representations afforded by generalist pre-trained models. Thus if a practitioner has a sufficiently important end-task where obtaining improved end-task performance is paramount over generic representations, then TARTAN is a viable option.
2. When do we get computational savings from META-TARTAN? MT-TARTAN does not add any extra overhead compared to pre-train then fine-tune approaches. META-TARTAN however, adds extra overhead per gradient descent step due to computing meta-gradients. However, as shown in Section 5 we are able to get several orders of magnitude improvement in data-efficiency from applying the method. In general,
for the tasks we experimented with, we find that the savings in data-efficiency outweighed the extra per-timestep meta-learning overhead.
3. When should we use META-TARTAN over MT-TARTAN? In +TAPT settings (Tables 1, 3), we observe that META-TARTAN and MT-TARTAN perform similarly. We attribute this to the strength of the TAPT-MLM objective. We were pleasantly surprised that the two methods performed comparably in this setting but, in hindsight, we appreciate the insight that went into designing TAPT-MLM as an objective, which makes it a strong baseline. In other settings with less carefully designed auxiliary objectives and data (which can potentially be detrimental to the end-task) we expect META-TARTAN to perform better. Section 5.3 provides evidence of this.

1. What is the main contribution of the paper regarding pretraining and finetuning paradigms?
2. What are the strengths of the paper, particularly in its formulations and experiments?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper considers the common setup of pretraining and finetuning paradigm and argues that an end-task aware setting, either by multitasking or an additional meta learning that learns the weights between various auxiliary tasks and the end task, is superior than end-task agnostic pretraining.
The paper provides formal formulations of the various set-ups. Propose an interesting strategy for the meta-learning setup (Section 3.3).
Review
Strengths:
Formally formularize different configurations of pre-training, finetuning, end-task-aware multitasking or meta learning setups. These formulations are clean and make the difference between the various setup clear.The paper is well-written and easy to understand.
The set-up is clean and the experiments are done thoroughly. There are several interesting observations. For example, the observation of task weighting strategies over time from the meta-learning setup is interesting. It shows that in the beginning the end-task is included but with a smaller weight while in the later phase the end-task is upweighted but the auxiliary tasks still receives small weights, instead of being all zeros.
Weaknesses:
Intuitively, it is not surprising that the proposed method performs better than other end-task agnostic pretraining. While the paper argues that the end-task aware setting, when done correctly, will save computational power, it does not consider the comparisons when there are multiple end-tasks involved. It is understood that a shared pretrained model has the advantage of quick iterations on various end-tasks. When there are a large number of end-tasks involved, it become daunting to train/pre-train large models for each end-task.
Though I also acknowledge that the end-task aware approach proposed by the paper is a good middle ground when there is a specific end task and medium amount of resources are available, which makes training useful large models more accessible, since the end-task aware method are more data-efficient (resource-efficient.) |
In keeping with previous work (Devlin et al., 2018; Gururangan et al., 2020), we focus on Taux as a set of MLM tasks on varied datasets. In the case of DAPT and our end-task aware variants of it, Taux is an MLM task with data from the domain of the end-task. For TAPT, Taux is an MLM task with data from the end-task itself. DAPT, TAPT and DAPT+TAPT (chained pre-training with DAPT followed by TAPT) will serve as our baseline continued pre-training approaches. We will compare these baselines to their end-task aware variants that use MT-TARTAN and META-TARTAN.
Datasets Our experiments focus on two domains: computer science (CS) papers and biomedical (BIOMED) papers. We follow Gururangan et al. (2020) and build our CS and BIOMED domain data from the S2ORC dataset (Lo et al., 2019). We extract 1.49M full text articles to construct our CS corpus and 2.71M for our BIOMED corpus. Under both domains, our end-tasks are low-resource classification tasks. Using low-resource tasks allows us to explore a setting where pre-training can have a significant impact. Under the CS domain, we consider two tasks: ACL-ARC (Jurgens et al., 2018) and SCIERC (Luan et al., 2018). ACL-ARC is a 6-way citation intent classification task with 1688 labelled training examples. For SCIERC, the task is to classify the relations between entities in scientific articles. This task has 3219 labelled examples as training data. We choose CHEMPROT (Kringelum et al., 2016) as the classification task from the BIOMED domain. This task has 4169 labelled training examples and the goal is to classify chemical-protein interactions. More details of these datasets can be found in Table 2 of Gururangan et al. (2020). Gururangan et al. (2020) evaluate against all 3 tasks and their available code served as a basis on which we built MT-TARTAN and META-TARTAN.
Model Details We use a pre-trained RoBERTabase (Liu et al., 2019) as the shared model base and implement each task as a separate multi-layer perceptron (MLP) head on top of this pre-trained base. As in Devlin et al. (2018), we pass the [CLS] token embedding from RoBERTabase to the MLP for classification.
Training Details For DAPT and TAPT, we download the available pre-trained model bases provided by Gururangan et al. (2020). To train thier corresponding classification heads, we follow the experimental setup described in Appendix B of Gururangan et al. (2020).
Performing end-task aware training introduces a few extra hyper-parameters. We fix the other hyperparameters to those used in Gururangan et al. (2020). MT-TARTAN and META-TARTAN introduce joint training of a classification head for the end-task T ∗. We experiment with batch sizes of 128, 256 and 512 for training this head. We try out learning rates in the set {10−3, 10−4, 10−5} and drop out rates of {0.1, 0.3}. For META-TARTAN since we are now learning the task weights, w, we test out task weight learning rates in {10−1, 5 × 10−2, 3 × 10−2, 10−2}. Note that for all MTTARTAN experiments we use equalized task weights 1|Taux|+1 . A small grid-search over a handful of weight configurations did not yield significant improvement over the uniform task weighting. We use the Adam optimizer (Kingma & Ba, 2014) for all experiments.
As mentioned in section 3.3, we train as separate meta-classification head, φ∗, to estimate the validation meta-gradients. To estimate φ∗, we use batch sizes of {16, 32} samples from T ∗’s train set. We regularize the meta-head with l2 weight decay and set the decay constant to 0.1. We use a learning rate 10−3 to learn the meta-head. We stop training φ∗ after 10 gradient descent steps.
5 RESULTS AND DISCUSSION
In this section, we will discuss the results of comparing our models against DAPT and TAPT baselines.2 Broadly, we demonstrate the effectiveness of end-task awareness as improving both performance and data-efficiency.
2Our results are slightly different from those presented in Table 5 of Gururangan et al. (2020) in terms of absolute values but the trends observed there still hold here. We attribute these differences to (1) minor implementation differences, and (2) averaging performance over ten seeds instead of five as used in the original paper in order to more strongly establish statistical significance. We observe slightly lower performance on ACL-ARC and SCIERC tasks due to these changes and higher performance on CHEMPROT.
Domain Task RoBERTa TAPT MT-TARTAN META-TARTAN
5.1 END-TASK AWARENESS IMPROVES OVER TASK-AGNOSTIC PRE-TRAINING
Table 1 compares TAPT to its end-task aware variants. As in Gururangan et al. (2020), we observe that performing task adaptive pre-training improves upon just fine-tuning RoBERTa. However, note that introducing the end-task by multi-tasking with the TAPT MLM objective leads to a significant improvement in performance. This improvement is consistent across the 3 tasks we evaluate against. We find that both MT-TARTAN and META-TARTAN achieve similar results in this setting.
5.2 END-TASK AWARENESS IMPROVES DATA-EFFICIENCY
Gururangan et al. (2020) train DAPT on large amounts of in-domain data to achieve results competitive with TAPT. They use 7.55 billion tokens for the BIOMED domain and 8.10 billion for the CS domain. This is on average over 104× the size of the training data of our end-tasks of interest. The large amount of data required to train a competitive DAPT model represents a significant computational burden to the every-day practitioner. This begets the question: are such large amounts of auxiliary data necessary for achieving good downstream performance? To answer this, we train DAPT and its TARTAN version on variable amounts of data for both SCIERC and ACL-ARC tasks.
TARTAN is more data-efficient than DAPT In Figure 2, we focus on training on a small fraction of available domain data n = {100, 101}× |Train| for the DAPT auxiliary task. Full domain data is n′ ≈ 104×|Train|. This relatively low auxiliary data regime represents a realistic setting that is akin to those encountered by everyday practitioners who are likely to be computationally constrained. As can be seen in Figure 2, on the ACL-ARC task, META-TARTAN matches the performance of DAPT when the sizes of the domain data and end-task data are of the same order (100). At this data size, META-TARTAN supersedes DAPT on the SCIERC task. When trained on 10×more auxiliary data, META-TARTAN supersedes DAPT in performance on both tasks. On the ACL-ARC task, METATARTAN achieves 71.194.88, which is close to DAPT’s performance of 72.493.28 using more than 103× auxiliary data. These results indicate that end-task awareness can improve data-efficiency and in this case, improvements are on the order of 1000×.
Domain Task DAPT DAPT+TAPT MT-TARTAN META-TARTAN
TARTAN is more data-efficient than DAPT+TAPT Table 2 compares DAPT and DAPT+TAPT (DAPT followed by TAPT) to *-TARTAN which multi-task DAPT, TAPT and the end-task. MTTARTAN and META-TARTAN significantly outperform DAPT and DAPT+TAPT in 2 of the tasks whilst giving higher average performance in the ACL-ARC task. We thus conclude that end-task awareness allows us to get a greater performance boost out of the same amount of data.
We explore the data efficiency of TARTAN methods even further by comparing the relatively data-poor versions of MT-TARTAN and META-TARTAN above (n = 10 × |Train|) to the DAPT and DAPT+TAPT variants trained on all the available domain data (n′ ≈ 104 × |Train|). We can see from Table 3 that for the CS domain, our end-task aware variants come close to (ACL-ARC) and even supersede (SCIERC) the end-task agnostic variants though trained with ≈ 1000× less data. For BIOMED domain (CHEMPROT task), increasing the amount of data drastically improves the performance of end-task agnostic variants compared to MT-TARTAN and META-TARTAN trained on much less data.
Zhang et al. (2020) show that different tasks exhibit sigmoid-like curves in terms of how much pretraining data is required to achieve good results before performance levels off. We contextualize Tables 2 and 3 within said work and posit that the CHEMPROT task intrinsically requires much more data (compared to our other tasks) before performance begins to improve appreciably.
5.3 META-TARTAN MORE EFFECTIVELY UTILIZES OUT-OF-DISTRIBUTION AUXILIARY
DATA OVER MT-TARTAN
TARTAN ACL-ARC SCIERC CHEMPROT MT 69.270.96 81.530.99 80.263.79 META 71.194.88 82.081.19 82.310.75
heterogeneous domain data whose impact on the end-task performance is less clear. Notice from Table 4 that when required to rely solely on domain data for auxiliary tasking, META-TARTAN improves performance over MT-TARTAN. We attribute META-TARTAN’s improvement over MTTARTAN to its ability to more flexibly adapt to incoming data of variable utility to the end-task.
5.4 TASK WEIGHTING STRATEGIES DISCOVERED BY META-LEARNING
To illustrate the importance of the separate classification head φ∗ for computing the meta-signal for the task weights (described in Section 3.3), we run META-TARTAN experiments with ACL-ARC as the end-task and DAPT as the auxiliary task. We compare using either a separate (φ∗) or the same (φ′) classification head for calculating the meta-gradient. Figure 3 plots the task weightings learned in each setting during training. We can clearly see that using a separate head counteracts the pathological solution of down-weighting all tasks that are not T ∗ and as a result, improves performance: a delta of 1.7 F1 points in this case. The strategy discovered by META-TARTAN presents an interesting contrast to classical pre-training: whilst the initial phase of classical pre-training involves
solely the auxiliary task, early in training, META-TARTAN up-weights the auxiliary task but does not fully zero out the end-task. Later in training, we see leveling off of weights instead of railing the end-task to 1 as in classical pre-training.
Next, we plot a similar graph for using both DAPT and TAPT across our three tasks in Figure 4. From the figure, it is apparent that META-TARTAN discovers similar task-weighting strategies across different end-tasks. This suggests that the MLM objective and META-TARTAN’s strategy for learning task weights are generic enough to induce similar behaviours across tasks. In general, DAPT is significantly up-weighted compared to the end-task and TAPT. Note that the TAPT + ACL-ARC task weights (Figure 4) has the same approximate trajectory as ACL-ARC task weight in Figure 3. It seems important to assign high weight to the task data (Figure 3) but not necessarily all of it needs to go to the actual task loss (Figure 4). We hypothesize that the diversity in the domain data counteracts overfitting to the end-task data and results in DAPT being up-weighted.
6 RELATED WORK
Multi-task learning can be traced back to seminal work by Caruana (1995), Caruana (1997), and has since been the subject of a flourishing literature, recent surveys of which can be found in Ruder (2017) or Zhang & Yang (2021). In NLP, while initial work from Collobert & Weston (2008) already showed the benefits of multi-task learning, it has only recently become a central topic in the field, with the advent of multi-task benchmarks (Wang et al., 2018b; McCann et al., 2018).
Pre-training is where a machine learning model is first trained on a generic, data-rich task before being fine-tuned on an end-task. In NLP this practice dates back to the use of pre-trained word embeddings (Turian et al., 2010; Mikolov et al., 2013) and later pre-trained encoders (Kiros et al., 2015; Dai & Le, 2015). Peters et al. (2018) and Howard & Ruder (2018) heralded a renaissance of pre-training before BERT (Devlin et al., 2018) and its many offshoots (Liu et al., 2019; Yang et al., 2019; Lewis et al., 2019) cemented it as the de facto standard for modern NLP.
Meta-learning dates back to early work from Schmidhuber (1995); Thrun (1998). More relevant to our work is gradient-based meta-learning for solving bi-level optimization problems, first popularized by Finn et al. (2017) and followup work (Nichol et al., 2018; Rajeswaran et al., 2019) for few-shot learning. This method has transferred to a variety of applications such as architecture search (Liu et al., 2018) and model poisoning (Kurita et al., 2020).
7 CONCLUSION
We have advocated for a paradigm shift in the way we approach pre-training. We have motivated making pre-training more end-task aware when the end-task is known in advance. Our work introduced two novel end-task aware training algorithms: End-task Aware Training via Multitasking (MT-TARTAN) and End-task Aware Training via Meta-learning (META-TARTAN). In Section 5, we demonstrated the ability of our proposed algorithms to improve performance and dataefficiency over their end-task agnostic counterparts.
This work suggests several promising directions for future work. Instead of learning coarse task level weights, can further performance improvements be achieved via finer-grained example level weighting as in Wang et al. (2020)? Can meta-learning algorithms like META-TARTAN enable more effective utilization of previously discarded (Aroca-Ouellette & Rudzicz, 2020) pre-training auxiliary tasks like Next Sentence Prediction (NSP) (Devlin et al., 2018)? We hope this work spurs conversation around these questions and many more.
8 ACKNOWLEDGEMENTS
This work was supported in part by DSO National Laboratories, an ENS-CFM Data Science Chair, DARPA FA875017C0141, the National Science Foundation grants IIS1705121, IIS1838017, IIS2046613 and IIS-2112471, an Amazon Web Services Award, a Facebook Faculty Research Award, funding from Booz Allen Hamilton Inc., and a Block Center Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of these funding agencies.
9 ETHICS STATEMENT
Our work introduces new algorithms but leverages pre-existing datasets and models. Overall, this work inherits some of the risk of original work upon which it is implemented. Algorithms for continued training such as TAPT and DAPT necessitate per-task training of unsupervised objectives which result in corresponding green-house emissions due to energy consumption (Strubell et al., 2019). However, as shown in Sections 3 and 5, our new compute-efficient algorithms greatly increase the data efficiency of these algorithms, reducing these harms as well as the various harms associated with labor for data-collection (Jo & Gebru, 2020). Also, since our work is set in the context of pre-existing datasets and models (Section 4), we recognize that any ethical issues that have been revealed in these (such as bias (Bender et al., 2021) or privacy leakage (Carlini et al., 2021)) may also propagate to models trained using our work, and mitigation strategies such as Schick et al. (2021); Liang et al. (2021) may be necessary. Finally, there is a potential risk in META-TARTAN that leveraging a validation set for defining the meta-objective could amplifying bias that exists in this data split, although this is done indirectly through task weighting and hence we believe that this risk is small.
10 REPRODUCIBILITY STATEMENT
We pledge to release the source-code for this project to improve the ease of reproducibility of our results by the NLP and machine learning communities. In Section 4, we have specified details about datasets, training regimes and models to allow anyone who wishes to reproduce our results without our original source code to do so. Our discussion of the algorithmic and evaluation details can be found in Appendices A.1, A.3 and A.2. As we noted in 4, we build off of Gururangan et al. (2020)’s implementations which can be found at https://github.com/allenai/dont-stop-pretraining.
A APPENDIX
A.1 JUSTIFYING THE INTRODUCTION OF A META-HEAD
Proof. To arrive at Equation 7 we start with the closed-form solution for ∇_{w_i} L^{val}_{T*}(θ*) and then introduce approximations in order to produce Equation 7. First, note that:

\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} = \left( \nabla_{\theta} L^{val}_{T^*}(\theta^*(w)) \right)^{T} \left( \nabla_{w_i} \theta^*(w) \right) \quad \text{[Chain rule]} \qquad (8)

To get ∇_{w_i} θ*(w) we invoke the Cauchy Implicit Function Theorem (IFT) as with Lorraine et al. (2020); Navon et al. (2020); Liao et al. (2018):

\nabla_{w_i} \theta^*(w) = \left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} \left[ \nabla_{w_i} \nabla_{\theta} L_{total}(\theta^*(w)) \right] \quad \text{[IFT]}

= \left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} \left[ \nabla_{w_i} \nabla_{\theta} \left( w^{*} L_{T^*}(\theta^*(w)) + \sum_{T_i \in \mathcal{T}_{aux}} w_i L_{T_i}(\theta^*(w)) \right) \right]

= \left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} \left[ \nabla_{\theta} L_{T_i}(\theta^*(w)) \right] \quad \text{[Only terms with } w_i \text{ survive]}

Bringing it all together, we get:

\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} = \left( \nabla_{\theta} L^{val}_{T^*}(\theta^*(w)) \right)^{T} \left( \left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} \left[ \nabla_{\theta} L_{T_i}(\theta^*(w)) \right] \right) \qquad (9)
Computing ∇wiLvalT∗ (θ∗) from Equation 9 is computationally unwieldy since we would not only have to optimize θ to convergence for every step of wi but we would also have to invert the Hessian of a typically large model. Our middle ground between Equations 9 and 6 (Equation 7) makes use of the following approximations:
• We approximate the inverse Hessian with the identity. This approximation is not new; we follow previous work like Lorraine et al. (2020) (Table 3), who explore the use of this approximation because of computational efficiency.

\left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} = \lim_{i \to \infty} \sum_{j=0}^{i} \left( I - \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right)^{j} \approx I
We are assuming the contribution of terms with j > 0 is negligible.
• Instead of training the whole network to convergence, at each time-step, we fix the body of the network and train a special head φ∗ to convergence on a small batch of end-task training data. We then use [θbody;φ∗] as a proxy for θ∗. This is a computationally feasible workaround to training all of θ to convergence to get a single step gradient estimate. Especially in the continued pre-training setting where a pre-trained generalist model like BERT is used as θbody, this approximation is reasonable. To our knowledge, we are the first to suggest this approximation.
\nabla_{\theta} L^{val}_{T^*}(\theta^*) \rightarrow \nabla_{\theta} L^{val}_{T^*}([\theta_{body}; \phi^*])
• Above, we have approximated θ∗ = [θ_body; φ∗]. Since φ∗ is only used to evaluate end-task (T∗) validation data, it means θ remains unchanged with respect to the training data for task Ti. Thus ∇_θ L_Ti([θ_body; (φ∗, . . . , φ^i)]) = ∇_θ L_Ti([θ_body; φ^i]) = ∇_θ L_Ti(θ)
Bringing it all together, we get Equation 7, repeated here:

\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} \approx \left( \nabla_{\theta} L_{T_i} \right)^{T} \left( \nabla_{\theta} L^{val}_{T^*}([\theta_{body}; \phi^*]_t) \right)
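To make this final approximation concrete, the snippet below shows how the Equation 7 proxy reduces to a dot product between flattened gradient vectors. It is a minimal sketch, assuming the caller has already flattened the relevant gradients; it is not the authors' released implementation.

```python
import torch

def eq7_weight_grad(task_grad: torch.Tensor, val_grad_meta_head: torch.Tensor) -> torch.Tensor:
    """Approximate dL_val/dw_i as <grad_theta L_Ti, grad_theta L_val([theta_body; phi*])>.

    Both inputs are gradients w.r.t. the shared body parameters, flattened into 1-D tensors:
    `task_grad` from task Ti's training loss, `val_grad_meta_head` from the end-task
    validation loss computed through the converged meta head phi*.
    """
    return torch.dot(task_grad, val_grad_meta_head)
```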
A.2 CALCULATING P-VALUES FROM PERMUTATION TEST
We used the permutation test (Good, 2005; Dror et al., 2018) to test for statistical significance. For each test, we generate 10000 permutations to calculate the significance level. This is sufficient to converge to a stable p-value without being a computational burden. A minimal sketch of the procedure is given after the list below. We chose the permutation test over the more common Student t-test because:
1. We have only 10 runs per algorithm and permutation tests are more robust at low sample size
2. Permutation test is assumption free. Student t-tests assume that the samples are normally distributed
3. Permutation test is robust to variance in the samples, so even though error-bars can overlap, we still establish significant differences in the samples. Variance in our results is expected due to small dataset sizes of end-tasks.
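The sketch referenced above follows. It is a paired, two-sided permutation test over per-seed scores; the function name and the sign-flipping scheme are our own illustrative choices rather than the exact script used for the paper.

```python
import numpy as np

def paired_permutation_test(scores_a, scores_b, n_permutations=10000, seed=0):
    """Two-sided paired permutation test on per-run score differences."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    observed = abs(diffs.mean())
    hits = 0
    for _ in range(n_permutations):
        signs = rng.choice([-1.0, 1.0], size=diffs.shape[0])  # randomly swap which system "won" each run
        if abs((signs * diffs).mean()) >= observed:
            hits += 1
    return hits / n_permutations  # estimated p-value
```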
A.3 ALGORITHM FOR META-TARTAN
Algorithm 1: End-task Aware Training via Meta-learning (META-TARTAN)
Require: T∗, Taux: end-task, set of auxiliary pre-training tasks
Require: η, β1, β2: step-size hyper-parameters
Initialize: pre-trained RoBERTa as shared network body θ_body; task weightings w∗, w_i = 1 / (|Taux| + 1)
Randomly initialize: end-task head φ′; meta head for the end-task φ∗; a task head φ^i for each Ti ∈ Taux
while not done do
    B∗_tr ∼ T∗_train                                   // sample a batch from the end-task
    g∗_θ, g∗_φ ← [∇θ, ∇φ′] ( L_T∗(θ, φ′, B∗_tr) )       // get end-task grads
    g^i_θ, g^i_φ ← [∇θ, ∇φ^i] ( L_Ti(θ, φ^i, B_i) )     // get task grads, ∀i ∈ [n], B_i ∼ Ti
    // Learn a new meta head
    φ∗ ← estimate_meta_head(B∗_tr, β2, θ, φ∗)           // B∗_tr ∼ T∗_train
    g∗_meta ← ∇θ L_T∗(θ, φ∗, B∗_val)                    // B∗_val ∼ T∗_val
    // Update task weightings
    w∗ ← w∗ + η cos(g∗_meta, g∗_θ)
    w_i ← w_i + η cos(g∗_meta, g^i_θ)
    // Update task parameters
    α∗, α_1, . . . , α_|Taux| = softmax(w∗, w_1, . . . , w_|Taux|)
    Update θ_body ← θ_body − β1 ( α∗ g∗_θ + Σ_i α_i g^i_θ )
    Update φ^i ← φ^i − β2 g^i_φ and φ′ ← φ′ − β2 g∗_φ
end
Result: θ, φ′
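A rough Python rendering of the weight-update portion of Algorithm 1 is shown below. The gradient-flattening helper and the function names are illustrative assumptions; batching, the task heads, and the meta-head fitting routine are assumed to exist in the surrounding training code.

```python
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into one vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([(g if g is not None else torch.zeros_like(p)).reshape(-1)
                      for g, p in zip(grads, params)])

def update_task_weights(weights, end_task_loss, aux_losses, meta_val_loss, body_params, eta):
    """One META-TARTAN weight update: w <- w + eta * cos(g_meta, g_task)."""
    g_meta = flat_grad(meta_val_loss, body_params)   # end-task val loss through the meta head phi*
    g_star = flat_grad(end_task_loss, body_params)
    weights["end_task"] = weights["end_task"] + eta * F.cosine_similarity(g_meta, g_star, dim=0).item()
    for name, loss in aux_losses.items():
        g_i = flat_grad(loss, body_params)
        weights[name] = weights[name] + eta * F.cosine_similarity(g_meta, g_i, dim=0).item()
    return weights  # the body update then mixes task gradients with softmax(weights)
```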
A.4 VISION EXPERIMENTS
We validate that the gains from end-task aware training are not siloed to only learning from text. We conduct an experiment comparing end-task aware training on images to its end-task agnostic variant. We use the Cifar100 dataset (Krizhevsky et al., 2009). We use the Medium-Sized Mammals superclass (one of the 20 coarse labels) as our main task whilst the other 19 superclasses are used as auxiliary data. Our primary task is thus a 5-way classification task over images of different types of medium-sized mammals, whilst the remaining 95 classes are grouped into a single auxiliary task.
As can be seen from Table 5, being end-task aware improves over task-agnostic pre-training. We find that, again, when our auxiliary task consists solely of domain data and no task data, META-TARTAN performs better than MT-TARTAN (as measured by average performance).
A.5 FULL TAPT TABLE WITH SIGNIFICANCE LEVELS
We repeat Table 1 and provide details about levels of statistical significance.
Task TAPT MT-TARTAN p−values META-TARTAN p−values
Task TAPT META-TARTAN p−values
A.6 FULL DAPT/DAPT+TAPT TABLE
We repeat Table 3 and provide details about levels of statistical significance.
A.7 FAQ
1. What settings are TARTAN algorithms designed for? TARTAN algorithms specialize auxiliary objectives to a particular end-task. This comes at a risk of losing the generic representations afforded by generalist pre-trained models. Thus if a practitioner has a sufficiently important end-task where obtaining improved end-task performance is paramount over generic representations, then TARTAN is a viable option.
2. When do we get computational savings from META-TARTAN? MT-TARTAN does not add any extra overhead compared to pre-train then fine-tune approaches. META-TARTAN however, adds extra overhead per gradient descent step due to computing meta-gradients. However, as shown in Section 5 we are able to get several orders of magnitude improvement in data-efficiency from applying the method. In general,
for the tasks we experimented with, we find that the savings in data-efficiency superseded the extra per-timestep meta-learning overhead.
3. When should we use META-TARTAN over MT-TARTAN? In +TAPT settings (Tables 1, 3), we observe that META-TARTAN and MT-TARTAN perform similarly. We attribute this to the strength of the TAPT-MLM objective. We were pleasantly surprised that the two methods performed comparably in this setting but in hindsight, we appreciate the insight that went into designing TAPT-MLM as an objective, which makes it a strong baseline. In other settings with less carefully designed auxiliary objectives and data (which can potentially be detrimental to the end-task) we expect META-TARTAN to perform better. Section 5.3 provides evidence of this.
2. What are the strengths of the proposed method, particularly in its ability to adapt weights during training?
3. What are the weaknesses of the paper, especially regarding its limited setting and mixed results in experiments?
4. How does the reviewer assess the novelty and effectiveness of the proposed method compared to other multi-task learning works?
5. What are some minor concerns and suggestions for improvement regarding the comparisons with other methods and the extension to generation tasks? | Summary Of The Paper
Review | Summary Of The Paper
The author argues that direct training on both pre-training task and fine-tuning task would lead to better end-task performance. The key is to incorporate the end task objective function to the pre-training stage.
Review
The paper focuses on the continued pre-training setting and it proposes a multi-task end-task aware training method (MT-TARTAN) and a meta-learning variant to achieve better performance than the pre-training + fine-tuning paradigm.
The idea to compare co-training with pre-training-then-fine-tuning is interesting and a promising direction. The proposed method is well motivated to adapt weights during training for the auxiliary tasks and the domain task. The analysis about the tasks weight is interesting in figure 3 and 4, which shows some guidance on how to choose training tasks and dynamically changing the weights during MTL.
There’re a few minor concerns about the paper:
Lacking comparisons with other multi-task learning (MTL) work about weight selection. One of the main contributions is to use meta-learning for choosing the weights of each pre-training task. There's a line of work regarding weight selection for MTL. How does meta-learning compare to these methods in terms of performance, training speed, etc.?
[1] Guo, Michelle et al. “Dynamic Task Prioritization for Multitask Learning.” ECCV (2018).
[2] Kendall, Alex et al. “Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics.” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): 7482-7491.
The setting studied here is kind of limited. Only focusing on low-resource classification tasks. The experiments cover three datasets and show mixed results. The META-TARTAN doesn’t show much improvement without OOD auxiliary data.
According to Table 1, the meta-learning version seems only slightly better on two tasks while worse on the other for TAPT. Is there any explanation here?
It’s more convincing to add more fine-tuning datasets and analyze the effect of dataset size and final performance for the proposed method.
One merit about DAPT is that for similar tasks belonging to the same domain, you only need to train DAPT once and then do TAPT. On the other hand, the proposed method always needs to mix in pre-training data to fine-tuning tasks. Given this, TARTAN also introduces overhead and it’s not 100% true that DAPT is more data efficient given that you might only need to go through it once. Not to mention many domains actually have plenty of unlabeled corpus.
A few more questions and comments:
What is the number of auxiliary tasks? It seems that for DAPT, TAPT and DAPT+TAPT, there's only one auxiliary task: the MLM on either the domain corpus or the target corpus? If that's the case, then it seems to be too few tasks to be worth selecting from.
How hard is it to extend the proposed method to generation tasks instead of discriminative tasks?
It's not until the experimental section that I realized the paper focuses on continued pre-training rather than pre-training from scratch. I guess it requires some revision to make that clearer at the beginning of the paper.
ICLR | Title
Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative
Abstract
In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks. However, widely used methods for leveraging auxiliary data like pre-training and its continuedpretraining variant are end-task agnostic: they rarely, if ever, exploit knowledge of the target task. We study replacing end-task agnostic continued training of pretrained language models with end-task aware training of said models. We argue that for sufficiently important end-tasks, the benefits of leveraging auxiliary data in a task-aware fashion can justify forgoing the traditional approach of obtaining generic, end-task agnostic representations as with (continued) pre-training. On three different low-resource NLP tasks from two domains, we demonstrate that multi-tasking the end-task and auxiliary objectives results in significantly better downstream task performance than the widely-used task-agnostic continued pre-training paradigm of Gururangan et al. (2020). We next introduce an online meta-learning algorithm that learns a set of multi-task weights to better balance among our multiple auxiliary objectives, achieving further improvements on endtask performance and data efficiency.
1 INTRODUCTION
The increasingly popular pre-training paradigm (Dai & Le, 2015; Devlin et al., 2018; Gururangan et al., 2020) involves first training a generalist model on copious amounts of easy-to-obtain data, e.g. raw text data in NLP, and then using this model to initialize training on a wide swath of downstream tasks. Generalist models like BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and GPT-3 (Brown et al., 2020) have a strong appeal; a few institutions with significant resources incur the cost of training these large models whilst the rest of the research community enjoys a significant performance improvement at minimal computational overhead. However, the advantages of initializing a downstream task from a generalist model are not guaranteed. Previous work has shown that the benefits of pre-training depend heavily on the degree of domain overlap between the end-task data and the massive, heterogenous data on
which the generalist model was trained (Beltagy et al., 2019; Gururangan et al., 2020).
Notably, Gururangan et al. (2020) have demonstrated the benefits of continued pre-training of generalist models using data that is similar to that of the end-task. Their approach is formalized into two classes: Domain Adaptive Pre-training (DAPT) and Task Adaptive Pretraining (TAPT) where further stages of pre-training of generalist models are conducted on domain- and task-specific data,
respectively. DAPT and TAPT exploit the fact that we often know the end-task beforehand, and so we can make specific choices about our pre-training regimen to improve end-task performance.
However, in both pre-training for generalist models and continued pre-training, the training procedure itself does not explicitly incorporate the end-task objective function. Because of this, practitioners have to be careful with their choice of auxiliary tasks, the order in which they are trained on, and the early-stopping criteria for each pre-training stage so as to actually achieve good downstream end-task performance (Gururangan et al., 2020; Dery et al., 2021). In the absence of principled criteria to make these difficult design choices, it is common to instead resort to the computationally demanding heuristic of pre-training on as much data as possible for as long as possible.
In this paper, we raise the following question: “In settings where we have a particular end-task in mind, should we be pre-training at all?”. We define pre-training as any form of task-agnostic training that a model undergoes before it is finally fine-tuned on the end-task of interest. As a first milestone in addressing the larger question posed above, we explore the ubiquitous continued pretraining setting (Gururangan et al., 2020; Aghajanyan et al., 2021). Specifically, our paper questions the wisdom of having disjoint further pre-training then fine-tuning steps on a generalist model. In response, we advocate for an alternative approach in which we directly introduce the end-task objective of interest into the learning process. This results in a suite of end-task aware methods called TARTAN (end-Task AwaRe TrAiniNg). Our formulations incorporate both unsupervised auxiliary objectives traditionally used in NLP pre-training (such as masked language modeling as in Devlin et al. (2018)) and the end-task objective, followed by an optional fine-tuning step on the end-task. We motivate TARTAN experimentally in the continued pre-training setting and based on this, we make the following contributions to the literature on leveraging auxiliary tasks and data:
• In lieu of standard end-task agnostic continued pre-training, we suggest introducing the end-task objective into the training process via multi-task learning (Caruana, 1997; Ruder, 2017). We call this procedure Multi-Tasking end-Task AwaRe TrAiniNg (MT-TARTAN) (Section 3.1). MT-TARTAN is a simple yet surprisingly effective alternative to task-agnostic pre-training. In Section 5, we demonstrate that MT-TARTAN significantly improves performance and data efficiency over Gururangan et al. (2020)'s results. It also obviates the need for fickle hyper-parameter tuning through direct optimization of validation performance.
• To allow more fine-grained control of the end-task over the auxiliary tasks, in Section 3.2, we present an online meta-learning algorithm that learns adaptive multi-task weights with the aim of improving final end-task performance. Our META-learning end-Task AwaRe TrAiniNg (META-TARTAN) allows us to robustly modulate between multiple objectives and further improves performance over MT-TARTAN.
• A naive implementation of META-TARTAN based on first-order meta-learning analysis results in a sub-optimal algorithm that ignores all tasks except the end-task. We trace this problem to the use of a single model training head for computing both the end-task training loss and meta-objective (end-task validation loss). To guard against this pathological solution, we introduce a separate model head for computing the meta-objective. In Section 3.3, we justify this simple-to-implement fix and validate its practical efficacy in Section 5.
Our results suggest that TARTAN may be an attractive alternative to the continued pre-training paradigm, and further research into the place of pre-training in end-task aware settings is warranted.
2 FORMALIZING PRE-TRAINING AND CONTINUED PRE-TRAINING
Consider a dataset D = {(xi, yi)i∈[m]} consisting of m labelled examples. We define a task as an objective function and dataset pair: T = {L(·), D}. Mθ is a model parameterized by θ. The objective function L(yi,Mθ(xi)) evaluates how well a model prediction Mθ(xi) fits the true label yi, such as cross-entropy loss in the case of classification. Note that the task dataset, D, is typically decomposed into the sets (Dtrain, Dval, Dtest). Dtrain is the set of examples used for model training whilst Dtest is used for final task evaluation. The validation set, Dval, is typically used for model selection but it is also frequently used in meta-learning to define the meta-objective – Lval. Given a specific end-task T ∗, our aim is to improve performance on T ∗ (as measured by the model loss on DtestT∗ ) by leveraging auxiliary tasks Taux = {T1, . . . , Tn}. Note that we do not particularly
care about the performance of any of the tasks in Taux. We are willing to sacrifice performance on Taux if it improves performance on T ∗.
From the perspective of model architecture, there are several ways to leverage Taux. We focus on the simple but widely-used parameter sharing setting. Here, all tasks share a model body θbody but each task Ti has its own head φi for prediction. We denote the head belonging to T ∗ as φ′. Thus θ = [ θbody; ( φ1, . . . , φn, φ′ )] and θbody is reusable across new tasks.
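As a concrete (hypothetical) illustration of this parameter-sharing setup, the class below keeps one shared body and a dictionary of per-task heads; the names and the simple MLP body are assumptions made for the sketch, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SharedBodyMultiHead(nn.Module):
    """A shared body theta_body with one linear head per task (illustrative)."""

    def __init__(self, input_dim, hidden_dim, classes_per_task):
        super().__init__()
        # Stand-in body; in the paper's setting this would be a pre-trained encoder.
        self.body = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One head per task, e.g. {"end_task": 6, "aux_mlm": vocab_size, ...}.
        self.heads = nn.ModuleDict({t: nn.Linear(hidden_dim, c) for t, c in classes_per_task.items()})

    def forward(self, x, task):
        return self.heads[task](self.body(x))
```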
2.1 PRE-TRAINING
Pre-training is when a model is first trained on Taux before performing a final fine-tuning phase on T∗. The motivation behind pre-training is that learning Taux first hopefully captures relevant information that can be utilized during training of T∗. This desire has led to the proliferation of generalist pre-trained models like BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019) and GPT-3 (Brown et al., 2020) that have been trained on copious amounts of data. Generalist models have been widely successful at improving downstream task performance when used as initialization.
We can formalize the pre-training procedure as follows:
\theta_0 = \operatorname{argmin}_{\theta} \left( \sum_{T_i \in \mathcal{T}_{aux}} L_{T_i}(\theta) \right) \qquad (1)
In Equation 1, we seek a point θ0 that achieves minimal loss on the tasks in Taux. We hope that θ0 will be a good starting point for gradient descent on T ∗. Let g(θ0) represent the set of end-points of stochastic gradient descent on an initialization, θ0. Stochastic gradient descent from the same initialization can produce different end-points due to differences in hyper-parameters like learning rate, batch size and order, as well as regularization strength. We can write the fine-tuning phase as:
\theta^* = \operatorname{argmin}_{\theta \in g(\theta_0)} L_{T^*}(\theta) \qquad (2)
Note that pre-training is end-task agnostic: the pre-training Equation 1 occurs entirely before training on the end-task Equation 2, and does not explicitly incorporate the end-task objective, T ∗. Since there is no awareness of the end-task during pre-training it is important to carefully choose Taux so that pre-training actually results in improved performance on T ∗ (Wang et al., 2018a). For text data, past work has found left-to-right language modeling (Peters et al., 2017) and masked language modeling (MLM) (Devlin et al., 2018) to be good choices to include in Taux.
2.2 CONTINUED PRE-TRAINING
Recent work (Beltagy et al., 2019; Gururangan et al., 2020; Lee et al., 2020) showed that downstream performance on T ∗ can be improved by further adapting generalist models via continued pre-training on a more relevant set of auxiliary tasks. This is equivalent to sequentially performing multiple steps of Equation 1, with different Taux, before finally performing Equation 2 on T ∗. Domain and Task Adaptive Pre-training Gururangan et al. (2020) present Domain Adaptive PreTraining (DAPT) and Task Adaptive Pre-Training (TAPT) as methods for continued pre-training. During DAPT, a generalist model is further pre-trained on an unsupervised objective with large amounts of data from the same domain as the end-task. TAPT also pre-trains with the same unsupervised objective as DAPT, but on the actual dataset of the end-task. Gururangan et al. (2020) find that performance can be further improved by chaining objectives, DAPT first, followed by TAPT.
Though TAPT and DAPT do not directly incorporate the end-task objective during training, it still indirectly informs both the choice of pre-training data and the order in which the pre-training tasks are trained on. Below, we explore stronger versions of this influence.
3 END-TASK AWARE TRAINING (TARTAN)
In this section, we argue for the end-task to be added directly into the training process to create explicit interactions between T ∗ and Taux.
3.1 END-TASK AWARE TRAINING VIA MULTI-TASKING (MT-TARTAN)
We propose to directly incorporate knowledge of the end-task by multi-tasking T ∗ together with Taux, before optionally fine-tuning on T ∗ exclusively. To this end, we introduce a set of task weights w = (w∗, w1, · · · , w|Taux|) satisfying w∗ + ∑ i wi = 1, to modulate between the different losses. Our new formulation is:
\theta_0 = \operatorname{argmin}_{\theta} L_{total}(\theta, w) = \operatorname{argmin}_{\theta} \left( w^{*} L_{T^*}(\theta) + \sum_i w_i L_{T_i}(\theta) \right) \qquad (3)
Here, Equation 3 replaces Equation 1 and can be followed by the optional fine-tuning stage of Equation 2. Note that this formulation fixes the tasks weights w throughout the training process. We call this formulation End-task Aware Training via Multi-tasking (MT-TARTAN) because we introduce the end-task directly into the training procedure, and do so by multi-tasking it with Taux.
MT-TARTAN allows us to prioritize performance on T∗ in several ways. First, we can weight the end-task higher than all the other auxiliary tasks. Also, during training, we can monitor LT∗ on the end-task validation set and early stop when it plateaus, even if the auxiliary tasks have not yet converged. This is not possible during standard pre-training because we do not train T∗ and so it performs at random before we actually start fine-tuning. Early stopping on T∗ can represent significant computational savings over end-task agnostic pre-training when the savings in data-efficiency supersede the extra overhead of end-task aware gradient descent steps.
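A minimal sketch of one MT-TARTAN update under Equation 3 is shown below: the losses are combined with fixed weights and a single optimizer step is taken on the shared parameters. The helper name and the dictionary-based interface are assumptions made for illustration, not the authors' implementation.

```python
def mt_tartan_step(optimizer, losses, weights):
    """One update on the fixed convex combination of end-task and auxiliary losses.

    losses  : dict task_name -> scalar loss tensor computed on the current batches
    weights : dict task_name -> float, assumed to sum to 1 and to include the end-task
    """
    total = sum(weights[name] * loss for name, loss in losses.items())
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return float(total.detach())
```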
3.2 END-TASK AWARE TRAINING VIA META-LEARNING (META-TARTAN)
MT-TARTAN, DAPT and TAPT all share the same drawback: they implicitly assume that the auxiliary tasks have static importance to the end-task over the lifetime of its training, either by being end-task agnostic (DAPT and TAPT) or by having static task weights (MT-TARTAN). With MT-TARTAN, an additional drawback noted by Wang et al. (2019); Yu et al. (2020) is that multi-tasking can negatively impact task performance compared to isolated training. These shortcomings motivate the formulation of an adaptive algorithm that can mitigate the negative influence of some tasks whilst responding to the changing relevance of auxiliary tasks over the lifetime of end-task training.
As they stand, the pre-training equation pair (Equations 1, 2) and the MT-TARTAN pair (Equations 2, 3) are decoupled. The inner-level variables of the pre-training phase do not depend on the outerlevel variables of the fine-tuning phase. Thus the equation pairs are typically solved sequentially. We propose to tightly couple Equations 2 and 3 by formulating jointly learning w and θ0 as a bi-level optimization problem. A bi-level formulation allows us to leverage meta-learning (Schmidhuber, 1995) techniques to learn adaptive task weights which capture variable auxiliary task importance whilst mitigating the contribution of harmful tasks. We propose a meta-learning algorithm in the mold of Model Agnostic Meta-Learning (MAML) (Finn et al., 2017) to learn task weights. As a bi-level problem, this can be formulated as :
\theta^*, w^* = \operatorname{argmin}_{\theta \in g(\theta_0),\, w} L_{T^*}(\theta) \qquad (4)
where
\theta_0 = \operatorname{argmin}_{\theta} L_{total}(\theta, w) = \operatorname{argmin}_{\theta} \left( w^{*} L_{T^*}(\theta) + \sum_{T_i \in \mathcal{T}_{aux}} w_i L_{T_i}(\theta) \right) \qquad (5)
We want to jointly learn w, with θ0, such that taking a gradient descent step modulated by w leads to improvement in end-task generalization. We use performance on the end-task validation set (DvalT∗ ) as a meta-objective to train w. Performance on DvalT∗ serves as a stand-in for end-task generalization performance whilst also naturally capturing the asymmetrical importance of T ∗.
Our joint descent algorithm proceeds as follows. At each timestep t, we hold the task weights fixed and update θt based on ∇θLtotal(θt,w). We then proceed to update w via gradient descent on the end-task validation loss at θt+1. For this, we derive an approximation for∇wLvalT∗ (θt+1,w) below:
L^{val}_{T^*}(\theta_{t+1}(w)) = L^{val}_{T^*}\left( \theta_t - \beta \left( w^{*} \nabla L_{T^*} + \sum_i w_i \nabla L_{T_i} \right) \right)
\approx L^{val}_{T^*}(\theta_t) - \beta \left( w^{*} \nabla L_{T^*} + \sum_i w_i \nabla L_{T_i} \right)^{T} \nabla L^{val}_{T^*}(\theta_t)
We can take the gradient of the above first-order approximation w.r.t an individual weight wi. This tells us how to update wi to improve the meta-objective.
\frac{\partial L^{val}_{T^*}(\theta_{t+1}(w))}{\partial w_i} \approx -\beta \left( \nabla L_{T_i} \right)^{T} \left( \nabla L^{val}_{T^*}(\theta_t) \right) = -\beta \left( \nabla L_{T_i} \right)^{T} \left( \nabla L^{val}_{T^*}([\theta_{body}, \phi']_t) \right) \qquad (6)
In Equation 6, we explicitly specify [θ_body, φ′]_t because computing losses on T∗ depends on only these parameters. L_Ti depends solely on [θ_body, φ^i]_t but we leave this out to avoid notation clutter.
Our analysis above is similar to that of Lin et al. (2019) with one key difference: we learn a weighting for the main task w∗ too. This ability to directly modulate T ∗ allows us to capture the fact that at certain stages in training, auxiliary tasks may have greater impact on end-task generalization than the end-task’s own training data. This choice also allows us to control for over-fitting and the influence of bad (mislabelled or noisy) training data.
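In code, Equation 6 is just a negative, β-scaled dot product between each task's training gradient and the end-task validation gradient; the sketch below assumes pre-flattened gradient vectors and uses hypothetical names.

```python
import torch

def task_weight_grads(task_grads, end_task_val_grad, beta):
    """Approximate dL_val/dw_i per Equation 6 as -beta * <grad L_Ti, grad L_val(theta_t)>.

    task_grads        : dict task_name -> flattened gradient of that task's training loss w.r.t. theta
    end_task_val_grad : flattened gradient of the end-task validation loss w.r.t. theta
    """
    return {name: -beta * torch.dot(g, end_task_val_grad) for name, g in task_grads.items()}
```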
3.3 INTRODUCING A SEPARATE CLASSIFICATION HEAD FOR META-LEARNING
Observe that from Equation 6, updates for w ≠ w∗ involve gradients computed from different model heads φ^i and φ′, whilst for w∗ we are taking the dot product of gradients from the same end-task head φ′. As we will show empirically in Section 5.4, computing weight updates this way creates a strong bias towards the primary task, causing w∗ to rail towards 1 whilst the other weights dampen to 0, which may be sub-optimal in the long run.
Intuitively, this short-horizon (greedy) (Wu et al., 2018) behavior makes sense: the quickest way to make short-term progress (improve LvalT∗ (θt+1)) is to descend solely on T ∗. More formally, the greedy approach arises because we derive ∇wiLvalT∗ (θt+1) in Equation 6 as a proxy for the gradient at θ∗, the outer-loop end-point in Equation 4. Variations of this substitution are common in the meta-learning literature (Finn et al., 2017; Liu et al., 2018; Nichol et al., 2018) because it is computationally infeasible to train a model to convergence every time we wish to compute∇wiLvalT∗ (θ∗).
To remedy the greedy solution, instead of estimating ∇θLT∗ and ∇θLvalT∗ from the same classification head (Equation 6), we introduce a special head φ∗ for computing the meta-objective. Specifically, instead of trying to compute θ∗, we approximate it by fixing the body of the network θbody and training the randomly initialized head φ∗ to convergence on a subset of the end-task training data. We do this every time we wish to estimate∇wiLvalT∗ (θ∗). Introducing φ∗ eliminates the strong positive bias on w∗ and enables us to compute a better proxy for the meta-gradient at θ∗:
\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} \approx \left( \nabla_{\theta} L_{T_i} \right)^{T} \left( \nabla_{\theta} L^{val}_{T^*}([\theta_{body}; \phi^*]_t) \right) \qquad (7)
Equation 7 represents a simple-to-implement alternative to Equation 6. We provide a more detailed justification for Equation 7 in Appendix A.1. In Section 5.4, we empirically validate that the transition from Equation 6 to 7 improves performance whilst mitigating pathological solutions. Our approach of creating φ∗ for approximating the meta-objective (down-stream validation performance) is inspired by Metz et al. (2018), who use a similar technique to construct a meta-objective for evaluating the quality of unsupervised representations.
Please see Algorithm 1 in Appendix A.3 for details about META-TARTAN.
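One way to realize the meta-head approximation described above is sketched below: freeze the body, fit a fresh head on a small batch of end-task training data for a handful of steps, and use it to evaluate the validation meta-objective. The function name and hyper-parameter defaults are illustrative assumptions.

```python
import torch
import torch.nn as nn

def estimate_meta_head(features, labels, hidden_dim, num_classes, steps=10, lr=1e-3, weight_decay=0.1):
    """Fit a fresh end-task head phi* while the body stays frozen.

    `features` are body representations (e.g. [CLS] embeddings) computed under torch.no_grad(),
    so only the head's parameters receive gradient updates here.
    """
    meta_head = nn.Linear(hidden_dim, num_classes)
    opt = torch.optim.Adam(meta_head.parameters(), lr=lr, weight_decay=weight_decay)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(meta_head(features), labels)
        loss.backward()
        opt.step()
    return meta_head
```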
4 EXPERIMENTAL SETUP
Setting1 Though our algorithms and methodology can be directly applied to both continued pretraining (Section 2.2) and pre-training from scratch (Section 2.1) of generalist models, we focus on the former scenario. This is because the continued pre-training setting is more common amongst everyday practitioners as it is less computationally demanding. It thus lends itself more easily to exploration under a realistic computational budget. In Appendix A.4, we show that end-task aware training from scratch is viable by studying a simple computer vision setting. Concurrent work by Yao et al. (2021) shows that from-scratch end-task aware training for NLP problems is viable.
1Code will be released at https://github.com/ldery/TARTAN
In keeping with previous work (Devlin et al., 2018; Gururangan et al., 2020), we focus on Taux as a set of MLM tasks on varied datasets. In the case of DAPT and our end-task aware variants of it, Taux is an MLM task with data from the domain of the end-task. For TAPT, Taux is an MLM task with data from the end-task itself. DAPT, TAPT and DAPT+TAPT (chained pre-training with DAPT followed by TAPT) will serve as our baseline continued pre-training approaches. We will compare these baselines to their end-task aware variants that use MT-TARTAN and META-TARTAN.
Datasets Our experiments focus on two domains: computer science (CS) papers and biomedical (BIOMED) papers. We follow Gururangan et al. (2020) and build our CS and BIOMED domain data from the S2ORC dataset (Lo et al., 2019). We extract 1.49M full text articles to construct our CS corpus and 2.71M for our BIOMED corpus. Under both domains, our end-tasks are low-resource classification tasks. Using low-resource tasks allows us to explore a setting where pre-training can have a significant impact. Under the CS domain, we consider two tasks: ACL-ARC (Jurgens et al., 2018) and SCIERC (Luan et al., 2018). ACL-ARC is a 6-way citation intent classification task with 1688 labelled training examples. For SCIERC, the task is to classify the relations between entities in scientific articles. This task has 3219 labelled examples as training data. We choose CHEMPROT (Kringelum et al., 2016) as the classification task from the BIOMED domain. This task has 4169 labelled training examples and the goal is to classify chemical-protein interactions. More details of these datasets can be found in Table 2 of Gururangan et al. (2020). Gururangan et al. (2020) evaluate against all 3 tasks and their available code served as a basis on which we built MT-TARTAN and META-TARTAN.
Model Details We use a pre-trained RoBERTabase (Liu et al., 2019) as the shared model base and implement each task as a separate multi-layer perceptron (MLP) head on top of this pre-trained base. As in Devlin et al. (2018), we pass the [CLS] token embedding from RoBERTabase to the MLP for classification.
Training Details For DAPT and TAPT, we download the available pre-trained model bases provided by Gururangan et al. (2020). To train their corresponding classification heads, we follow the experimental setup described in Appendix B of Gururangan et al. (2020).
Performing end-task aware training introduces a few extra hyper-parameters. We fix the other hyper-parameters to those used in Gururangan et al. (2020). MT-TARTAN and META-TARTAN introduce joint training of a classification head for the end-task T∗. We experiment with batch sizes of 128, 256 and 512 for training this head. We try out learning rates in the set {10^-3, 10^-4, 10^-5} and dropout rates of {0.1, 0.3}. For META-TARTAN, since we are now learning the task weights w, we test out task weight learning rates in {10^-1, 5 × 10^-2, 3 × 10^-2, 10^-2}. Note that for all MT-TARTAN experiments we use equalized task weights of 1 / (|Taux| + 1). A small grid-search over a handful of weight configurations did not yield significant improvement over the uniform task weighting. We use the Adam optimizer (Kingma & Ba, 2014) for all experiments.
As mentioned in Section 3.3, we train a separate meta-classification head, φ∗, to estimate the validation meta-gradients. To estimate φ∗, we use batch sizes of {16, 32} samples from T∗'s train set. We regularize the meta-head with l2 weight decay and set the decay constant to 0.1. We use a learning rate of 10^-3 to learn the meta-head. We stop training φ∗ after 10 gradient descent steps.
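For reference, the search space above can be summarized as a small configuration grid; the key names are our own shorthand and not identifiers from the released code.

```python
tartan_search_space = {
    "end_task_head_batch_size": [128, 256, 512],
    "end_task_head_lr": [1e-3, 1e-4, 1e-5],
    "dropout": [0.1, 0.3],
    "task_weight_lr": [1e-1, 5e-2, 3e-2, 1e-2],  # META-TARTAN only
    "meta_head_batch_size": [16, 32],
    "meta_head_lr": 1e-3,
    "meta_head_weight_decay": 0.1,
    "meta_head_steps": 10,
    "optimizer": "Adam",
}
```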
5 RESULTS AND DISCUSSION
In this section, we will discuss the results of comparing our models against DAPT and TAPT baselines.2 Broadly, we demonstrate the effectiveness of end-task awareness as improving both performance and data-efficiency.
2Our results are slightly different from those presented in Table 5 of Gururangan et al. (2020) in terms of absolute values but the trends observed there still hold here. We attribute these differences to (1) minor implementation differences, and (2) averaging performance over ten seeds instead of five as used in the original paper in order to more strongly establish statistical significance. We observe slightly lower performance on ACL-ARC and SCIERC tasks due to these changes and higher performance on CHEMPROT.
Domain Task RoBERTa TAPT MT-TARTAN META-TARTAN
5.1 END-TASK AWARENESS IMPROVES OVER TASK-AGNOSTIC PRE-TRAINING
Table 1 compares TAPT to its end-task aware variants. As in Gururangan et al. (2020), we observe that performing task adaptive pre-training improves upon just fine-tuning RoBERTa. However, note that introducing the end-task by multi-tasking with the TAPT MLM objective leads to a significant improvement in performance. This improvement is consistent across the 3 tasks we evaluate against. We find that both MT-TARTAN and META-TARTAN achieve similar results in this setting.
5.2 END-TASK AWARENESS IMPROVES DATA-EFFICIENCY
Gururangan et al. (2020) train DAPT on large amounts of in-domain data to achieve results competitive with TAPT. They use 7.55 billion tokens for the BIOMED domain and 8.10 billion for the CS domain. This is on average over 10^4× the size of the training data of our end-tasks of interest. The large amount of data required to train a competitive DAPT model represents a significant computational burden to the every-day practitioner. This begets the question: are such large amounts of auxiliary data necessary for achieving good downstream performance? To answer this, we train DAPT and its TARTAN version on variable amounts of data for both SCIERC and ACL-ARC tasks.
TARTAN is more data-efficient than DAPT In Figure 2, we focus on training on a small fraction of available domain data, n = {10^0, 10^1} × |Train|, for the DAPT auxiliary task. Full domain data is n′ ≈ 10^4 × |Train|. This relatively low auxiliary data regime represents a realistic setting that is akin to those encountered by everyday practitioners who are likely to be computationally constrained. As can be seen in Figure 2, on the ACL-ARC task, META-TARTAN matches the performance of DAPT when the sizes of the domain data and end-task data are of the same order (10^0). At this data size, META-TARTAN supersedes DAPT on the SCIERC task. When trained on 10× more auxiliary data, META-TARTAN supersedes DAPT in performance on both tasks. On the ACL-ARC task, META-TARTAN achieves 71.19 (4.88), which is close to DAPT's performance of 72.49 (3.28) using more than 10^3× the auxiliary data. These results indicate that end-task awareness can improve data-efficiency and in this case, improvements are on the order of 1000×.
Domain Task DAPT DAPT+TAPT MT-TARTAN META-TARTAN
TARTAN is more data-efficient than DAPT+TAPT Table 2 compares DAPT and DAPT+TAPT (DAPT followed by TAPT) to *-TARTAN, which multi-tasks DAPT, TAPT and the end-task. MT-TARTAN and META-TARTAN significantly outperform DAPT and DAPT+TAPT in 2 of the tasks whilst giving higher average performance in the ACL-ARC task. We thus conclude that end-task awareness allows us to get a greater performance boost out of the same amount of data.
We explore the data efficiency of TARTAN methods even further by comparing the relatively data-poor versions of MT-TARTAN and META-TARTAN above (n = 10 × |Train|) to the DAPT and DAPT+TAPT variants trained on all the available domain data (n′ ≈ 10^4 × |Train|). We can see from Table 3 that for the CS domain, our end-task aware variants come close to (ACL-ARC) and even supersede (SCIERC) the end-task agnostic variants though trained with ≈ 1000× less data. For the BIOMED domain (CHEMPROT task), increasing the amount of data drastically improves the performance of end-task agnostic variants compared to MT-TARTAN and META-TARTAN trained on much less data.
Zhang et al. (2020) show that different tasks exhibit sigmoid-like curves in terms of how much pretraining data is required to achieve good results before performance levels off. We contextualize Tables 2 and 3 within said work and posit that the CHEMPROT task intrinsically requires much more data (compared to our other tasks) before performance begins to improve appreciably.
5.3 META-TARTAN MORE EFFECTIVELY UTILIZES OUT-OF-DISTRIBUTION AUXILIARY
DATA OVER MT-TARTAN
TARTAN   | ACL-ARC      | SCIERC       | CHEMPROT
MT       | 69.27 (0.96) | 81.53 (0.99) | 80.26 (3.79)
META     | 71.19 (4.88) | 82.08 (1.19) | 82.31 (0.75)
heterogeneous domain data whose impact on the end-task performance is less clear. Notice from Table 4 that when required to rely solely on domain data for auxiliary tasking, META-TARTAN improves performance over MT-TARTAN. We attribute META-TARTAN’s improvement over MTTARTAN to its ability to more flexibly adapt to incoming data of variable utility to the end-task.
5.4 TASK WEIGHTING STRATEGIES DISCOVERED BY META-LEARNING
To illustrate the importance of the separate classification head φ∗ for computing the meta-signal for the task weights (described in Section 3.3), we run META-TARTAN experiments with ACL-ARC as the end-task and DAPT as the auxiliary task. We compare using either a separate (φ∗) or the same (φ′) classification head for calculating the meta-gradient. Figure 3 plots the task weightings learned in each setting during training. We can clearly see that using a separate head counteracts the pathological solution of down-weighting all tasks that are not T ∗ and as a result, improves performance: a delta of 1.7 F1 points in this case. The strategy discovered by META-TARTAN presents an interesting contrast to classical pre-training: whilst the initial phase of classical pre-training involves
solely the auxiliary task, early in training, META-TARTAN up-weights the auxiliary task but does not fully zero out the end-task. Later in training, we see leveling off of weights instead of railing the end-task to 1 as in classical pre-training.
Next, we plot a similar graph for using both DAPT and TAPT across our three tasks in Figure 4. From the figure, it is apparent that META-TARTAN discovers similar task-weighting strategies across different end-tasks. This suggests that the MLM objective and META-TARTAN’s strategy for learning task weights are generic enough to induce similar behaviours across tasks. In general, DAPT is significantly up-weighted compared to the end-task and TAPT. Note that the TAPT + ACL-ARC task weights (Figure 4) has the same approximate trajectory as ACL-ARC task weight in Figure 3. It seems important to assign high weight to the task data (Figure 3) but not necessarily all of it needs to go to the actual task loss (Figure 4). We hypothesize that the diversity in the domain data counteracts overfitting to the end-task data and results in DAPT being up-weighted.
6 RELATED WORK
Multi-task learning can be traced back to seminal work by Caruana (1995), Caruana (1997), and has since been the subject of a flourishing literature, recent surveys of which can be found in Ruder (2017) or Zhang & Yang (2021). In NLP, while initial work from Collobert & Weston (2008) already showed the benefits of multi-task learning, it has only recently become a central topic in the field, with the advent of multi-task benchmarks (Wang et al., 2018b; McCann et al., 2018).
Pre-training is where a machine learning model is first trained on a generic, data-rich task before being fine-tuned on an end-task. In NLP this practice dates back to the use of pre-trained word embeddings (Turian et al., 2010; Mikolov et al., 2013) and later pre-trained encoders (Kiros et al., 2015; Dai & Le, 2015). Peters et al. (2018) and Howard & Ruder (2018) heralded a renaissance of pre-training before BERT (Devlin et al., 2018) and its many offshoots (Liu et al., 2019; Yang et al., 2019; Lewis et al., 2019) cemented it as the de facto standard for modern NLP.
Meta-learning dates back to early work from Schmidhuber (1995); Thrun (1998). More relevant to our work is gradient-based meta-learning for solving bi-level optimization problems, first popularized by Finn et al. (2017) and followup work (Nichol et al., 2018; Rajeswaran et al., 2019) for few-shot learning. This method has transferred to a variety of applications such as architecture search (Liu et al., 2018) and model poisoning (Kurita et al., 2020).
7 CONCLUSION
We have advocated for a paradigm shift in the way we approach pre-training. We have motivated making pre-training more end-task aware when the end-task is known in advance. Our work introduced two novel end-task aware training algorithms: End-task Aware Training via Multitasking (MT-TARTAN) and End-task Aware Training via Meta-learning (META-TARTAN). In Section 5, we demonstrated the ability of our proposed algorithms to improve performance and data-efficiency over their end-task agnostic counterparts.
This work suggests several promising directions for future work. Instead of learning coarse task level weights, can further performance improvements be achieved via finer-grained example level weighting as in Wang et al. (2020)? Can meta-learning algorithms like META-TARTAN enable more effective utilization of previously discarded (Aroca-Ouellette & Rudzicz, 2020) pre-training auxiliary tasks like Next Sentence Prediction (NSP) (Devlin et al., 2018)? We hope this work spurs conversation around these questions and many more.
8 ACKNOWLEDGEMENTS
This work was supported in part by DSO National Laboratories, an ENS-CFM Data Science Chair, DARPA FA875017C0141, the National Science Foundation grants IIS1705121, IIS1838017, IIS2046613 and IIS-2112471, an Amazon Web Services Award, a Facebook Faculty Research Award, funding from Booz Allen Hamilton Inc., and a Block Center Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of these funding agencies.
9 ETHICS STATEMENT
Our work introduces new algorithms but leverages pre-existing datasets and models. Overall, this work inherits some of the risks of the original work upon which it is implemented. Algorithms for continued training such as TAPT and DAPT necessitate per-task training of unsupervised objectives which results in corresponding green-house emissions due to energy consumption (Strubell et al., 2019). However, as shown in Sections 3 and 5, our new compute-efficient algorithms greatly increase the data efficiency of these algorithms, reducing these harms as well as the various harms associated with labor for data-collection (Jo & Gebru, 2020). Also, since our work is set in the context of pre-existing datasets and models (Section 4), we recognize that any ethical issues that have been revealed in these (such as bias (Bender et al., 2021) or privacy leakage (Carlini et al., 2021)) may also propagate to models trained using our work, and mitigation strategies such as Schick et al. (2021); Liang et al. (2021) may be necessary. Finally, there is a potential risk in META-TARTAN that leveraging a validation set for defining the meta-objective could amplify bias that exists in this data split, although this is done indirectly through task weighting and hence we believe that this risk is small.
10 REPRODUCIBILITY STATEMENT
We pledge to release the source-code for this project to improve the ease of reproducibility of our results by the NLP and machine learning communities. In Section 4, we have specified details about datasets, training regimes and models to allow anyone who wishes to reproduce our results without our original source code to do so. Our discussion of the algorithmic and evaluation details can be found in Appendices A.1, A.3 and A.2. As we noted in 4, we build off of Gururangan et al. (2020)’s implementations which can be found at https://github.com/allenai/dont-stop-pretraining.
A APPENDIX
A.1 JUSTIFYING THE INTRODUCTION OF A META-HEAD
Proof. To arrive at Equation 7 we start with the closed-form solution for ∇_{w_i} L^{val}_{T*}(θ*) and then introduce approximations in order to produce Equation 7. First, note that:

\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} = \left( \nabla_{\theta} L^{val}_{T^*}(\theta^*(w)) \right)^{T} \left( \nabla_{w_i} \theta^*(w) \right) \quad \text{[Chain rule]} \qquad (8)

To get ∇_{w_i} θ*(w) we invoke the Cauchy Implicit Function Theorem (IFT) as with Lorraine et al. (2020); Navon et al. (2020); Liao et al. (2018):

\nabla_{w_i} \theta^*(w) = \left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} \left[ \nabla_{w_i} \nabla_{\theta} L_{total}(\theta^*(w)) \right] \quad \text{[IFT]}

= \left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} \left[ \nabla_{w_i} \nabla_{\theta} \left( w^{*} L_{T^*}(\theta^*(w)) + \sum_{T_i \in \mathcal{T}_{aux}} w_i L_{T_i}(\theta^*(w)) \right) \right]

= \left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} \left[ \nabla_{\theta} L_{T_i}(\theta^*(w)) \right] \quad \text{[Only terms with } w_i \text{ survive]}

Bringing it all together, we get:

\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} = \left( \nabla_{\theta} L^{val}_{T^*}(\theta^*(w)) \right)^{T} \left( \left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} \left[ \nabla_{\theta} L_{T_i}(\theta^*(w)) \right] \right) \qquad (9)
Computing ∇wiLvalT∗ (θ∗) from Equation 9 is computationally unwieldy since we would not only have to optimize θ to convergence for every step of wi but we would also have to invert the Hessian of a typically large model. Our middle ground between Equations 9 and 6 (Equation 7) makes use of the following approximations:
• We approximate the inverse Hessian with the identity. This approximation is not new; we follow previous work like Lorraine et al. (2020) (Table 3), who explore the use of this approximation because of computational efficiency.

\left[ \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right]^{-1} = \lim_{i \to \infty} \sum_{j=0}^{i} \left( I - \nabla^{2}_{\theta} L_{total}(\theta^*(w)) \right)^{j} \approx I
We are assuming the contribution of terms with j > 0 is negligible.
• Instead of training the whole network to convergence, at each time-step, we fix the body of the network and train a special head φ∗ to convergence on a small batch of end-task training data. We then use [θbody;φ∗] as a proxy for θ∗. This is a computationally feasible workaround to training all of θ to convergence to get a single step gradient estimate. Especially in the continued pre-training setting where a pre-trained generalist model like BERT is used as θbody, this approximation is reasonable. To our knowledge, we are the first to suggest this approximation.
\nabla_{\theta} L^{val}_{T^*}(\theta^*) \rightarrow \nabla_{\theta} L^{val}_{T^*}([\theta_{body}; \phi^*])
• Above, we have approximated θ∗ = [θ_body; φ∗]. Since φ∗ is only used to evaluate end-task (T∗) validation data, it means θ remains unchanged with respect to the training data for task Ti. Thus ∇_θ L_Ti([θ_body; (φ∗, . . . , φ^i)]) = ∇_θ L_Ti([θ_body; φ^i]) = ∇_θ L_Ti(θ)
Bringing it all together, we get Equation 7, repeated here:

\frac{\partial L^{val}_{T^*}(\theta^*(w))}{\partial w_i} \approx \left( \nabla_{\theta} L_{T_i} \right)^{T} \left( \nabla_{\theta} L^{val}_{T^*}([\theta_{body}; \phi^*]_t) \right)
A.2 CALCULATING P-VALUES FROM PERMUTATION TEST
We used the permutation test (Good, 2005; Dror et al., 2018) to test for statistical significance. For each test, we generate 10000 permutations to calculate significance level. This is sufficient to converge to a stable p-value without being a computational burden. We chose this over the common student t-test because :
1. We have only 10 runs per algorithm and permutation tests are more robust at low sample size
2. Permutation test is assumption free. Student t-tests assume that the samples are normally distributed
3. Permutation test is robust to variance in the samples, so even though error-bars can overlap, we still establish significant differences in the samples. Variance in our results is expected due to small dataset sizes of end-tasks.
A.3 ALGORITHM FOR META-TARTAN
Algorithm 1: End-task Aware Training via Meta-learning (META-TARTAN)
Require: T∗, Taux: end-task, set of auxiliary pre-training tasks
Require: η, β1, β2: step-size hyper-parameters
Initialize: pre-trained RoBERTa as shared network body θ_body; task weightings w∗, w_i = 1 / (|Taux| + 1)
Randomly initialize: end-task head φ′; meta head for the end-task φ∗; a task head φ^i for each Ti ∈ Taux
while not done do
    B∗_tr ∼ T∗_train                                   // sample a batch from the end-task
    g∗_θ, g∗_φ ← [∇θ, ∇φ′] ( L_T∗(θ, φ′, B∗_tr) )       // get end-task grads
    g^i_θ, g^i_φ ← [∇θ, ∇φ^i] ( L_Ti(θ, φ^i, B_i) )     // get task grads, ∀i ∈ [n], B_i ∼ Ti
    // Learn a new meta head
    φ∗ ← estimate_meta_head(B∗_tr, β2, θ, φ∗)           // B∗_tr ∼ T∗_train
    g∗_meta ← ∇θ L_T∗(θ, φ∗, B∗_val)                    // B∗_val ∼ T∗_val
    // Update task weightings
    w∗ ← w∗ + η cos(g∗_meta, g∗_θ)
    w_i ← w_i + η cos(g∗_meta, g^i_θ)
    // Update task parameters
    α∗, α_1, . . . , α_|Taux| = softmax(w∗, w_1, . . . , w_|Taux|)
    Update θ_body ← θ_body − β1 ( α∗ g∗_θ + Σ_i α_i g^i_θ )
    Update φ^i ← φ^i − β2 g^i_φ and φ′ ← φ′ − β2 g∗_φ
end
Result: θ, φ′
A.4 VISION EXPERIMENTS
We validate that the gains from end-task aware training are not siloed to only learning from text. We conduct an experiment comparing end-task aware training on images to its end-task agnostic variant. We use the Cifar100 dataset (Krizhevsky et al., 2009). We use the Medium-Sized Mammals superclass (one of the 20 coarse labels) as our main task whilst the other 19 superclasses are used as auxiliary data. Our primary task is thus a 5-way classification task over images of different types of medium-sized mammals, whilst the remaining 95 classes are grouped into a single auxiliary task.
As can be seen from Table 5, being end-task aware improves over task-agnostic pre-training. We find that, again, when our auxiliary task consists solely of domain data and no task data, META-TARTAN performs better than MT-TARTAN (as measured by average performance).
A.5 FULL TAPT TABLE WITH SIGNIFICANCE LEVELS
We repeat Table 1 and provide details about levels of statistical significance.
Task TAPT MT-TARTAN p−values META-TARTAN p−values
Task TAPT META-TARTAN p−values
A.6 FULL DAPT/DAPT+TAPT TABLE
We repeat Table 3 and provide details about levels of statistical significance.
A.7 FAQ
1. What settings are TARTAN algorithms designed for? TARTAN algorithms specialize auxiliary objectives to a particular end-task. This comes at a risk of losing the generic representations afforded by generalist pre-trained models. Thus if a practitioner has a sufficiently important end-task where obtaining improved end-task performance is paramount over generic representations, then TARTAN is a viable option.
2. When do we get computational savings from META-TARTAN? MT-TARTAN does not add any extra overhead compared to pre-train then fine-tune approaches. META-TARTAN however, adds extra overhead per gradient descent step due to computing meta-gradients. However, as shown in Section 5 we are able to get several orders of magnitude improvement in data-efficiency from applying the method. In general,
for the tasks we experimented with, we find that the savings in data-efficiency superseded the extra per-timestep meta-learning overhead.
3. When should we use META-TARTAN over MT-TARTAN? In +TAPT settings (Tables 1, 3), we observe that META-TARTAN and MT-TARTAN perform similarly. We attribute this to the strength of the TAPT-MLM objective. We were pleasantly surprised that the two methods performed comparably in this setting but in hindsight, we appreciate the insight that went into designing TAPT-MLM as an objective, which makes it a strong baseline. In other settings with less carefully designed auxiliary objectives and data (which can potentially be detrimental to the end-task) we expect META-TARTAN to perform better. Section 5.3 provides evidence of this.
2. How does the method address the pathological solution problem in meta-learning?
3. Can you provide a figure illustrating the data, parameters, and optimization steps for META-TARTAN?
4. What is the practical utility of META-TARTAN compared to MT-TARTAN?
5. How does the computation cost comparison between TARTAN and other methods consider the difference in computational expense per step?
6. Is involving the validation set in computing and optimizing the meta-objective a fair practice?
7. Are the results in the paper inclusive of an optional fine-tuning step on the end task?
8. Why is DAPT-MLM gradually down-weighted in Figure 3 but up-weighted in Figure 4? | Summary Of The Paper
Review | Summary Of The Paper
In the paper the authors propose TARTAN, methods to enable end-task aware pre-training. MT-TARTAN simply combines the pre-training objectives and the end-task objective as multi-task learning. META-TARTAN learns a set of weights for the pre-training objective and the end-task objective, using meta-learning. MT-TARTAN and META-TARTAN shows improved performance and data efficiency in a set of three low-resource text classification tasks.
Review
Strengths:
The authors advocate for / introduce end-task aware (continue) pre-training, which is a new and practical problem setting for NLP practitioners.
The authors overcome the pathological solution problem in their META-TARTAN method by introducing a separate classification head φ∗. Given that meta-learning is known to be unstable and have optimization challenges, the authors' observation and solution are helpful.
Improved performance (with significance test) and data efficiency compared to DAPT and TAPT proposed in Gururangan et al. 2020.
Weakness:
The description of the META-TARTAN method is not clear enough.
A figure illustrating the data, parameters and optimization steps for META-TARTAN would be very helpful (e.g., illustrating the relation of θ_body, φ_1, φ_2, ..., φ′, how the meta-objective is computed using weights w and parameters θ, where does the separate classification head come in, ...).
I get to understand the method better (e.g., whether φ* is re-initialized at every time stamp t, whether φ′ is still being optimized when φ* appears) only after seeing Algorithm 1 in the appendix. If space allows, please enrich Sec 3.2-3.3 or move the algorithm to the main text.
Though the authors put a lot of effort into META-TARTAN, it appears that META-TARTAN is comparable with MT-TARTAN in most cases. META-TARTAN seems to be better in the DAPT-only setting (Sec 5.3); however, since we're doing task-aware pre-training, TAPT can be easily achieved in this case. I wonder what the practical utility of META-TARTAN is.
The paper needs a more thorough discussion of computation costs. Computational savings are claimed to be one major advantage of TARTAN. However, the comparison is made according to training iterations/steps, while one step in META-TARTAN is much more computationally expensive than one step in DAPT or TAPT. Please take this into consideration when discussing computation savings.
Questions and further discussion:
I'm not sure if involving the validation set D^val_{T*} in computing and optimizing the meta-objective is a fair practice. DAPT/TAPT only access it for model selection or early stopping, while META-TARTAN can use it to update meta-parameters w and indirectly influence the training of model parameters θ. Therefore META-TARTAN has more advantage in terms of data usage; this should be discussed in the paper.
The authors mentioned "an optional fine-tuning step" on the end task in the introduction. Do the results in the paper include this step or not?
I wonder why in Figure 3 DAPT-MLM is gradually down-weighted, but in Figure 4 DAPT-MLM is up-weighted. These two observations seem contradictory. What does Figure 3 look like if we have more iterations?
Thanks to the authors for their hard work!
ICLR | Title
Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative
Abstract
In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks. However, widely used methods for leveraging auxiliary data like pre-training and its continued pre-training variant are end-task agnostic: they rarely, if ever, exploit knowledge of the target task. We study replacing end-task agnostic continued training of pretrained language models with end-task aware training of said models. We argue that for sufficiently important end-tasks, the benefits of leveraging auxiliary data in a task-aware fashion can justify forgoing the traditional approach of obtaining generic, end-task agnostic representations as with (continued) pre-training. On three different low-resource NLP tasks from two domains, we demonstrate that multi-tasking the end-task and auxiliary objectives results in significantly better downstream task performance than the widely-used task-agnostic continued pre-training paradigm of Gururangan et al. (2020). We next introduce an online meta-learning algorithm that learns a set of multi-task weights to better balance among our multiple auxiliary objectives, achieving further improvements on end-task performance and data efficiency.
1 INTRODUCTION
The increasingly popular pre-training paradigm (Dai & Le, 2015; Devlin et al., 2018; Gururangan et al., 2020) involves first training a generalist model on copious amounts of easy-to-obtain data, e.g. raw text data in NLP, and then using this model to initialize training on a wide swath of downstream tasks. Generalist models like BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and GPT-3 (Brown et al., 2020) have a strong appeal; a few institutions with significant resources incur the cost of training these large models whilst the rest of the research community enjoys a significant performance improvement at minimal computational overhead. However, the advantages of initializing a downstream task from a generalist model are not guaranteed. Previous work has shown that the benefits of pre-training depend heavily on the degree of domain overlap between the end-task data and the massive, heterogeneous data on
which the generalist model was trained (Beltagy et al., 2019; Gururangan et al., 2020).
Notably, Gururangan et al. (2020) have demonstrated the benefits of continued pre-training of generalist models using data that is similar to that of the end-task. Their approach is formalized into two classes: Domain Adaptive Pre-training (DAPT) and Task Adaptive Pretraining (TAPT) where further stages of pre-training of generalist models are conducted on domain- and task-specific data,
respectively. DAPT and TAPT exploit the fact that we often know the end-task beforehand, and so we can make specific choices about our pre-training regimen to improve end-task performance.
However, in both pre-training for generalist models and continued pre-training, the training procedure itself does not explicitly incorporate the end-task objective function. Because of this, practitioners have to be careful with their choice of auxiliary tasks, the order in which they are trained on, and the early-stopping criteria for each pre-training stage so as to actually achieve good downstream end-task performance (Gururangan et al., 2020; Dery et al., 2021). In the absence of principled criteria to make these difficult design choices, it is common to instead resort to the computationally demanding heuristic of pre-training on as much data as possible for as long as possible.
In this paper, we raise the following question: “In settings where we have a particular end-task in mind, should we be pre-training at all?”. We define pre-training as any form of task-agnostic training that a model undergoes before it is finally fine-tuned on the end-task of interest. As a first milestone in addressing the larger question posed above, we explore the ubiquitous continued pretraining setting (Gururangan et al., 2020; Aghajanyan et al., 2021). Specifically, our paper questions the wisdom of having disjoint further pre-training then fine-tuning steps on a generalist model. In response, we advocate for an alternative approach in which we directly introduce the end-task objective of interest into the learning process. This results in a suite of end-task aware methods called TARTAN (end-Task AwaRe TrAiniNg). Our formulations incorporate both unsupervised auxiliary objectives traditionally used in NLP pre-training (such as masked language modeling as in Devlin et al. (2018)) and the end-task objective, followed by an optional fine-tuning step on the end-task. We motivate TARTAN experimentally in the continued pre-training setting and based on this, we make the following contributions to the literature on leveraging auxiliary tasks and data:
• In lieu of standard end-task agnostic continued pre-training, we suggest introducing the end-task objective into the training process via multi-task learning (Caruana, 1997; Ruder, 2017). We call this procedure Multi-Tasking end-Task AwaRe TrAiniNg (MT-TARTAN) (Section 3.1). MT-TARTAN is a simple yet surprisingly effective alternative to task-agnostic pre-training. In Section 5, we demonstrate that MT-TARTAN significantly improves performance and data efficiency over Gururangan et al. (2020)'s results. It also obviates the need for fickle hyper-parameter tuning through direct optimization of validation performance.
• To allow more fine-grained control of the end-task over the auxiliary tasks, in Section 3.2, we present an online meta-learning algorithm that learns adaptive multi-task weights with the aim of improving final end-task performance. Our META-learning end-Task AwaRe TrAiniNg (META-TARTAN) allows us to robustly modulate between multiple objectives and further improves performance over MT-TARTAN.
• A naive implementation of META-TARTAN based on first-order meta-learning analysis results in a sub-optimal algorithm that ignores all tasks except the end-task. We trace this problem to the use of a single model training head for computing both the end-task training loss and meta-objective (end-task validation loss). To guard against this pathological solution, we introduce a separate model head for computing the meta-objective. In Section 3.3, we justify this simple-to-implement fix and validate its practical efficacy in Section 5.
Our results suggest that TARTAN may be an attractive alternative to the continued pre-training paradigm, and further research into the place of pre-training in end-task aware settings is warranted.
2 FORMALIZING PRE-TRAINING AND CONTINUED PRE-TRAINING
Consider a dataset D = {(x_i, y_i)}_{i∈[m]} consisting of m labelled examples. We define a task as an objective function and dataset pair: T = {L(·), D}. M_θ is a model parameterized by θ. The objective function L(y_i, M_θ(x_i)) evaluates how well a model prediction M_θ(x_i) fits the true label y_i, such as cross-entropy loss in the case of classification. Note that the task dataset, D, is typically decomposed into the sets (D^train, D^val, D^test). D^train is the set of examples used for model training whilst D^test is used for final task evaluation. The validation set, D^val, is typically used for model selection but it is also frequently used in meta-learning to define the meta-objective – L^val. Given a specific end-task T*, our aim is to improve performance on T* (as measured by the model loss on D^test_{T*}) by leveraging auxiliary tasks T_aux = {T_1, ..., T_n}. Note that we do not particularly
care about the performance of any of the tasks in Taux. We are willing to sacrifice performance on Taux if it improves performance on T ∗.
From the perspective of model architecture, there are several ways to leverage T_aux. We focus on the simple but widely-used parameter sharing setting. Here, all tasks share a model body θ_body but each task T_i has its own head φ^i for prediction. We denote the head belonging to T* as φ′. Thus θ = [θ_body; (φ^1, ..., φ^n, φ′)] and θ_body is reusable across new tasks.
2.1 PRE-TRAINING
Pre-training is when a model is first trained on T_aux before performing a final fine-tuning phase on T*. The motivation behind pre-training is that learning T_aux first hopefully captures relevant information that can be utilized during training of T*. This desire has led to the proliferation of generalist pre-trained models like BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019) and GPT-3 (Brown et al., 2020) that have been trained on copious amounts of data. Generalist models have been widely successful at improving downstream task performance when used as initialization.
We can formalize the pre-training procedure as follows:
θ0 = argmin_θ ( Σ_{T_i ∈ T_aux} L_{T_i}(θ) )    (1)
In Equation 1, we seek a point θ0 that achieves minimal loss on the tasks in Taux. We hope that θ0 will be a good starting point for gradient descent on T ∗. Let g(θ0) represent the set of end-points of stochastic gradient descent on an initialization, θ0. Stochastic gradient descent from the same initialization can produce different end-points due to differences in hyper-parameters like learning rate, batch size and order, as well as regularization strength. We can write the fine-tuning phase as:
θ* = argmin_{θ ∈ g(θ0)} L_{T*}(θ)    (2)
Note that pre-training is end-task agnostic: the pre-training Equation 1 occurs entirely before training on the end-task Equation 2, and does not explicitly incorporate the end-task objective, T ∗. Since there is no awareness of the end-task during pre-training it is important to carefully choose Taux so that pre-training actually results in improved performance on T ∗ (Wang et al., 2018a). For text data, past work has found left-to-right language modeling (Peters et al., 2017) and masked language modeling (MLM) (Devlin et al., 2018) to be good choices to include in Taux.
2.2 CONTINUED PRE-TRAINING
Recent work (Beltagy et al., 2019; Gururangan et al., 2020; Lee et al., 2020) showed that downstream performance on T ∗ can be improved by further adapting generalist models via continued pre-training on a more relevant set of auxiliary tasks. This is equivalent to sequentially performing multiple steps of Equation 1, with different Taux, before finally performing Equation 2 on T ∗. Domain and Task Adaptive Pre-training Gururangan et al. (2020) present Domain Adaptive PreTraining (DAPT) and Task Adaptive Pre-Training (TAPT) as methods for continued pre-training. During DAPT, a generalist model is further pre-trained on an unsupervised objective with large amounts of data from the same domain as the end-task. TAPT also pre-trains with the same unsupervised objective as DAPT, but on the actual dataset of the end-task. Gururangan et al. (2020) find that performance can be further improved by chaining objectives, DAPT first, followed by TAPT.
Though TAPT and DAPT do not directly incorporate the end-task objective during training, it still indirectly informs both the choice of pre-training data and the order in which the pre-training tasks are trained on. Below, we explore stronger versions of this influence.
3 END-TASK AWARE TRAINING (TARTAN)
In this section, we argue for the end-task to be added directly into the training process to create explicit interactions between T ∗ and Taux.
3.1 END-TASK AWARE TRAINING VIA MULTI-TASKING (MT-TARTAN)
We propose to directly incorporate knowledge of the end-task by multi-tasking T ∗ together with Taux, before optionally fine-tuning on T ∗ exclusively. To this end, we introduce a set of task weights w = (w∗, w1, · · · , w|Taux|) satisfying w∗ + ∑ i wi = 1, to modulate between the different losses. Our new formulation is:
θ0 = argmin_θ L_total(θ, w) = argmin_θ ( w* L_{T*}(θ) + Σ_i w_i L_{T_i}(θ) )    (3)
Here, Equation 3 replaces Equation 1 and can be followed by the optional fine-tuning stage of Equation 2. Note that this formulation fixes the tasks weights w throughout the training process. We call this formulation End-task Aware Training via Multi-tasking (MT-TARTAN) because we introduce the end-task directly into the training procedure, and do so by multi-tasking it with Taux.
MT-TARTAN allows us to prioritize performance on T* in several ways. First, we can weight the end-task higher than all the other auxiliary tasks. Also, during training, we can monitor L_{T*} on the end-task validation set and early stop when it plateaus, even if the auxiliary tasks have not yet converged. This is not possible during standard pre-training because we do not train T* and so it performs at random before we actually start fine-tuning. Early stopping on T* can represent significant computational savings over end-task agnostic pre-training when the savings in data-efficiency outweigh the extra overhead of end-task aware gradient descent steps.
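To make the multi-tasking procedure concrete, the following is a minimal sketch of one MT-TARTAN update implementing Equation 3 in PyTorch-style code; the module, task and loss names are illustrative placeholders rather than an excerpt of our released implementation.

```python
import torch

def mt_tartan_step(body, heads, losses, batches, weights, optimizer):
    """One MT-TARTAN update: a fixed convex combination of the end-task and
    auxiliary task losses (Equation 3), followed by a single optimizer step."""
    optimizer.zero_grad()
    total = 0.0
    for name, w in weights.items():        # e.g. {"end_task": 0.5, "tapt_mlm": 0.5}
        x, y = batches[name]               # one minibatch per task
        features = body(x)                 # shared model body
        logits = heads[name](features)     # task-specific head
        total = total + w * losses[name](logits, y)
    total.backward()
    optimizer.step()
    return float(total)
```

In practice each task draws its own minibatch at every step, and the end-task validation loss is monitored for early stopping as described above.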
3.2 END-TASK AWARE TRAINING VIA META-LEARNING (META-TARTAN)
MT-TARTAN, DAPT and TAPT all share the same drawback: they implicitly assume that the auxiliary tasks have static importance to the end-task over the lifetime of its training, either by being end-task agnostic (DAPT and TAPT) or by having static task weights (MT-TARTAN). With MT-TARTAN, an additional drawback noted by Wang et al. (2019); Yu et al. (2020) is that multi-tasking can negatively impact task performance compared to isolated training. These shortcomings motivate the formulation of an adaptive algorithm that can mitigate the negative influence of some tasks whilst responding to the changing relevance of auxiliary tasks over the lifetime of end-task training.
As they stand, the pre-training equation pair (Equations 1, 2) and the MT-TARTAN pair (Equations 2, 3) are decoupled. The inner-level variables of the pre-training phase do not depend on the outer-level variables of the fine-tuning phase. Thus the equation pairs are typically solved sequentially. We propose to tightly couple Equations 2 and 3 by formulating jointly learning w and θ0 as a bi-level optimization problem. A bi-level formulation allows us to leverage meta-learning (Schmidhuber, 1995) techniques to learn adaptive task weights which capture variable auxiliary task importance whilst mitigating the contribution of harmful tasks. We propose a meta-learning algorithm in the mold of Model Agnostic Meta-Learning (MAML) (Finn et al., 2017) to learn task weights. As a bi-level problem, this can be formulated as:

θ*, w* = argmin_{θ ∈ g(θ0), w} L_{T*}(θ)    (4)

where

θ0 = argmin_θ L_total(θ, w) = argmin_θ ( w* L_{T*}(θ) + Σ_{T_i ∈ T_aux} w_i L_{T_i}(θ) )    (5)
We want to jointly learn w, with θ0, such that taking a gradient descent step modulated by w leads to improvement in end-task generalization. We use performance on the end-task validation set (D^val_{T*}) as a meta-objective to train w. Performance on D^val_{T*} serves as a stand-in for end-task generalization performance whilst also naturally capturing the asymmetrical importance of T*.
Our joint descent algorithm proceeds as follows. At each timestep t, we hold the task weights fixed and update θt based on ∇_θ L_total(θt, w). We then proceed to update w via gradient descent on the end-task validation loss at θ_{t+1}. For this, we derive an approximation for ∇_w L^val_{T*}(θ_{t+1}, w) below:

L^val_{T*}(θ_{t+1}(w)) = L^val_{T*}( θt − β ( w* ∇L_{T*} + Σ_i w_i ∇L_{T_i} ) )
                       ≈ L^val_{T*}(θt) − β ( w* ∇L_{T*} + Σ_i w_i ∇L_{T_i} )^T ∇L^val_{T*}(θt)

We can take the gradient of the above first-order approximation w.r.t. an individual weight w_i. This tells us how to update w_i to improve the meta-objective.

∂L^val_{T*}(θ_{t+1}(w)) / ∂w_i ≈ −β ( ∇L_{T_i} )^T ( ∇L^val_{T*}(θt) ) = −β ( ∇L_{T_i} )^T ( ∇L^val_{T*}([θ_body, φ′]_t) )    (6)

In Equation 6, we explicitly specify [θ_body, φ′]_t because computing losses on T* depends on only these parameters. L_{T_i} depends solely on [θ_body, φ^i]_t but we leave this out to avoid notation clutter.
Our analysis above is similar to that of Lin et al. (2019) with one key difference: we learn a weighting for the main task w∗ too. This ability to directly modulate T ∗ allows us to capture the fact that at certain stages in training, auxiliary tasks may have greater impact on end-task generalization than the end-task’s own training data. This choice also allows us to control for over-fitting and the influence of bad (mislabelled or noisy) training data.
3.3 INTRODUCING A SEPARATE CLASSIFICATION HEAD FOR META-LEARNING
Observe that from Equation 6, updates for w ≠ w* involve gradients computed from different model heads φ^i and φ′, whilst for w*, we are taking the dot product of gradients from the same end-task head φ′. As we will show empirically in Section 5.4, computing weight updates this way creates a strong bias towards the primary task, causing w* to rail towards 1 whilst the other weights dampen to 0, which may be sub-optimal in the long run.
Intuitively, this short-horizon (greedy) (Wu et al., 2018) behavior makes sense: the quickest way to make short-term progress (improve L^val_{T*}(θ_{t+1})) is to descend solely on T*. More formally, the greedy approach arises because we derive ∇_{w_i} L^val_{T*}(θ_{t+1}) in Equation 6 as a proxy for the gradient at θ*, the outer-loop end-point in Equation 4. Variations of this substitution are common in the meta-learning literature (Finn et al., 2017; Liu et al., 2018; Nichol et al., 2018) because it is computationally infeasible to train a model to convergence every time we wish to compute ∇_{w_i} L^val_{T*}(θ*).
To remedy the greedy solution, instead of estimating ∇_θ L_{T*} and ∇_θ L^val_{T*} from the same classification head (Equation 6), we introduce a special head φ* for computing the meta-objective. Specifically, instead of trying to compute θ*, we approximate it by fixing the body of the network θ_body and training the randomly initialized head φ* to convergence on a subset of the end-task training data. We do this every time we wish to estimate ∇_{w_i} L^val_{T*}(θ*). Introducing φ* eliminates the strong positive bias on w* and enables us to compute a better proxy for the meta-gradient at θ*:

∂L^val_{T*}(θ*(w)) / ∂w_i ≈ ( ∇_θ L_{T_i} )^T ( ∇_θ L^val_{T*}([θ_body; φ*]_t) )    (7)

Equation 7 represents a simple-to-implement alternative to Equation 6. We provide a more detailed justification for Equation 7 in Appendix A.1. In Section 5.4, we empirically validate that the transition from Equation 6 to 7 improves performance whilst mitigating pathological solutions. Our approach of creating φ* for approximating the meta-objective (down-stream validation performance) is inspired by Metz et al. (2018), who use a similar technique to construct a meta-objective for evaluating the quality of unsupervised representations.
Please see Algorithm 1 in Appendix A.3 for details about META-TARTAN.
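For concreteness, the sketch below illustrates the task-weight update implied by Equation 7 and the cosine-similarity form used in Algorithm 1. It is a simplified illustration, assuming the per-task gradients over the shared body have already been flattened into vectors; the function and variable names are for exposition only and are not a verbatim excerpt of the released code.

```python
import torch
import torch.nn.functional as F

def update_task_weights(w, task_grads, meta_grad, lr_w=0.1):
    """w: dict of scalar task weights (including the end-task weight w*).
    task_grads: dict of flattened gradients of each task loss w.r.t. the shared body.
    meta_grad: flattened gradient of the end-task validation loss computed with the
    separate meta head phi*. Each weight moves in proportion to the alignment between
    its task gradient and the meta-gradient (Equation 7 / Algorithm 1)."""
    for name, g in task_grads.items():
        align = F.cosine_similarity(g.unsqueeze(0), meta_grad.unsqueeze(0)).item()
        w[name] = w[name] + lr_w * align
    # the weights are passed through a softmax before modulating the body update
    names = list(w.keys())
    alphas = torch.softmax(torch.tensor([w[n] for n in names]), dim=0)
    return {n: a.item() for n, a in zip(names, alphas)}
```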
4 EXPERIMENTAL SETUP
Setting1 Though our algorithms and methodology can be directly applied to both continued pretraining (Section 2.2) and pre-training from scratch (Section 2.1) of generalist models, we focus on the former scenario. This is because the continued pre-training setting is more common amongst everyday practitioners as it is less computationally demanding. It thus lends itself more easily to exploration under a realistic computational budget. In Appendix A.4, we show that end-task aware training from scratch is viable by studying a simple computer vision setting. Concurrent work by Yao et al. (2021) shows that from-scratch end-task aware training for NLP problems is viable.
1Code will be released at https://github.com/ldery/TARTAN
In keeping with previous work (Devlin et al., 2018; Gururangan et al., 2020), we focus on Taux as a set of MLM tasks on varied datasets. In the case of DAPT and our end-task aware variants of it, Taux is an MLM task with data from the domain of the end-task. For TAPT, Taux is an MLM task with data from the end-task itself. DAPT, TAPT and DAPT+TAPT (chained pre-training with DAPT followed by TAPT) will serve as our baseline continued pre-training approaches. We will compare these baselines to their end-task aware variants that use MT-TARTAN and META-TARTAN.
Datasets Our experiments focus on two domains: computer science (CS) papers and biomedical (BIOMED) papers. We follow Gururangan et al. (2020) and build our CS and BIOMED domain data from the S2ORC dataset (Lo et al., 2019). We extract 1.49M full text articles to construct our CS corpus and 2.71M for our BIOMED corpus. Under both domains, our end-tasks are low-resource classification tasks. Using low-resource tasks allows us to explore a setting where pre-training can have a significant impact. Under the CS domain, we consider two tasks: ACL-ARC (Jurgens et al., 2018) and SCIERC (Luan et al., 2018). ACL-ARC is a 6-way citation intent classification task with 1688 labelled training examples. For SCIERC, the task is to classify the relations between entities in scientific articles. This task has 3219 labelled examples as training data. We choose CHEMPROT (Kringelum et al., 2016) as the classification task from the BIOMED domain. This task has 4169 labelled training examples and the goal is to classify chemical-protein interactions. More details of these datasets can be found in Table 2 of Gururangan et al. (2020). Gururangan et al. (2020) evaluate against all 3 tasks and their available code served as a basis on which we built MT-TARTAN and META-TARTAN.
Model Details We use a pre-trained RoBERTabase (Liu et al., 2019) as the shared model base and implement each task as a separate multi-layer perceptron (MLP) head on top of this pre-trained base. As in Devlin et al. (2018), we pass the [CLS] token embedding from RoBERTabase to the MLP for classification.
Training Details For DAPT and TAPT, we download the available pre-trained model bases provided by Gururangan et al. (2020). To train their corresponding classification heads, we follow the experimental setup described in Appendix B of Gururangan et al. (2020).
Performing end-task aware training introduces a few extra hyper-parameters. We fix the other hyper-parameters to those used in Gururangan et al. (2020). MT-TARTAN and META-TARTAN introduce joint training of a classification head for the end-task T*. We experiment with batch sizes of 128, 256 and 512 for training this head. We try out learning rates in the set {10^-3, 10^-4, 10^-5} and dropout rates of {0.1, 0.3}. For META-TARTAN, since we are now learning the task weights w, we test out task weight learning rates in {10^-1, 5 × 10^-2, 3 × 10^-2, 10^-2}. Note that for all MT-TARTAN experiments we use equalized task weights 1/(|T_aux| + 1). A small grid-search over a handful of weight configurations did not yield significant improvement over the uniform task weighting. We use the Adam optimizer (Kingma & Ba, 2014) for all experiments.
As mentioned in Section 3.3, we train a separate meta-classification head, φ*, to estimate the validation meta-gradients. To estimate φ*, we use batch sizes of {16, 32} samples from T*'s train set. We regularize the meta-head with l2 weight decay and set the decay constant to 0.1. We use a learning rate of 10^-3 to learn the meta-head. We stop training φ* after 10 gradient descent steps.
5 RESULTS AND DISCUSSION
In this section, we will discuss the results of comparing our models against DAPT and TAPT baselines.2 Broadly, we demonstrate the effectiveness of end-task awareness as improving both performance and data-efficiency.
2Our results are slightly different from those presented in Table 5 of Gururangan et al. (2020) in terms of absolute values but the trends observed there still hold here. We attribute these differences to (1) minor implementation differences, and (2) averaging performance over ten seeds instead of five as used in the original paper in order to more strongly establish statistical significance. We observe slightly lower performance on ACL-ARC and SCIERC tasks due to these changes and higher performance on CHEMPROT.
Domain Task RoBERTa TAPT MT-TARTAN META-TARTAN
5.1 END-TASK AWARENESS IMPROVES OVER TASK-AGNOSTIC PRE-TRAINING
Table 1 compares TAPT to its end-task aware variants. As in Gururangan et al. (2020), we observe that performing task adaptive pre-training improves upon just fine-tuning RoBERTa. However, note that introducing the end-task by multi-tasking with the TAPT MLM objective leads to a significant improvement in performance. This improvement is consistent across the 3 tasks we evaluate against. We find that both MT-TARTAN and META-TARTAN achieve similar results in this setting.
5.2 END-TASK AWARENESS IMPROVES DATA-EFFICIENCY
Gururangan et al. (2020) train DAPT on large amounts of in-domain data to achieve results competitive with TAPT. They use 7.55 billion tokens for the BIOMED domain and 8.10 billion for the CS domain. This is on average over 10^4× the size of the training data of our end-tasks of interest. The large amount of data required to train a competitive DAPT model represents a significant computational burden to the every-day practitioner. This begets the question: are such large amounts of auxiliary data necessary for achieving good downstream performance? To answer this, we train DAPT and its TARTAN version on variable amounts of data for both SCIERC and ACL-ARC tasks.
TARTAN is more data-efficient than DAPT In Figure 2, we focus on training on a small fraction of available domain data n = {10^0, 10^1} × |Train| for the DAPT auxiliary task. Full domain data is n′ ≈ 10^4 × |Train|. This relatively low auxiliary data regime represents a realistic setting that is akin to those encountered by everyday practitioners who are likely to be computationally constrained. As can be seen in Figure 2, on the ACL-ARC task, META-TARTAN matches the performance of DAPT when the sizes of the domain data and end-task data are of the same order (10^0). At this data size, META-TARTAN supersedes DAPT on the SCIERC task. When trained on 10× more auxiliary data, META-TARTAN supersedes DAPT in performance on both tasks. On the ACL-ARC task, META-TARTAN achieves 71.19±4.88, which is close to DAPT's performance of 72.49±3.28 using more than 10^3× auxiliary data. These results indicate that end-task awareness can improve data-efficiency and in this case, improvements are on the order of 1000×.
Domain Task DAPT DAPT+TAPT MT-TARTAN META-TARTAN
TARTAN is more data-efficient than DAPT+TAPT Table 2 compares DAPT and DAPT+TAPT (DAPT followed by TAPT) to *-TARTAN, which multi-tasks DAPT, TAPT and the end-task. MT-TARTAN and META-TARTAN significantly outperform DAPT and DAPT+TAPT in 2 of the tasks whilst giving higher average performance in the ACL-ARC task. We thus conclude that end-task awareness allows us to get a greater performance boost out of the same amount of data.
We explore the data efficiency of TARTAN methods even further by comparing the relatively data-poor versions of MT-TARTAN and META-TARTAN above (n = 10 × |Train|) to the DAPT and DAPT+TAPT variants trained on all the available domain data (n′ ≈ 10^4 × |Train|). We can see from Table 3 that for the CS domain, our end-task aware variants come close to (ACL-ARC) and even supersede (SCIERC) the end-task agnostic variants though trained with ≈ 1000× less data. For BIOMED domain (CHEMPROT task), increasing the amount of data drastically improves the performance of end-task agnostic variants compared to MT-TARTAN and META-TARTAN trained on much less data.
Zhang et al. (2020) show that different tasks exhibit sigmoid-like curves in terms of how much pretraining data is required to achieve good results before performance levels off. We contextualize Tables 2 and 3 within said work and posit that the CHEMPROT task intrinsically requires much more data (compared to our other tasks) before performance begins to improve appreciably.
5.3 META-TARTAN MORE EFFECTIVELY UTILIZES OUT-OF-DISTRIBUTION AUXILIARY DATA OVER MT-TARTAN
TARTAN    ACL-ARC       SCIERC        CHEMPROT
MT        69.27±0.96    81.53±0.99    80.26±3.79
META      71.19±4.88    82.08±1.19    82.31±0.75
heterogeneous domain data whose impact on the end-task performance is less clear. Notice from Table 4 that when required to rely solely on domain data for auxiliary tasking, META-TARTAN improves performance over MT-TARTAN. We attribute META-TARTAN's improvement over MT-TARTAN to its ability to more flexibly adapt to incoming data of variable utility to the end-task.
5.4 TASK WEIGHTING STRATEGIES DISCOVERED BY META-LEARNING
To illustrate the importance of the separate classification head φ∗ for computing the meta-signal for the task weights (described in Section 3.3), we run META-TARTAN experiments with ACL-ARC as the end-task and DAPT as the auxiliary task. We compare using either a separate (φ∗) or the same (φ′) classification head for calculating the meta-gradient. Figure 3 plots the task weightings learned in each setting during training. We can clearly see that using a separate head counteracts the pathological solution of down-weighting all tasks that are not T ∗ and as a result, improves performance: a delta of 1.7 F1 points in this case. The strategy discovered by META-TARTAN presents an interesting contrast to classical pre-training: whilst the initial phase of classical pre-training involves
solely the auxiliary task, early in training, META-TARTAN up-weights the auxiliary task but does not fully zero out the end-task. Later in training, we see leveling off of weights instead of railing the end-task to 1 as in classical pre-training.
Next, we plot a similar graph for using both DAPT and TAPT across our three tasks in Figure 4. From the figure, it is apparent that META-TARTAN discovers similar task-weighting strategies across different end-tasks. This suggests that the MLM objective and META-TARTAN's strategy for learning task weights are generic enough to induce similar behaviours across tasks. In general, DAPT is significantly up-weighted compared to the end-task and TAPT. Note that the TAPT + ACL-ARC task weights (Figure 4) have the same approximate trajectory as the ACL-ARC task weight in Figure 3. It seems important to assign high weight to the task data (Figure 3) but not necessarily all of it needs to go to the actual task loss (Figure 4). We hypothesize that the diversity in the domain data counteracts overfitting to the end-task data and results in DAPT being up-weighted.
6 RELATED WORK
Multi-task learning can be traced back to seminal work by Caruana (1995), Caruana (1997), and has since been the subject of a flourishing literature, recent surveys of which can be found in Ruder (2017) or Zhang & Yang (2021). In NLP, while initial work from Collobert & Weston (2008) already showed the benefits of multi-task learning, it has only recently become a central topic in the field, with the advent of multi-task benchmarks (Wang et al., 2018b; McCann et al., 2018).
Pre-training is where a machine learning model is first trained on a generic, data-rich task before being fine-tuned on an end-task. In NLP this practice dates back to the use of pre-trained word embeddings (Turian et al., 2010; Mikolov et al., 2013) and later pre-trained encoders (Kiros et al., 2015; Dai & Le, 2015). Peters et al. (2018) and Howard & Ruder (2018) heralded a renaissance of pre-training before BERT (Devlin et al., 2018) and its many offshoots (Liu et al., 2019; Yang et al., 2019; Lewis et al., 2019) cemented it as the de facto standard for modern NLP.
Meta-learning dates back to early work from Schmidhuber (1995); Thrun (1998). More relevant to our work is gradient-based meta-learning for solving bi-level optimization problems, first popularized by Finn et al. (2017) and followup work (Nichol et al., 2018; Rajeswaran et al., 2019) for few-shot learning. This method has transferred to a variety of applications such as architecture search (Liu et al., 2018) and model poisoning (Kurita et al., 2020).
7 CONCLUSION
We have advocated for a paradigm shift in the way we approach pre-training. We have motivated making pre-training more end-task aware when the end-task is known in advance. Our work introduced two novel end-task aware training algorithms: End-task Aware Training via Multitasking (MT-TARTAN) and End-task Aware Training via Meta-learning (META-TARTAN). In Section 5, we demonstrated the ability of our proposed algorithms to improve performance and dataefficiency over their end-task agnostic counterparts.
This work suggests several promising directions for future work. Instead of learning coarse task level weights, can further performance improvements be achieved via finer-grained example level weighting as in Wang et al. (2020)? Can meta-learning algorithms like META-TARTAN enable more effective utilization of previously discarded (Aroca-Ouellette & Rudzicz, 2020) pre-training auxiliary tasks like Next Sentence Prediction (NSP) (Devlin et al., 2018)? We hope this work spurs conversation around these questions and many more.
8 ACKNOWLEDGEMENTS
This work was supported in part by DSO National Laboratories, an ENS-CFM Data Science Chair, DARPA FA875017C0141, the National Science Foundation grants IIS1705121, IIS1838017, IIS2046613 and IIS-2112471, an Amazon Web Services Award, a Facebook Faculty Research Award, funding from Booz Allen Hamilton Inc., and a Block Center Grant. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of these funding agencies.
9 ETHICS STATEMENT
Our work introduces new algorithms but leverages pre-existing datasets and models. Overall, this work inherits some of the risk of original work upon which it is implemented. Algorithms for continued training such as TAPT and DAPT necessitate per-task training of unsupervised objectives which result in corresponding green-house emissions due to energy consumption (Strubell et al., 2019). However, as shown in Sections 3 and 5, our new compute-efficient algorithms greatly increase the data efficiency of these algorithms, reducing these harms as well as the various harms associated with labor for data-collection (Jo & Gebru, 2020). Also, since our work is set in the context of pre-existing datasets and models (Section 4), we recognize that any ethical issues that have been revealed in these (such as bias (Bender et al., 2021) or privacy leakage (Carlini et al., 2021)) may also propagate to models trained using our work, and mitigation strategies such as Schick et al. (2021); Liang et al. (2021) may be necessary. Finally, there is a potential risk in META-TARTAN that leveraging a validation set for defining the meta-objective could amplifying bias that exists in this data split, although this is done indirectly through task weighting and hence we believe that this risk is small.
10 REPRODUCIBILITY STATEMENT
We pledge to release the source-code for this project to improve the ease of reproducibility of our results by the NLP and machine learning communities. In Section 4, we have specified details about datasets, training regimes and models to allow anyone who wishes to reproduce our results without our original source code to do so. Our discussion of the algorithmic and evaluation details can be found in Appendices A.1, A.3 and A.2. As we noted in 4, we build off of Gururangan et al. (2020)’s implementations which can be found at https://github.com/allenai/dont-stop-pretraining.
A APPENDIX
A.1 JUSTIFYING THE INTRODUCTION OF A META-HEAD
Proof. To arrive at Equation 7 we start with the closed-form solution for ∇_{w_i} L^val_{T*}(θ*) and then introduce approximations in order to produce Equation 7. First, note that:

∂L^val_{T*}(θ*(w)) / ∂w_i = ( ∇_θ L^val_{T*}(θ*(w)) )^T ( ∇_{w_i} θ*(w) )    [Chain rule]    (8)

To get ∇_{w_i} θ*(w) we invoke the Cauchy Implicit Function Theorem (IFT) as with Lorraine et al. (2020); Navon et al. (2020); Liao et al. (2018):

∇_{w_i} θ*(w) = [ ∇²_θ L_total(θ*(w)) ]^{-1} [ ∇_{w_i} ∇_θ L_total(θ*(w)) ]    [IFT]
             = [ ∇²_θ L_total(θ*(w)) ]^{-1} [ ∇_{w_i} ∇_θ ( w* L_{T*}(θ*(w)) + Σ_{T_i ∈ T_aux} w_i L_{T_i}(θ*(w)) ) ]
             = [ ∇²_θ L_total(θ*(w)) ]^{-1} [ ∇_θ L_{T_i}(θ*(w)) ]    [Only terms with w_i survive]

Bringing it all together, we get:

∂L^val_{T*}(θ*(w)) / ∂w_i = ( ∇_θ L^val_{T*}(θ*(w)) )^T ( [ ∇²_θ L_total(θ*(w)) ]^{-1} [ ∇_θ L_{T_i}(θ*(w)) ] )    (9)

Computing ∇_{w_i} L^val_{T*}(θ*) from Equation 9 is computationally unwieldy since we would not only have to optimize θ to convergence for every step of w_i but we would also have to invert the Hessian of a typically large model. Our middle ground between Equations 9 and 6 (Equation 7) makes use of the following approximations:

• We approximate the inverse Hessian with the identity. This approximation is not new; we follow previous work like Lorraine et al. (2020) (Table 3) who explore the use of this approximation because of computational efficiency.

[ ∇²_θ L_total(θ*(w)) ]^{-1} = lim_{i→∞} Σ_{j=0}^{i} ( I − ∇²_θ L_total(θ*(w)) )^j ≈ I

We are assuming the contributions of terms with j > 0 are negligible.

• Instead of training the whole network to convergence, at each time-step, we fix the body of the network and train a special head φ* to convergence on a small batch of end-task training data. We then use [θ_body; φ*] as a proxy for θ*. This is a computationally feasible workaround to training all of θ to convergence to get a single-step gradient estimate. Especially in the continued pre-training setting where a pre-trained generalist model like BERT is used as θ_body, this approximation is reasonable. To our knowledge, we are the first to suggest this approximation.

∇_θ L^val_{T*}(θ*) → ∇_θ L^val_{T*}([θ_body; φ*])

• Above, we have approximated θ* = [θ_body; φ*]. Since φ* is only used to evaluate end-task (T*) validation data, it means θ remains unchanged with respect to the training data for task T_i. Thus ∇_θ L_{T_i}([θ_body; (φ*, ..., φ^i)]) = ∇_θ L_{T_i}([θ_body; φ^i]) = ∇_θ L_{T_i}(θ)

Bringing it all together, we get Equation 7, repeated here:

∂L^val_{T*}(θ*(w)) / ∂w_i ≈ ( ∇_θ L_{T_i} )^T ( ∇_θ L^val_{T*}([θ_body; φ*]_t) )
A.2 CALCULATING P-VALUES FROM PERMUTATION TEST
We used the permutation test (Good, 2005; Dror et al., 2018) to test for statistical significance. For each test, we generate 10000 permutations to calculate the significance level. This is sufficient to converge to a stable p-value without being a computational burden. A minimal sketch of the procedure is given after the list below. We chose this over the common Student's t-test because:
1. We have only 10 runs per algorithm and permutation tests are more robust at low sample sizes.
2. The permutation test is assumption-free. Student's t-tests assume that the samples are normally distributed.
3. The permutation test is robust to variance in the samples, so even though error bars can overlap, we can still establish significant differences between the samples. Variance in our results is expected due to the small dataset sizes of the end-tasks.
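The sketch below gives one way to implement the test described above (a two-sided permutation test on the difference of mean scores across seeds); the choice of test statistic is an assumption for illustration and may differ from the exact implementation.

```python
import numpy as np

def permutation_test(scores_a, scores_b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of mean scores between
    two algorithms, each evaluated over a small number of random seeds."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)           # shuffle group assignments
        diff = abs(perm[:len(a)].mean() - perm[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)            # add-one smoothing for a valid p-value
```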
A.3 ALGORITHM FOR META-TARTAN
Algorithm 1: End-task Aware Training via Meta-learning (META-TARTAN)
Require: T*, T_aux: End-task, set of auxiliary pre-training tasks
Require: η, β1, β2: Step size hyper-parameters
Initialize:
    Pre-trained RoBERTa as shared network body, θ_body
    Task weightings: w*, w_i = 1/(|T_aux| + 1)
Randomly initialize:
    end-task head as φ′
    meta head for end-task as φ*
    task head φ^i for each T_i ∈ T_aux
while not done do
    B*_tr ∼ T*_train                                         // Sample a batch from the end-task
    g*_θ, g*_φ ← [∇_θ, ∇_{φ′}]( L_{T*}(θ, φ′, B*_tr) )         // Get end-task grads
    g^i_θ, g^i_φ ← [∇_θ, ∇_{φ^i}]( L_{T_i}(θ, φ^i, B_i) )      // Get task grads. ∀i ∈ [n], B_i ∼ T_i
    // Learn a new meta head
    φ* ← estimate_meta_head(B*_tr, β2, θ, φ*)                // B*_tr ∼ T*_train
    g*_meta ← ∇_θ L_{T*}(θ, φ*, B*_val)                       // B*_val ∼ T*_val
    // Update task weightings
    w* ← w* + η cos(g*_meta, g*_θ)
    w_i ← w_i + η cos(g*_meta, g^i_θ)
    // Update task parameters
    α*, α_1, ..., α_{|T_aux|} = softmax(w*, w_1, ..., w_{|T_aux|})
    Update θ_body ← θ_body − β1 ( α* g*_θ + Σ_i α_i g^i_θ )
    Update ( φ^i ← φ^i − β2 g^i_φ ), ( φ′ ← φ′ − β2 g*_φ )
end
Result: θ, φ′
A.4 VISION EXPERIMENTS
We validate that the gains from end-task aware training are not siloed to only learning from text. We conduct an experiment comparing end-task aware training on images to its end-task agnostic variant. We use the Cifar100 dataset (Krizhevsky et al., 2009). We use the Medium-Sized Mammals superclass (one of the 20 coarse labels) as our main task whilst the other 19 super classes are used as auxiliary data. Our primary task is thus a 5-way classification task over images of different types of medium-sized mammals whilst the remaining 95 classes are grouped into a single auxiliary task.
As can be seen from Table 5, being end-task aware improves over task-agnostic pre-training. We find that, again, when our auxiliary task consists of solely domain data and no task data, META-TARTAN performs better than MT-TARTAN (as measured by averaged performance).
A.5 FULL TAPT TABLE WITH SIGNIFICANCE LEVELS
We repeat Table 1 and provide details about levels of statistical significance.
Task TAPT MT-TARTAN p−values META-TARTAN p−values
Task TAPT META-TARTAN p−values
A.6 FULL DAPT/DAPT+TAPT TABLE
We repeat Table 3 and provide details about levels of statistical significance.
A.7 FAQ
1. What settings are TARTAN algorithms designed for? TARTAN algorithms specialize auxiliary objectives to a particular end-task. This comes at a risk of losing the generic representations afforded by generalist pre-trained models. Thus if a practitioner has a sufficiently important end-task where obtaining improved end-task performance is paramount over generic representations, then TARTAN is a viable option.
2. When do we get computational savings from META-TARTAN? MT-TARTAN does not add any extra overhead compared to pre-train then fine-tune approaches. META-TARTAN, however, adds extra overhead per gradient descent step due to computing meta-gradients. However, as shown in Section 5, we are able to get several orders of magnitude improvement in data-efficiency from applying the method. In general, for the tasks we experimented with, we find that the savings in data-efficiency outweighed the extra per-timestep meta-learning overhead.
3. When should we use META-TARTAN over MT-TARTAN? In +TAPT settings (Tables 1, 3), we observe that META-TARTAN and MT-TARTAN perform similarly. We attribute this to the strength of the TAPT-MLM objective. We were pleasantly surprised that the two methods performed comparably in this setting but in hindsight, we appreciate the insight that went into designing TAPT-MLM as an objective, which makes it a strong baseline. In other settings with less carefully designed auxiliary objectives and data (which can potentially be detrimental to the end-task) we expect META-TARTAN to perform better. Section 5.3 provides evidence of this. | 1. What is the focus of the paper regarding pre-training and multi-task setup?
2. What are the strengths of the proposed approach, particularly in terms of experiment results?
3. Do you have any concerns about the framing of the paper, especially regarding the training method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper, such as comparing the proposed method with a model that is not pre-trained? | Summary Of The Paper
Review | Summary Of The Paper
The paper makes the argument that generic pre-training (on auxiliary tasks) is inferior to task-specific pre-training. The authors argue that the final task should be learned together with the auxiliary tasks in a multi-task setup, which they call MT-TARTAN. They also propose a meta-learning algorithm META-TARTAN that uses meta-learning to mitigate the potential impact of updates from auxiliary tasks that detract from the main task.
The authors back this up through experiments. They consider three tasks on CS and Biomed papers, where they train 1) the main task on top of a RoBERTa checkpoint 2) the main task on top of a pre-trained model from Gururangan et al. (2020) and 3) the main task mixed-in with auxiliary tasks from (2) on top of a RoBERTa checkpoint. They observe that their approaches (3) outperform (1) and (2), both in terms of model accuracy and data efficiency. In a setting where the auxiliary tasks are potentially more noisy, they also observe an advantage of META-TARTAN over MT-TARTAN.
Overall their work heavily references Gururangan et al. (2020), who show that adapting a generic model to the task domain through more task-specific pre-training can improve model performance. They make it directly comparable by using their models and a subset of tasks considered in that paper.
Review
The paper is clearly written and well structured.
I believe the question that the authors actually address, whether auxiliary tasks should be used separately or in conjunction with the main task, is important, and their results should be of interest to the community.
However, I think the general framing of their paper in abstract / introduction is misleading. At no point do they train a model from scratch (i.e. without pre-training) with their proposed methods. They do justify this with the high cost of pre-training and the convenient availability of pre-trained models, which ironically would be my main criticisms of actually foregoing generic pre-training. So although they raise the question whether pre-training is necessary, they then don’t actually compare against a model that is not pre-trained. Rather, they show that after pre-training it might not be necessary to further pre-train on large amounts of data just for domain adaptation.
I think the paper would be much stronger if they did not defer their main question (“Should we be pre-training?”) to future work, but rather tested their method with the typical MLM auxiliary task on a newly initialized Transformer model. |
ICLR | Title
Epitomic Variational Autoencoders
Abstract
In this paper, we propose epitomic variational autoencoder (eVAE), a probabilistic generative model of high dimensional data. eVAE is composed of a number of sparse variational autoencoders called ‘epitome’ such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. We show that the proposed model greatly overcomes the common problem in variational autoencoders (VAE) of model over-pruning. We substantiate that eVAE is efficient in using its model capacity and generalizes better than VAE, by presenting qualitative and quantitative results on MNIST and TFD datasets.
1 INTRODUCTION
Unsupervised learning holds the promise of learning the inherent structure in data so as to enable many future tasks including generation, prediction and visualization. Generative modeling is an approach to unsupervised learning wherein an explicit stochastic generative model of data is defined, such that independent draws from this model are likely to produce the original data distribution, while the learned latent structure itself is useful in prediction, classification and visualization tasks.
The recently proposed variational autoencoder (VAE) (Kingma & Welling, 2014) is an example of one such generative model. VAE pairs a top down generative model with a bottom up recognition network for amortized probabilistic inference. Both networks are jointly trained to maximize a variational lower bound on the data likelihood. A number of recent works use VAE as a modeling framework, including iterative conditional generation of images (Gregor et al., 2015) and conditional future frame prediction (Xue et al., 2016).
A commonly known problem with the VAE lower bound is that it is known to self-prune or under-utilize the model's capacity (Mackay, 2001). This can lead to poor generalization. A common approach to alleviate this problem is to resort to optimization schedules and regularization techniques (Bowman et al., 2015; Kaae Sonderby et al., 2016) that trade off two competing terms, latent cost and data reconstruction, in the bound. Fig. 1 provides a quick insight into this problem of over-pruning and how commonly used regularization techniques may not be sufficient. Detailed discussion is provided in § 2.1. In this paper, we take a model-based approach to directly address this problem. We present an extension of variational autoencoders called epitomic variational autoencoder (Epitomic VAE, or eVAE, for short) that automatically learns to utilize its model capacity more effectively, leading to better generalization. Consider the task of learning a D-dimensional representation for the examples in a given dataset. The motivation for our model stems from the hypothesis that a single example in the dataset can be sufficiently embedded in a smaller K-dimensional (K ≪ D) subspace of D. However, different data points may need different subspaces, hence the need for D. Sparse coding methods also exploit a similar hypothesis. Epitomic VAE exploits sparsity using an additional categorical latent variable in the encoder-decoder architecture of the VAE. Each value of the variable activates only a contiguous subset of latent stochastic variables to generate an observation. This enables learning multiple shared subspaces such that each subspace specializes, and also increases the use of model capacity (Fig. 4), enabling better representation. The choice of the name Epitomic VAE comes from the fact that multiple miniature models with shared parameters are trained simultaneously.
∗Work done during an internship at Facebook AI Research.
The rest of the paper is organized as follows. We first describe variational autoencoders and mathematically show the model pruning effect in § 2. We then present our epitomic VAE model in § 3 that overcomes these shortcomings. Experiments showing qualitative and quantitative results are presented in § 4. We finally provide more general context of our work in the related work in § 5, and conclude with discussions.
2 VARIATIONAL AUTOENCODERS
The generative model (decoder) of a VAE consists of first generating a D-dimensional stochastic variable z drawn from a standard multivariate Gaussian
p(z) = N(z; 0; I)    (1)

and then generating the N-dimensional observation x from a parametric family of distributions such as a Gaussian

pθ(x|z) = N(x; f1(z); exp(f2(z)))    (2)

where f1 and f2 define non-linear deterministic transformations of z modeled using a neural network. The parameters θ of the model are the weights and biases of the neural network that encodes the functions f1 and f2.
Given a dataset X of T i.i.d samples, the model is learned such that it maximizes the likelihood of the parameters to have generated the data, p(X|θ). This maximization requires marginalizing the unobserved z. However, computing p(z|x) is intractable due to dependencies induced between the zi when conditioned on x.
Variational autoencoders, as the name suggests, use variational inference to approximate the exact posterior with a surrogate parameterized distribution. However, instead of having separate parameters for the posterior distribution of each observation, VAE amortizes the cost by learning a neural network with parameters φ that outputs the posterior distribution of the form qφ(z|x) = ∏_i q(z_i|x). This results in the lower bound given by

log pθ(X) = Σ_{t=1}^{T} log ∫_z pθ(x^(t), z)    (3)
          ≥ Σ_{t=1}^{T} E_{qφ(z|x^(t))}[log p(x^(t)|z)] − KL( qφ(z|x^(t)) ‖ p(z) )    (4)
VAE is trained with standard backpropagation using minibatch gradient descent to minimize the negative of the lower bound

C_vae = − Σ_{t=1}^{T} E_{qφ(z|x^(t))}[log p(x^(t)|z)] + Σ_{t=1}^{T} Σ_{i=1}^{D} KL( qφ(z_i|x^(t)) ‖ p(z_i) )    (5)
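As an illustration, the negative bound in Equation 5 (and the λ-weighted variant discussed in § 2.1) can be written in a few lines of PyTorch-style code; this is a minimal sketch assuming a Gaussian posterior with a standard normal prior and a simple reconstruction term, not the exact likelihood used in our experiments.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar, lam=1.0):
    """Negative ELBO for one minibatch (Equation 5): a reconstruction term plus
    the per-dimension KL between q(z|x) = N(mu, exp(logvar)) and the standard
    normal prior, optionally scaled by lambda as in Equation 6."""
    recon = F.mse_loss(x_recon, x, reduction="sum")   # stands in for -E_q[log p(x|z)]
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + lam * kl
```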
2.1 AUTOMATIC MODEL OVER-PRUNING IN VAE
Cvae introduces a trade-off between data reconstruction (first term) and satisfying the independence assumption of p(z) (second term, KL).
Of particular interest is the KL term. Since the KL term is the sum of independent contributions from each dimension d of D, it provides undue freedom for the model in how it minimizes this term. In particular, the model needs to only ensure that the overall KL term is minimized on average, and not per component. The easiest way for the model to do this is to have a large number of components that satisfy the KL term effectively, by turning off the units so that the posterior for those units becomes the same as the prior1. This effect is quite pronounced in the early iterations of training: the model for log p(x|z) is quite impoverished and hence the easiest way to improve the bound is by turning off the KL terms. However, once the units have become inactive, it is almost impossible for them to resurrect, and hence the full capacity of the model is not utilized.
1Since log variance is modeled using the neural network, turning it off will lead to a variance of 1.
A quantity that is useful in understanding this effect is the activity level of a unit. Following Burda et al. (2015), we define a unit to be used, or “active”, if A_u = Cov_x(E_{u∼q(u|x)}[u]) > 0.02.
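The activity statistic can be computed directly from the posterior means collected over a held-out set, as in the following sketch (the variable names are illustrative and assume the encoder's mean outputs have been stacked into a matrix).

```python
import numpy as np

def active_units(posterior_means, threshold=0.02):
    """posterior_means: array of shape (num_examples, D) holding E_{q(u|x)}[u] per example.
    A unit is 'active' if the variance of its posterior mean across the data
    exceeds the threshold (Burda et al., 2015)."""
    activity = posterior_means.var(axis=0)   # Cov_x(E_{u~q(u|x)}[u]) for each dimension
    return activity > threshold, activity
```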
A commonly used approach to overcome this problem is to use a trade-off between the two terms using parameter λ so that the cost is
C = − E_{qφ(z|x)}[log p(x|z)] + λ Σ_{i=1}^{D} KL( qφ(z_i|x) ‖ p(z_i) )    (6)
Fig. 1 shows the effect of λ on unit activity and generation, with λ = 1 being the correct objective to optimize. While tuning down λ increases the number of active units, samples generated from the model are still poor. Fig. 2 shows generation using all units, active units only, and dead units only, for λ = 1. The model spends its capacity in ensuring that reconstruction of the training set is optimized (reconstruction visualizations are shown in § 8.1), at the cost of generalization. This has led to more sophisticated schemes such as using an annealed optimization schedule for λ (Bowman et al., 2015; Kaae Sonderby et al., 2016) or enforcing minimum KL contribution from subsets of the latent units (Kingma et al., 2016).
In this paper, we present a model-based approach called “epitomic variational autoencoder” to address the problem of over-pruning.
3 MODEL
We propose epitomic variational autoencoder (eVAE) to overcome the shortcomings of VAE by enabling more efficient use of model capacity to gain better generalization. We base this on the observation that while we may need a D-dimensional representation to accurately represent every example in a dataset, each individual example can be represented with a smaller K-dimensional subspace. As an example, consider MNIST with its variability in terms of digits, strokes and thickness of ink, to name a few. While the overall D is large, it is likely that only a few K dimensions of D are needed to capture the variability in strokes of some digits (see Fig. 3).
Epitomic VAE can be viewed as a variational autoencoder with latent stochastic dimension D that is composed of a number of sparse variational autoencoders called epitomes, such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. In this paper, we assume simple structured sparsity for each epitome: in particular, only K contiguous dimensions of D are active2.
The generative process can be described as follows: A D-dimensional stochastic variable z is drawn from a standard multivariate Gaussian p(z) = N (z; 0; I). In tandem, an epitome is implicitly chosen through an epitome selector variable y, which has a uniform prior over possible epitomes. The N -dimensional observation x is then drawn from a Gaussian distribution:
p_\theta(x|y, z) = \mathcal{N}\big( x;\, f_1(m_y \odot z),\, \exp(f_2(m_y \odot z)) \big)   (7)
m_y enforces the epitome constraint: it is also a D-dimensional vector that is zero everywhere except in the active dimensions of the epitome, and \odot is element-wise multiplication between the two operands. Thus, m_y masks the dimensions of z other than those dictated by the choice of y. Fig. 3 illustrates this for an 8-d z with epitome size K = 2, so that there are four possible epitomes (the model also allows for overlapping epitomes, but this is not shown for illustration purposes). Epitome structure is defined using size K and stride s, where s = 1 corresponds to full overlap in D dimensions3. Our model generalizes the VAE and collapses to a VAE when D = K = s.
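The masks m_y for a given epitome size K and stride s can be enumerated directly; the helper below is our own sketch of that construction (non-overlapping epitomes correspond to s = K, and D = 8, K = 2, s = 2 reproduces the four epitomes of the Fig. 3 example).

import numpy as np

def epitome_masks(D, K, s):
    # One binary mask per epitome: K contiguous active dimensions, shifted by stride s.
    masks = []
    for start in range(0, D - K + 1, s):
        m = np.zeros(D)
        m[start:start + K] = 1.0
        masks.append(m)
    return np.stack(masks)  # shape: (number of epitomes, D)

print(epitome_masks(8, 2, 2))  # four non-overlapping epitomes of size 2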
f_1(·) and f_2(·) define non-linear deterministic transformations of m_y \odot z, modeled using neural networks. Note that the model does not snip off the K dimensions corresponding to an epitome, but instead deactivates the D − K dimensions that are not part of the chosen epitome. While the same deterministic functions f_1 and f_2 are used for any choice of epitome, the functions can still specialize due to the
2The model also allows for incorporating other forms of structured sparsity. 3The strided epitome structure allows for learning O(D) specialized subspaces, that when sampled during generation can each produce good samples. In contrast, if only a simple sparsity prior is introduced over arbitrary subsets (e.g. with Bernoulli latent units to specify if a unit is active for a particular example), it can lead to poor generation results, which we confirmed empirically but do not report. The reason for this is as follows: due to an exponential number of potential combinations of latent units, sampling a subset from the prior during generation cannot be straightforwardly guaranteed to be a good configuration for a subconcept in the data, and often leads to uninterpretable samples.
sparsity of their inputs. Neighboring epitomes will have more overlap than non-overlapping ones, which manifests itself in the representation space; an intrinsic ordering in the variability is learned.
3.1 OVERCOMING OVER-PRUNING
Following Kingma & Welling (2014), we use a recognition network q(z, y|x) for approximate posterior inference, with the functional form
q(z, y|x) = q(y|x)\, q(z|y, x)   (8)
= q(y|x)\, \mathcal{N}\big( z;\, m_y \odot \mu,\, \exp(m_y \odot \phi) \big)   (9)
where µ = h1(x) and φ = h2(x) are neural networks that map x to D dimensional space.
We use a similar masking operation to deactivate units, as decided by the epitome y. Unlike the generative model (eq. 7), the masking operation defined by y operates directly on outputs of the recognition network that characterizes the parameters of q(z|y,x). As in VAE, we can derive the lower bound on the log probability of a dataset, and hence the cost function (negative bound) is
C_{\mathrm{evae}} = -\sum_{t=1}^{T} \mathbb{E}_{q(z,y|x^{(t)})}\big[\log p(x^{(t)}|y, z)\big]
+ \sum_{t=1}^{T} \mathrm{KL}\big[ q_\phi(y|x^{(t)}) \,\|\, p_\theta(y) \big] + \sum_{t=1}^{T} \sum_{y} q_\phi(y|x^{(t)})\, \mathrm{KL}\big[ q_\phi(z|y, x^{(t)}) \,\|\, p_\theta(z) \big]   (10)
The epitomic VAE departs from the VAE in how the contribution from the KL term is constrained. Let us consider the third term in eq. 10 and substitute in eq. 9:
\sum_{t=1}^{T} \sum_{y} q_\phi(y|x^{(t)})\, \mathrm{KL}\big[ q_\phi(z|y, x^{(t)}) \,\|\, p_\theta(z) \big]   (11)
= \sum_{t=1}^{T} \sum_{y} q_\phi(y|x^{(t)})\, \mathrm{KL}\big[ \mathcal{N}\big( z;\, m_y \odot \mu^{(t)},\, \exp(m_y \odot \phi^{(t)}) \big) \,\|\, \mathcal{N}(z; 0, I) \big]   (12)
= \sum_{t=1}^{T} \sum_{y} q_\phi(y|x^{(t)}) \sum_{d=1}^{D} \mathbb{1}[m_{d,y} = 1]\, \mathrm{KL}\big[ \mathcal{N}\big( z_d;\, \mu_d^{(t)},\, \exp(\phi_d^{(t)}) \big) \,\|\, \mathcal{N}(0, 1) \big]   (13)
where \mathbb{1}[\cdot] is an indicator that evaluates to 1 if and only if its operand is true.
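Concretely, the masked KL of eq. (13) reduces to per-dimension Gaussian KL terms gated by m_y; the following sketch (ours, with illustrative names) mirrors that computation for one example and one fixed epitome.

import numpy as np

def masked_kl(mu, logvar, mask):
    # Per-dimension KL( N(mu_d, exp(logvar_d)) || N(0, 1) ), as in eq. (13).
    kl_per_dim = 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    # Only the K dimensions selected by the epitome mask m_y contribute to C_evae.
    return np.sum(mask * kl_per_dim)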
For a training example x(t) and for a fixed y (and hence the corresponding epitome), the number of KL terms that will contribute to the bound is exactly K. The dimensions of z that are not part of the corresponding epitome will have zero KL because their posterior parameters are masked to have unit Gaussian, the same as the prior. By design, this ensures that only the K dimensions that explain x(t) contribute to Cevae.
This is quite in contrast to how VAE optimizes Cvae (§. 2.1). For Cvae to have a small contribution from the KL term of a particular zd, it has to infer that unit to have zero mean and unit variance for many examples in the training set. In practice, this results in VAE completely deactivating units, and leading to many dead units. EpitomicVAE chooses the epitome based on x(t) and ensures that the dimensions that are not useful in explaining x(t) are ignored in Cevae. This means that the unit is still active, but by design, only a fraction of examples in the training set contributes a possible non-zero value to zd’s KL term in Cevae. This added flexibility gives the model the freedom to use more total units without deactivating them, while optimizing the bound. With these characteristics, during training, the data points will naturally group themselves to different epitomes, leading to a more balanced use of z.
In Fig. 4 we compare the activity levels of VAE, dropout VAE and our model. We see that compared with VAE, our model is able to better use the model capacity. In the same figure, we also compare with adding dropout to the latent variable z of the VAE (Dropout VAE). While this increases the number of active units, it generalizes poorly as it uses the dropout layers to merely replicate representation, in contrast to eVAE. See Fig. 5 along with the explanation in § 4.1 where we compare generation results for all three models.
3.2 TRAINING
The generative model and the recognition network are trained simultaneously, by minimizing Cevae in eq. 10.
For the stochastic continuous variable z, we use the reparameterization trick as in VAE. The trick involves reparametrizing the recognition distribution in terms of auxiliary variables with fixed distributions. This allows efficient sampling from the posterior distribution as they are deterministic functions of the inputs and auxiliary variables.
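A minimal sketch of this reparameterization (standard practice rather than anything specific to eVAE): the sample is written as a deterministic function of the posterior parameters and auxiliary Gaussian noise, so gradients can flow through it.

import torch

def reparameterize(mu, logvar):
    # z = mu + sigma * eps, with eps ~ N(0, I) and sigma = exp(0.5 * logvar).
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps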
For the discrete variable y, we cannot use the reparameterization trick. We therefore approximate q(y|x) by a point estimate y∗ so that q(y|x) = δ(y = y∗), where δ evaluates to 1 only if y = y∗ and the best y∗ = argmin Cevae. We also explored modeling q(y|x) = Cat(h(x)) as a discrete distribution with h being a neural network. In this case, the backward pass requires either using REINFORCE or passing through gradients for the categorical sampler. In our experiments, we found that these approaches did not work well, especially when the number of possible values of y becomes large. We leave this as future work to explore.
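In practice, the point estimate y* can be obtained by scoring each epitome for each example and keeping the argmin; the sketch below assumes a user-supplied function per_example_cost(x, mask) that evaluates the per-example eVAE cost under a fixed epitome, so both the name and the interface are placeholders.

import numpy as np

def assign_epitomes(batch_x, masks, per_example_cost):
    # Evaluate C_evae for every (example, epitome) pair and keep the best epitome per example.
    costs = np.array([[per_example_cost(x, m) for m in masks] for x in batch_x])
    return np.argmin(costs, axis=1)  # index of y* for each example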
The recognition network first computes µ and φ. It is then combined with the optimal y∗ for each example, to arrive at the final posterior. The model is trained using a simple algorithm outlined in Algo. 1. Backpropagation with minibatch updates is used, with each minibatch constructed to be balanced with respect to epitome assignment.
Algorithm 1 Learning Epitomic VAE
1: θ, φ ← Initialize parameters
2: for until convergence of parameters (θ, φ) do
3:     Assign each x to its best y* = argmin C_evae
4:     Randomize and then partition data into minibatches, with each minibatch having a proportionate number of examples ∀ y
5:     for k ∈ numbatches do
6:         Update model parameters using the kth minibatch consisting of x, y pairs
7:     end for
8: end for
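Step 4 of Algorithm 1 can be realized by grouping examples according to their assigned epitome and interleaving the groups so that each minibatch contains a proportionate number of examples per epitome; the following is a rough sketch of one such construction, under our own assumptions about the data layout.

import numpy as np

def balanced_minibatches(assignments, num_epitomes, batch_size, rng):
    # Group example indices by assigned epitome and shuffle within each group.
    groups = [rng.permutation(np.where(assignments == y)[0]) for y in range(num_epitomes)]
    per_group = max(1, batch_size // num_epitomes)
    longest = max(len(g) for g in groups)
    batches = []
    for start in range(0, longest, per_group):
        parts = [g[start:start + per_group] for g in groups if len(g) > start]
        if parts:
            batches.append(np.concatenate(parts))
    return batches

# Example usage with hypothetical assignments y_star:
# batches = balanced_minibatches(y_star, num_epitomes=4, batch_size=100, rng=np.random.default_rng(0))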
4 EXPERIMENTS
We present experimental results on two datasets, MNIST (LeCun et al., 1998) and Toronto Faces Database (TFD) (Susskind et al., 2010). We show generation results that illustrate eVAE’s ability to better utilize model capacity for modeling data variability, and then evaluate the effect of epitome choice and model complexity. Finally we present quantitative comparison with other models and qualitative samples from eVAE. We emphasize that in all experiments, we keep the weight of the KL term λ = 1 to evaluate performance under optimizing the true derived lower bound, without introducing an additional hyperparameter to tune.
We use standard splits for both MNIST and TFD. In our experiments, the encoder and decoder are fully-connected networks, and we show results for different depths and numbers of units per layer. ReLU nonlinearities are used, and models are trained using the Adam update rule (Kingma & Ba, 2014) for 200 epochs (MNIST) and 250 epochs (TFD), with base learning rate 0.001.
4.1 OVERCOMING OVER-PRUNING.
We first qualitatively illustrate the ability of eVAE to overcome over-pruning and utilize latent capacity to model greater variability in data. Fig. 5 compares generation results for VAE, Dropout VAE, and eVAE for different dimensionsD of latent variable z. WithD = 2, VAE generates realistic digits but suffers from lack of diversity. When D is increased to 5, the generation exhibits some greater variability but also begins to degrade in quality. As D is further increased to 10 and 20, the degradation continues. As explained in Sec. 2.1, this is due to VAE’s propensity to use only a portion of its latent units for modeling the training data and the rest to minimize the KL term. The under-utilization of model capacity means that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model the space of possible generations. The effect of this is good reconstruction (examples are shown in Fig. 9) but poor generation samples.
Adding dropout to the latent variable z of the VAE (row 2 of Fig. 5) encourages increased usage of model capacity, as shown in Fig. 4 and the discussion in Sec. 2. However, due to the stochastic nature of dropout, the model is forced to use the additional capacity to encode redundancy in the representation. It therefore does not achieve the desired effect of encoding additional data variability, and furthermore leads to blurred samples due to the redundant encoding. Epitomic VAE addresses the crux of the problem by learning multiple specialized subspaces. Since the effective dimension of any example is still small, eVAE is able to model each subspace well, while encoding variability through multiple possibly shared subspaces. This enables the model to overcome over-pruning from which VAE suffered. Fig. 5 shows that as the dimension D of z is increased
while maintaining epitomes of size K = 2, eVAE is able to model greater variability in the data. Highlighted digits in the 20-d eVAE show multiple styles such as crossed versus un-crossed 7, and pointed, round, thick, and thin 4s. Additional visualization of the variability in the learned 2-d manifolds are shown in Fig. 3. In contrast, the 2-d VAE generates similar-looking digits, and is unable to increase variability and maintain sample quality as the latent dimension is increased.
4.2 CHOICE OF EPITOME SIZE
We next investigate how the choice of epitome size, K, affects generation performance. We evaluate the generative models quantitatively through their samples by measuring the log-density with a Parzen window estimator Rifai et al. (2012). Fig. 6 shows the Parzen log-density for different choices of epitome size on MNIST, with encoder and decoder consisting of a single deterministic layer of 500 units. Epitomes are nonoverlapping, and the results are grouped by total dimension D of the latent variable z. For comparison, we also show the log-density for VAE models with the same dimension D, and for mixture VAE (mVAE), an ablative version of eVAE where parameters are not shared. mVAE can also be seen as a mixture of independent VAEs trained in the same manner as eVAE. The number of deterministic units in each mVAE component is computed so that the total number of parameters is comparable to eVAE.
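For reference, a generic Parzen window log-density estimate over generated samples can be computed as below; this is our own sketch with an isotropic Gaussian kernel whose width sigma would be chosen on a validation set, and it is not claimed to match the exact protocol of Rifai et al. (2012).

import numpy as np
from scipy.special import logsumexp

def parzen_log_density(test_x, samples, sigma):
    # Mean log-density of test points under a Gaussian Parzen window fit to model samples.
    n, d = samples.shape
    sq_dists = ((test_x[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    log_kernel = -sq_dists / (2.0 * sigma ** 2)
    log_norm = np.log(n) + 0.5 * d * np.log(2.0 * np.pi * sigma ** 2)
    return float(np.mean(logsumexp(log_kernel, axis=1) - log_norm))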
As we increase D, the performance of VAE drops significantly due to over-pruning. In fact, the numbers of active units for VAE are 8, 22 and 24, respectively, for D values of 8, 24 and 48. In contrast, eVAE performance increases as we increase D, with an epitome size K that is significantly smaller than D. Table 1 provides more comparisons. This confirms the advantage of using eVAE to avoid over-pruning and effectively capture the data distribution.
eVAE also performs comparably or better than mVAE at all epitome sizes. Intuitively, the advantage of parameter sharing in eVAE is that each epitome can also benefit from general features learned across the training set.
4.3 INCREASING COMPLEXITY OF ENCODER AND DECODER
Here, we would like to understand the effect of the encoder and decoder architectures on over-pruning and on generative performance. We control model complexity through the number of layers L of deterministic hidden units, and the number of hidden units H in each deterministic layer.
Table 1 shows the Parzen log-densities of VAE, mVAE and eVAE models trained on MNIST and TFD with different latent dimension D. For mVAE and eVAE models on MNIST, the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used. All epitomes are non-overlapping.
We observe that for VAE, increasing the number of hidden units H (e.g. from 500 to 1000) for a fixed network depth L has a negligible effect on the number of active units and on performance. On the other hand, as the depth L of the encoder and decoder is increased, the number of active units in VAE decreases, though performance is still able to improve. This illustrates that an increase in the complexity of the interactions through the use of multiple layers counteracts the perils of over-pruning. However, this comes at the cost of a substantial increase in the number of model parameters to be learned.
In contrast, for any given model configuration, eVAE is able to avoid the over-pruning effect in the number of active units and outperform VAE. While both VAE and eVAE approach what appears to be a ceiling in generative performance with large models for MNIST, the difference between VAE and eVAE is significant for all TFD models.
Table 1 also shows results for mVAE, the ablative version of eVAE where parameters are not shared. The number of deterministic units per layer in each mVAE component is computed so that the total number of parameters is comparable to eVAE. While mVAE and eVAE perform comparably on MNIST especially with larger models (reaching a limit in performance that VAE also nears), eVAE demonstrates an advantage on smaller models and when the data is more complex (TFD). These settings are in line with the intuition that parameter sharing is helpful in more challenging settings when each epitome can also benefit from general features learned across the training set.
Table 1: Parzen log-densities for varying depth L and width H (number of active units in parentheses).

                    H = 500                       H = 1000
                    L = 1    L = 2    L = 3       L = 1    L = 2    L = 3
MNIST
D = 8    VAE        283(8)   292(8)   325(8)      283(8)   290(8)   322(6)
         mVAE       300(8)   328(8)   337(8)      309(8)   333(8)   335(8)
         eVAE       300(8)   330(8)   337(8)      312(8)   331(8)   334(8)
D = 24   VAE        213(22)  273(11)  305(8)      219(24)  270(12)  311(7)
         mVAE       309(24)  330(24)  336(24)     313(24)  333(24)  338(24)
         eVAE       311(24)  331(24)  336(24)     317(24)  332(24)  336(24)
D = 48   VAE        213(24)  267(13)  308(8)      224(24)  273(12)  309(8)
         mVAE       314(48)  334(48)  336(48)     315(48)  333(48)  337(48)
         eVAE       319(48)  334(48)  337(48)     321(48)  334(48)  332(48)
TFD
4.4 COMPARISON WITH OTHER MODELS
In Table 2 we compare the generative performance of eVAE with other models, using Parzen log-density. VAE−, mVAE−, and eVAE− refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. Encoders and decoders have L = 2 layers of H = 1000 deterministic units. D = 8 for MNIST, and D = 15 for TFD. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1. For MNIST, the VAE model is (L,H,D) = (3, 500, 8), mVAE is (3, 1000, 24), and eVAE is (3, 500, 48). For TFD, the VAE model is (3, 500, 15), mVAE is (3, 1000, 50), and eVAE is (3, 500, 25).
We observe that eVAE significantly improves over VAE and is competitive with several state-of-the-art models, notably Adversarial Autoencoders. Samples from eVAE on MNIST and TFD are shown in Fig. 7.
5 RELATED WORK
A number of applications use variational autoencoders as a building block. In Gregor et al. (2015), a generative model for images is proposed in which the generator of the VAE is an attention-based recurrent model that is conditioned on the canvas drawn so far. Eslami et al. (2016) proposes a VAE-based recurrent generative model that describes images as formed by sequentially choosing an object to draw and adding it to a canvas that is updated over time. In Kulkarni et al. (2015), VAEs are used for rendering 3D objects. Conditional variants of VAE are also used for attribute-specific image generation (Yan et al., 2015) and future frame synthesis (Xue et al., 2016). All these applications suffer from the problem of model over-pruning and hence have adopted strategies that take away the clean mathematical formulation of VAE. We have discussed these in § 2.1.
A complementary approach to the problem of model pruning in VAE was proposed in Burda et al. (2015); the idea is to improve the variational bound by using multiple weighted posterior samples. Epitomic VAE provides improved latent capacity even when only single sample is drawn from the posterior.
Methods to increase the flexibility of posterior inference are proposed in (Salimans et al., 2015; Rezende & Mohamed, 2016; Kingma et al., 2016). In Rezende & Mohamed (2016), posterior approximation is constructed by transforming a simple initial density into a complex one with a sequence of invertible transformations. In a similar vein, Kingma et al. (2016) augments the flexibility of the posterior through autoregression over projections of stochastic latent variables. However, the problem of over pruning still persists: for instance, Kingma et al. (2016) enforces a minimum information constraint to ensure that all units are used.
Related is the research in unsupervised sparse overcomplete representations, especially with group sparsity constraints c.f. (Gregor et al., 2011; Jenatton et al., 2011). In the epitomic VAE, we have similar motivations that enable learning better generative models of data.
6 CONCLUSION
This paper introduces Epitomic VAE, an extension of variational autoencoders, to address the problem of model over-pruning, which has limited the generation capability of VAEs in high-dimensional spaces. Based on the intuition that subconcepts can be modeled with fewer dimensions than the full latent space, epitomic VAE models the latent space as multiple shared subspaces that have learned specializations. We show how this model addresses the model over-pruning problem in a principled manner, and present qualitative and quantitative analysis of how eVAE enables increased utilization of the model capacity to model greater data variability. We believe that modeling the latent space as multiple structured subspaces is a promising direction of work, and allows for increased effective capacity that has potential to be combined with methods for increasing the flexibility of posterior inference.
7 ACKNOWLEDGMENTS
We thank the reviewers for constructive comments. Thanks to helpful discussions with Marc’Aurelio Ranzato, Joost van Amersfoort and Ross Girshick. We also borrowed the term ‘epitome’ from an earlier work of Jojic et al. (2003).
8 APPENDIX
8.1 EFFECT OF KL WEIGHT λ ON RECONSTRUCTION
We visualize VAE reconstructions as the KL term weight λ is tuned down to keep latent units active. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. While reconstruction performance is good, generation is poor (Fig. 1). This illustrates that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model well the full posterior manifold.
8.2 EFFECT OF INCREASING LATENT DIMENSION ON RECONSTRUCTION
In § 4.1, Fig. 5 shows the effect of increasing latent dimension on generation for VAE, Dropout VAE, and eVAE models. Here we show the effect of the same factor on reconstruction quality for the models. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. As the dimension of the latent variable z increases from 2-d to 20-d, reconstruction becomes very sharp (the best model), but generation degrades (Fig. 5). Dropout VAE has poorer reconstruction but still blurred generation, while eVAE is able to achieve both good reconstruction and generation.
8.3 EVALUATION METRIC FOR GENERATION
There have been multiple approaches for evaluation of variational autoencoders, in particular log-likelihood lower bound and log-density (using the Parzen window estimator, Rifai et al. (2012)). Here we show that for the generation task, log-density is a more appropriate measure than log-likelihood lower bound. Models are trained on binarized MNIST, to be consistent with literature reporting likelihood bounds. The encoder and decoder for all models consist of a single deterministic layer with 500 units.
Table 3 shows the log-likelihood bound and log-density for VAE and eVAE models as the dimensionD of latent variable z is increased. For VAE, as D increases, the likelihood bound improves, but the log-density decreases. Referring to the corresponding generation samples in Fig. 11, we see that sample quality in fact decreases, counter to the likelihood bound but consistent with log-density. The reported VAE bounds and sample quality also matches Figs. 2 and 5 in Kingma & Welling (2014). On the other hand, eVAE log-density first decreases and then improves with larger D. We see that this is also consistent with Fig. 11, where eVAE samples for D = 8 are the most interpretable overall, and D = 48 improves over D = 24 but still has some degenerate or washed out digits. (Note that these models are consistent with Kingma & Welling (2014) but are not the best-performing models reported in our experiments.) Since our work is motivated by the generation task, we therefore use log-density as the evaluation metric in our experiments.
Intuitively, the reason why VAE improves the likelihood bound but generation quality still decreases can be seen in the breakdown of the bound into the reconstruction and KL terms (Table 3 and Fig. 10). The improvement of the bound is due to large improvement in reconstruction, but the KL becomes significantly worse. This has a negative effect on generation, since the KL term is closely related to generation. On the other hand, eVAE reconstruction improves to a lesser extent, but the KL is also not as strongly affected, so generation ability remains stronger overall. As a result of this, simply tuning the KL weight λ in the training objective is insufficient to improve VAE generation, as shown in Fig. 1 in the main paper.
| 1. What is the main contribution of the paper, and how does it address the problem of latent variable over pruning in VAEs?
2. How does the proposed epitomic VAE differ from a mixture of VAEs, and what are the implications of this difference for the model's performance?
3. Why is the experimental section misleading, and what specific issues are present in the evaluation of log-likelihood and the presentation of results for binary and continuous MNIST?
4. What is the reviewer's opinion on the use of the term "overfitting" in the paper, and how does it relate to the claimed advantage of the proposed model over traditional VAEs?
5. What questions do you have regarding the implementation and behavior of dropout in the dropout VAE variant, and how might this impact the interpretation of the results? | Review | Review
The paper presents a version of a variational autoencoder that uses a discrete latent variable that masks the activation of the latent code, making only a subset (an "epitome") of the latent variables active for a given sample. The justification for this choice is that by letting different latent variables be active for different samples, the model is forced to use more of the latent code than a usual VAE.
While the problem of latent variable over pruning is important and has been highlighted in the literature before in the context of variational inference, the proposed solution doesn't seem to solve it beyond, for instance, a mixture of VAEs. Indeed, a mixture of VAEs would have been a great baseline for the experiments in the paper, as it uses a categorical variable (the mixture component) along with multiple VAEs. The main difference between a mixture and an epitomic VAE is the sharing of parameters between the different "mixture components" in the epitomic VAE case.
The experimental section presents misleading results.
1. The log-likelihood of the proposed models is evaluated with a Parzen window estimator. A significantly more accurate lower bound on the likelihood that is available for VAEs is not reported. In the reviewer's experience, a continuous MNIST likelihood of upwards of 900 nats is easy to obtain with a modestly sized VAE.
2. The exposition changes between dealing with binary MNIST and continuous MNIST experiments. This is confusing, because these versions of the dataset present different challenges for modeling with likelihood-based models. Continuous MNIST is harder to model with high-capacity likelihood optimizing models, because the dataset lies in a proper subspace of the 784-dimensional space (some pixels are always or almost always equal to 0), and hence probability density can be arbitrarily large on this subspace. Models that try to maximize the likelihood often exploit this option of maximizing the likelihood by concentrating the probability around the subspace at the expense of actually modeling the data. The samples of a well-tuned VAE trained on binary MNIST (or a VAE trained on continuous MNIST to which noise has been appropriately added) tend to look much better than the ones presented in experimental results.
3. The claim that the VAE uses its capacity to "overfit" to the training data is not justified. No evidence is presented that the reconstruction likelihood on the training data is significantly higher than the reconstruction likelihood on the test data. It's misleading to use a technical term like "overfitting" to mean something else.
4. The use of dropout in dropout VAE is not specified: is dropout applied to the latent variables, or to the hidden layers of the encoder/decoder? The two options will exhibit very different behaviors.
5. MNIST eVAE samples and reconstructions look more like a more diverse version of 2d VAE samples/reconstructions - they are blurry, the model doesn't encode precise position of strokes. This is consistent with an interpretation of eVAE as a kind of mixture of smaller VAEs, rather than a higher-dimensional VAE. It is misleading to claim that it outperforms a high-dimensional VAE based on this evidence.
In reviewer's opinion the paper is not yet ready for publication. A stronger baseline VAE evaluated with evidence lower bound (or another reliable method) is essential for comparing the proposed eVAE to VAEs. |
ICLR | Title
Epitomic Variational Autoencoders
Abstract
In this paper, we propose epitomic variational autoencoder (eVAE), a probabilistic generative model of high-dimensional data. eVAE is composed of a number of sparse variational autoencoders called ‘epitomes’ such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. We show that the proposed model greatly overcomes the common problem in variational autoencoders (VAE) of model over-pruning. We substantiate that eVAE is efficient in using its model capacity and generalizes better than VAE, by presenting qualitative and quantitative results on MNIST and TFD datasets.
1 INTRODUCTION
Unsupervised learning holds the promise of learning the inherent structure in data so as to enable many future tasks including generation, prediction and visualization. Generative modeling is an approach to unsupervised learning wherein an explicit stochastic generative model of data is defined, such that independent draws from this model are likely to produce the original data distribution, while the learned latent structure itself is useful in prediction, classification and visualization tasks.
The recently proposed variational autoencoder (VAE) (Kingma & Welling, 2014) is an example of one such generative model. VAE pairs a top down generative model with a bottom up recognition network for amortized probabilistic inference. Both networks are jointly trained to maximize a variational lower bound on the data likelihood. A number of recent works use VAE as a modeling framework, including iterative conditional generation of images (Gregor et al., 2015) and conditional future frame prediction (Xue et al., 2016).
A commonly known problem with the VAE lower bound is that it is known to self-prune or under utilize the model’s capacity (Mackay, 2001). This can lead to poor generalization. A common approach to alleviate this problem is to resort to optimization schedules and regularization techniques (Bowman et al., 2015; Kaae Sonderby et al., 2016) that trade-off two competing terms, latent cost and data reconstruction, in the bound. Fig. 1 provides a quick insight into this problem of over-pruning and how commonly used regularization techniques may not be sufficient. Detailed discussion is provided in § 2.1. In this paper, we take a model-based approach to directly address this problem. We present an extension of variational autoencoders called epitomic variational autoencoder (Epitomic VAE, or eVAE, for short) that automatically learns to utilize its model capacity more effectively, leading to better generalization. Consider the task of learning a D-dimensional representation for the examples in a given dataset. The motivation for our model stems from the hypothesis that a single example in the dataset can be sufficiently embedded in a smaller K-dimensional (K D) subspace of D. However, different data points may need different subspaces, hence the need for D. Sparse coding methods also exploit a similar hypothesis. Epitomic VAE exploits sparsity using an additional categorical latent variable in the encoder-decoder architecture of the VAE. Each value of the variable activates only a contiguous subset of latent stochastic variables to generate an observation. This ∗Work done during an internship at Facebook AI Research.
enables learning multiple shared subspaces such that each subspace specializes, and also increases the use of model capacity (Fig. 4), enabling better representation. The choice of the name Epitomic VAE comes from the fact that multiple miniature models with shared parameters are trained simultaneously.
The rest of the paper is organized as follows. We first describe variational autoencoders and mathematically show the model pruning effect in § 2. We then present our epitomic VAE model in § 3 that overcomes these shortcomings. Experiments showing qualitative and quantitative results are presented in § 4. We finally provide more general context of our work in the related work in § 5, and conclude with discussions.
2 VARIATIONAL AUTOENCODERS
The generative model (decoder) of a VAE consists of first generating a D-dimensional stochastic variable z drawn from a standard multivariate Gaussian
p(z) = \mathcal{N}(z; 0, I)   (1)
and then generating the N-dimensional observation x from a parametric family of distributions such as a Gaussian
p_\theta(x|z) = \mathcal{N}\big( x;\, f_1(z),\, \exp(f_2(z)) \big)   (2)
where f_1 and f_2 define non-linear deterministic transformations of z modeled using a neural network. The parameters θ of the model are the weights and biases of the neural network that encodes the functions f_1 and f_2.
Given a dataset X of T i.i.d samples, the model is learned such that it maximizes the likelihood of the parameters to have generated the data, p(X|θ). This maximization requires marginalizing the unobserved z. However, computing p(z|x) is intractable due to dependencies induced between the zi when conditioned on x.
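As a hedged illustration of this generative process (our own sketch; the two networks below are small untrained stand-ins for the learned functions f_1 and f_2, and the sizes are arbitrary), drawing a sample proceeds as:

import torch
import torch.nn as nn

D, N = 20, 784  # latent and observation dimensionality (illustrative)
f1 = nn.Sequential(nn.Linear(D, 500), nn.ReLU(), nn.Linear(500, N))  # mean of p(x|z)
f2 = nn.Sequential(nn.Linear(D, 500), nn.ReLU(), nn.Linear(500, N))  # log-variance of p(x|z)

z = torch.randn(1, D)                                   # z ~ N(0, I), eq. (1)
x = f1(z) + torch.exp(0.5 * f2(z)) * torch.randn(1, N)  # x ~ N(f1(z), exp(f2(z))), eq. (2)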
Variational autoencoders, as the name suggests, use variational inference to approximate the exact posterior with a surrogate parameterized distribution. However, instead of having separate parameters for the posterior distribution of each observation, VAE amortizes the cost by learning a neural network with parameters φ that outputs the posterior distribution of the form qφ(z|x) = ∏ d q(zi|x). This results in the lower bound given by
log pθ(X) = T∑ t=1 log ∫ z pθ(x (t), z) (3)
≥ T∑ t=1 Eqφ(z|x(t))[log p(x (t)|z)]−KL ( qφ(z|x(t)) ‖ p(z) ) (4)
VAE is trained with standard backpropagation using minibatch gradient descent to minimize the negative of the lowerbound
Cvae = − T∑ t=1 Eqφ(z|x(t))[log p(x (t)|z)] + T∑ t=1 D∑ i=1 KL ( qφ(zi|x(t)) ‖ p(zi) ) (5)
2.1 AUTOMATIC MODEL OVER-PRUNING IN VAE
Cvae introduces a trade-off between data reconstruction (first term) and satisfying the independence assumption of p(z) (second term, KL).
Of particular interest is the KL term. Since the KL term is the sum of independent contributions from each dimension d of D, it provides unduly freedom for the model in how it minimizes this term. In particular, the model needs to only ensure that the overall KL term is minimized, on average, and not per component wise. The easiest way for the model to do this is to have a large number of components that satisfies the KL term effectively, by turning off the units so that the posterior for those units becomes the same as the prior1. This effect is quite pronounced in the early iterations of
1Since log variance is modeled using the neural network, turning it off will lead to a variance of 1.
training: the model for log p(x|z) is quite impoverished and hence the easiest way to improve the bound is by turning off the KL terms. However, once the units have become inactive, it is almost impossible for them to resurrect, and hence the full capacity of the model is not utilized.
A quantity that is useful in understanding this effect, is the activity level of a unit. Following Burda et al. (2015), we define a unit to be used, or “active”, if Au = Covx(Eu∼q(u|x)[u]) > 0.02.
A commonly used approach to overcome this problem is to use a trade-off between the two terms using parameter λ so that the cost is
C = −Eqφ(z|x)[log p(x|z)] + λ D∑ i=1 KL ( qφ(zi|x) ‖ p(zi) ) (6)
Fig. 1 shows the effect of λ on unit activity and generation, with λ = 1 being the correct objective to optimize. While tuning down λ increases the number of active units, samples generated from the model are still poor. Fig. 2 shows generation using all units, active units only, and dead units only, for λ = 1. The model spends its capacity in ensuring that reconstruction of the training set is optimized (reconstruction visualizations are shown in § 8.1), at the cost of generalization. This has led to more sophisticated schemes such as using an annealed optimization schedule for λ (Bowman et al., 2015; Kaae Sonderby et al., 2016) or enforcing minimum KL contribution from subsets of the latent units (Kingma et al., 2016).
In this paper, we present a model based approach called “epitomic variational autoencoder” to address the problem of over pruning.
3 MODEL
We propose epitomic variational autoencoder (eVAE) to overcome the shortcomings of VAE by enabling more efficient use of model capacity to gain better generalization. We base this on the observation that while we may need a D-dimensional representation to accurately represent every example in a dataset, each individual example can be represented with a smaller K-dimensional subspace. As an example, consider MNIST with its variability in terms of digits, strokes and thick-
ness of ink, to name a few. While the overall D is large, it is likely that only a few K dimensions of D are needed to capture the variability in strokes of some digits (see Fig. 3).
Epitomic VAE can be viewed as a variational autoencoder with latent stochastic dimension D that is composed of a number of sparse variational autoencoders called epitomes, such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. In this paper, we assume simple structured sparsity for each epitome: in particular, only K contiguous dimensions of D are active2.
The generative process can be described as follows: A D-dimensional stochastic variable z is drawn from a standard multivariate Gaussian p(z) = N (z; 0; I). In tandem, an epitome is implicitly chosen through an epitome selector variable y, which has a uniform prior over possible epitomes. The N -dimensional observation x is then drawn from a Gaussian distribution:
pθ(x|y, z) = N (x; f1(my z), exp(f2(my z))) (7) my enforces the epitome constraint: it is also aD-dimensional vector that is zero everywhere except in the active dimensions of the epitome. is element-wise multiplication between the two operands. Thus, my masks the dimensions of z other than those dictated by the choice of y. Fig. 3 illustrates this for an 8-d z with epitome size K = 2, so that there are four possible epitomes (the model also allows for overlapping epitomes, but this is not shown for illustration purposes). Epitome structure is defined using size K and stride s, where s = 1 corresponds to full overlap in D dimensions3. Our model generalizes the VAE and collapses to a VAE when D = K = s.
f1( ) and f2( ) define non-linear deterministic transformations of modeled using neural networks. Note that the model does not snip off the K dimensions corresponding to an epitome, but instead deactivates the D-K dimensions that are not part of the chosen epitome. While the same deterministic functions f1 and f2 are used for any choice of epitome, the functions can still specialize due to the
2The model also allows for incorporating other forms of structured sparsity. 3The strided epitome structure allows for learning O(D) specialized subspaces, that when sampled during generation can each produce good samples. In contrast, if only a simple sparsity prior is introduced over arbitrary subsets (e.g. with Bernoulli latent units to specify if a unit is active for a particular example), it can lead to poor generation results, which we confirmed empirically but do not report. The reason for this is as follows: due to an exponential number of potential combinations of latent units, sampling a subset from the prior during generation cannot be straightforwardly guaranteed to be a good configuration for a subconcept in the data, and often leads to uninterpretable samples.
sparsity of their inputs. Neighboring epitomes will have more overlap than non-overlapping ones, which manifests itself in the representation space; an intrinsic ordering in the variability is learned.
3.1 OVERCOMING OVER-PRUNING
Following Kingma & Welling (2014), we use a recognition network q(z, y|x) for approximate posterior inference, with the functional form
q(z, y|x) = q(y|x)q(z|y,x) (8) = q(y|x)N (z;my µ, exp (my φ)) (9)
where µ = h1(x) and φ = h2(x) are neural networks that map x to D dimensional space.
We use a similar masking operation to deactivate units, as decided by the epitome y. Unlike the generative model (eq. 7), the masking operation defined by y operates directly on outputs of the recognition network that characterizes the parameters of q(z|y,x). As in VAE, we can derive the lower bound on the log probability of a dataset, and hence the cost function (negative bound) is
Cevae = − T∑ t=1 Eq(z,y|x(t))[log p(x (t)|y, z)]
− T∑ t=1 KL [ qφ(y|x(t)) ‖ pθ(y) ] − T∑ t=1 ∑ y qφ(y|x(t))KL [ qφ(z|y,x(t)) ‖ pθ(z) ] (10)
The epitomic VAE departs from the VAE in how the contribution from the KL term is constrained. Let us consider the third term in eq. 10, and substituting in eq. 9:
T∑ t=1 ∑ y qφ(y|x(t))KL [ qφ(z|y,x(t)) ‖ pθ(z) ] (11)
= T∑ t=1 ∑ y qφ(y|x(t))KL [ N (z;my µ(t), exp (my φ(t))) ‖ N (z;0, I) ] (12)
= T∑ t=1 ∑ y qφ(y|x(t)) D∑ d=1 1[md,y = 1]KL [ N (zd;µ(t)d , exp(φ (t) d )) ‖ N (0, 1) ] (13)
where 1[?] is an indicator variable that evaluates to 1 if only if its operand ? is true.
For a training example x(t) and for a fixed y (and hence the corresponding epitome), the number of KL terms that will contribute to the bound is exactly K. The dimensions of z that are not part of the corresponding epitome will have zero KL because their posterior parameters are masked to have unit Gaussian, the same as the prior. By design, this ensures that only the K dimensions that explain x(t) contribute to Cevae.
This is quite in contrast to how VAE optimizes Cvae (§. 2.1). For Cvae to have a small contribution from the KL term of a particular zd, it has to infer that unit to have zero mean and unit variance for many examples in the training set. In practice, this results in VAE completely deactivating units, and leading to many dead units. EpitomicVAE chooses the epitome based on x(t) and ensures that the dimensions that are not useful in explaining x(t) are ignored in Cevae. This means that the unit is still active, but by design, only a fraction of examples in the training set contributes a possible non-zero value to zd’s KL term in Cevae. This added flexibility gives the model the freedom to use more total units without deactivating them, while optimizing the bound. With these characteristics, during training, the data points will naturally group themselves to different epitomes, leading to a more balanced use of z.
In Fig. 4 we compare the activity levels of VAE, dropout VAE and our model. We see that compared with VAE, our model is able to better use the model capacity. In the same figure, we also compare with adding dropout to the latent variable z of the VAE (Dropout VAE). While this increases the number of active units, it generalizes poorly as it uses the dropout layers to merely replicate representation, in contrast to eVAE. See Fig. 5 along with the explanation in § 4.1 where we compare generation results for all three models.
3.2 TRAINING
The generative model and the recognition network are trained simultaneously, by minimizing Cevae in eq. 10.
For the stochastic continuous variable z, we use the reparameterization trick as in VAE. The trick involves reparametrizing the recognition distribution in terms of auxiliary variables with fixed distributions. This allows efficient sampling from the posterior distribution as they are deterministic functions of the inputs and auxiliary variables.
For the discrete variable y, we cannot use the reparameterization trick. We therefore approximate q(y|x) by a point estimate y∗ so that q(y|x) = δ(y = y∗), where δ evaluates to 1 only if y = y∗ and the best y∗ = argmin Cevae. We also explored modeling q(y|x) = Cat(h(x)) as a discrete distribution with h being a neural network. In this case, the backward pass requires either using REINFORCE or passing through gradients for the categorical sampler. In our experiments, we found that these approaches did not work well, especially when the number of possible values of y becomes large. We leave this as future work to explore.
The recognition network first computes µ and φ. It is then combined with the optimal y∗ for each example, to arrive at the final posterior. The model is trained using a simple algorithm outlined in Algo. 1. Backpropagation with minibatch updates is used, with each minibatch constructed to be balanced with respect to epitome assignment.
Algorithm 1 Learning Epitomic VAE 1: θ, φ←Initialize parameters 2: for until convergence of parameters (θ, φ) do 3: Assign each x to its best y∗ = argmin Cevae 4: Randomize and then partition data into minibatches with each minibatch having proportion-
ate number of examples ∀ y 5: for k ∈ numbatches do 6: Update model parameters using kth minibatch consisting of x, y pairs 7: end for 8: end for
4 EXPERIMENTS
We present experimental results on two datasets, MNIST (LeCun et al., 1998) and Toronto Faces Database (TFD) (Susskind et al., 2010). We show generation results that illustrate eVAE’s ability to better utilize model capacity for modeling data variability, and then evaluate the effect of epitome choice and model complexity. Finally we present quantitative comparison with other models and qualitative samples from eVAE. We emphasize that in all experiments, we keep the weight of the KL term λ = 1 to evaluate performance under optimizing the true derived lower bound, without introducing an additional hyperparameter to tune.
We use standard splits for both MNIST and TFD. In our experiments, the encoder and decoder are fullyconnected networks, and we show results for different depths and number of units of per layer. ReLU nonlinearities are used, and models are trained using the Adam update rule (Kingma & Ba, 2014) for 200 epochs (MNIST) and 250 epochs (TFD), with base learning rate 0.001.
4.1 OVERCOMING OVER-PRUNING.
We first qualitatively illustrate the ability of eVAE to overcome over-pruning and utilize latent capacity to model greater variability in data. Fig. 5 compares generation results for VAE, Dropout VAE, and eVAE for different dimensionsD of latent variable z. WithD = 2, VAE generates realistic digits but suffers from lack of diversity. When D is increased to 5, the generation exhibits some greater variability but also begins to degrade in quality. As D is further increased to 10 and 20, the degradation continues. As explained in Sec. 2.1, this is due to VAE’s propensity to use only a portion of its latent units for modeling the training data and the rest to minimize the KL term. The under-utilization of model capacity means that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model the space of possible generations. The effect of this is good reconstruction (examples are shown in Fig. 9) but poor generation samples.
Adding dropout to the latent variable z of the VAE (row 2 of Fig. 5) encourages increased usage of model capacity, as shown in Fig. 4 and the discussion in Sec. 2. However, due to the stochastic nature of dropout, the model is forced to use the additional capacity to encode redundancy in the representation. It therefore does not achieve the desired effect of encoding additional data variability, and furthermore leads to blurred samples due to the redundant encoding. Epitomic VAE addresses the crux of the problem by learning multiple specialized subspaces. Since the effective dimension of any example is still small, eVAE is able to model each subspace well, while encoding variability through multiple possibly shared subspaces. This enables the model to overcome over-pruning from which VAE suffered. Fig. 5 shows that as the dimension D of z is increased
while maintaining epitomes of size K = 2, eVAE is able to model greater variability in the data. Highlighted digits in the 20-d eVAE show multiple styles such as crossed versus un-crossed 7, and pointed, round, thick, and thin 4s. Additional visualization of the variability in the learned 2-d manifolds are shown in Fig. 3. In contrast, the 2-d VAE generates similar-looking digits, and is unable to increase variability and maintain sample quality as the latent dimension is increased.
4.2 CHOICE OF EPITOME SIZE
We next investigate how the choice of epitome size, K, affects generation performance. We evaluate the generative models quantitatively through their samples by measuring the log-density with a Parzen window estimator Rifai et al. (2012). Fig. 6 shows the Parzen log-density for different choices of epitome size on MNIST, with encoder and decoder consisting of a single deterministic layer of 500 units. Epitomes are nonoverlapping, and the results are grouped by total dimension D of the latent variable z. For comparison, we also show the log-density for VAE models with the same dimension D, and for mixture VAE (mVAE), an ablative version of eVAE where parameters are not shared. mVAE can also be seen as a mixture of independent VAEs trained in the same manner as eVAE. The number of deterministic units in each mVAE component is computed so that the total number of parameters is comparable to eVAE.
As we increase D, the performance of VAE drops significantly, due to over-pruning. In fact, the number of active units for VAE are 8, 22 and 24 respectively, forD values of 8, 24 and 48. In contrast, eVAE performance increases as we increase D, with an epitome size K that is significantly smaller than D. Table 1 provides more comparisons. This confirms the advantage of using eVAE to avoid overpruning and effectively capture data distribution.
eVAE also performs comparably or better than mVAE at all epitome sizes. Intuitively, the advantage of parameter sharing in eVAE is that each epitome can also benefit from general features learned across the training set.
4.3 INCREASING COMPLEXITY OF ENCODER AND DECODER
Here, we would like to understand the role of encoder and decoder architectures on over pruning, and the generative performance. We control model complexity through number of layers L of deterministic hidden units, and number of hidden units H in each deterministic layer.
Table 1 shows the Parzen log-densities of VAE, mVAE and eVAE models trained on MNIST and TFD with different latent dimension D. For mVAE and eVAE models on MNIST, the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used. All epitomes are non-overlapping.
We observe that for VAE, increasing the number of hidden units H (e.g. from 500 to 1000) for a fixed network depth L has a negligible effect on the number of active units and performance. On the other hand, as the depth of the encoder and decoder L is increased, the number of active units in VAE decreases though performance is still able to improve. This illustrates that increase in the complexity of the interactions through use of multiple
layers counteract the perils of the over-pruning. However, this comes with the cost of substantial increase in the number of model parameters to be learned.
In contrast, for any given model configuration, eVAE is able to avoid the over-pruning effect in the number of active units and outperform VAE. While both VAE and eVAE approach what appears to be a ceiling in generative performance with large models for MNIST, the difference between VAE and eVAE is significant for all TFD models.
Table 1 also shows results for mVAE, the ablative version of eVAE where parameters are not shared. The number of deterministic units per layer in each mVAE component is computed so that the total number of parameters is comparable to eVAE. While mVAE and eVAE perform comparably on MNIST especially with larger models (reaching a limit in performance that VAE also nears), eVAE demonstrates an advantage on smaller models and when the data is more complex (TFD). These settings are in line with the intuition that parameter sharing is helpful in more challenging settings when each epitome can also benefit from general features learned across the training set.
H = 500 H = 1000 L = 1 L = 2 L = 3 L = 1 L = 2 L = 3
MNIST
D = 8 VAE 283(8) 292(8) 325(8) 283(8) 290(8) 322(6) mVAE 300(8) 328(8) 337(8) 309(8) 333(8) 335(8) eVAE 300(8) 330(8) 337(8) 312(8) 331(8) 334(8)
D = 24 VAE 213(22) 273(11) 305(8) 219(24) 270(12) 311(7) mVAE 309(24) 330(24) 336(24) 313(24) 333(24) 338(24) eVAE 311(24) 331(24) 336(24) 317(24) 332(24) 336(24)
D = 48 VAE 213(24) 267(13) 308(8) 224(24) 273(12) 309(8) mVAE 314(48) 334(48) 336(48) 315(48) 333(48) 337(48) eVAE 319(48) 334(48) 337(48) 321(48) 334(48) 332(48)
TFD
4.4 COMPARISON WITH OTHER MODELS
In Table 2 we compare the generative performance of eVAE with other models, using Parzen log-density. VAE−, mVAE−, and eVAE− refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. Encoders and decoders have L = 2 layers of H = 1000 deterministic units. D = 8 for MNIST, and D = 15 for TFD. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1. For MNIST, the VAE model is (L,H,D) = (3, 500, 8), mVAE is (3, 1000, 24), and eVAE is (3, 500, 48). For TFD, the VAE model is (3, 500, 15), mVAE is (3, 1000, 50), and eVAE is (3, 500, 25).
We observe that eVAE significantly improves over VAE and is competitive with several state-of-the-art models, notably Adversarial Autoencoders. Samples from eVAE on MNIST and TFD are shown in Fig. 7.
5 RELATED WORK
A number of applications use variational autoencoders as a building block. In Gregor et al. (2015), a generative model for images is proposed in which the generator of the VAE is an attention-based recurrent model that is conditioned on the canvas drawn so far. Eslami et al. (2016) proposes a VAE-based recurrent generative model that describes images as formed by sequentially choosing an object to draw and adding it to a canvas that is updated over time. In Kulkarni et al. (2015), VAEs are used for rendering 3D objects. Conditional variants of VAE are also used for attribute specific image generation (Yan et al., 2015) and future frame synthesis (Xue et al., 2016). All these applications suffer from the problem of model over-pruning and hence have adopted strategies that takes away the clean mathematical formulation of VAE. We have discussed these in § 2.1.
A complementary approach to the problem of model pruning in VAE was proposed in Burda et al. (2015); the idea is to improve the variational bound by using multiple weighted posterior samples. Epitomic VAE provides improved latent capacity even when only single sample is drawn from the posterior.
Methods to increase the flexibility of posterior inference are proposed in (Salimans et al., 2015; Rezende & Mohamed, 2016; Kingma et al., 2016). In Rezende & Mohamed (2016), posterior approximation is constructed by transforming a simple initial density into a complex one with a sequence of invertible transformations. In a similar vein, Kingma et al. (2016) augments the flexibility of the posterior through autoregression over projections of stochastic latent variables. However, the problem of over pruning still persists: for instance, Kingma et al. (2016) enforces a minimum information constraint to ensure that all units are used.
Related is the research in unsupervised sparse overcomplete representations, especially with group sparsity constraints c.f. (Gregor et al., 2011; Jenatton et al., 2011). In the epitomic VAE, we have similar motivations that enable learning better generative models of data.
6 CONCLUSION
This paper introduces Epitomic VAE, an extension of variational autoencoders, to address the problem of model over-pruning, which has limited the generation capability of VAEs in high-dimensional spaces. Based on the intuition that subconcepts can be modeled with fewer dimensions than the full latent space, epitomic VAE models the latent space as multiple shared subspaces that have learned specializations. We show how this model addresses the model over-pruning problem in a principled manner, and present qualitative and quantitative analysis of how eVAE enables increased utilization of the model capacity to model greater data variability. We believe that modeling the latent space as multiple structured subspaces is a promising direction of work, and allows for increased effective capacity that has potential to be combined with methods for increasing the flexibility of posterior inference.
7 ACKNOWLEDGMENTS
We thank the reviewers for constructive comments. Thanks to helpful discussions with Marc’Aurelio Ranzato, Joost van Amersfoort and Ross Girshick. We also borrowed the term ‘epitome’ from an earlier work of Jojic et al. (2003).
8 APPENDIX
8.1 EFFECT OF KL WEIGHT λ ON RECONSTRUCTION
We visualize VAE reconstructions as the KL term weight λ is tuned down to keep latent units active. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. While reconstruction performance is good, generation is poor (Fig. 1). This illustrates that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model well the full posterior manifold.
8.2 EFFECT OF INCREASING LATENT DIMENSION ON RECONSTRUCTION
In § 4.1, Fig. 5 shows the effect of increasing latent dimension on generation for VAE, Dropout VAE, and eVAE models. Here we show the effect of the same factor on reconstruction quality for the models. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. As the dimension of the latent variable z increases from 2-d to 20-d, reconstruction becomes very sharp (the best model), but generation degrades (Fig. 5). Dropout VAE has poorer reconstruction but still blurred generation, while eVAE is able to achieve both good reconstruction and generation.
8.3 EVALUATION METRIC FOR GENERATION
There have been multiple approaches for evaluation of variational autoencoders, in particular log-likelihood lower bound and log-density (using the Parzen window estimator, Rifai et al. (2012)). Here we show that for the generation task, log-density is a more appropriate measure than log-likelihood lower bound. Models are trained on binarized MNIST, to be consistent with literature reporting likelihood bounds. The encoder and decoder for all models consist of a single deterministic layer with 500 units.
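To make the protocol concrete, a minimal sketch of the Parzen-window log-density follows; it is not the authors' code, and the bandwidth sigma and the array shapes are assumptions:

```python
# Hypothetical sketch: Parzen-window (Gaussian KDE) log-density of test points
# under a density fit to generated samples, reported as the mean over the test set.
import numpy as np
from scipy.special import logsumexp

def parzen_log_density(samples, test, sigma):
    """samples: (S, N) generated samples; test: (T, N) test points; sigma: bandwidth."""
    S, N = samples.shape
    diffs = (test[:, None, :] - samples[None, :, :]) / sigma          # (T, S, N)
    log_kernel = -0.5 * np.sum(diffs ** 2, axis=2)                    # (T, S)
    log_norm = np.log(S) + N * np.log(sigma * np.sqrt(2.0 * np.pi))
    return float(np.mean(logsumexp(log_kernel, axis=1) - log_norm))   # mean log-density
```

In practice the bandwidth sigma would be chosen on a validation split, and the resulting mean log-density is reported in nats.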
Table 3 shows the log-likelihood bound and log-density for VAE and eVAE models as the dimensionD of latent variable z is increased. For VAE, as D increases, the likelihood bound improves, but the log-density decreases. Referring to the corresponding generation samples in Fig. 11, we see that sample quality in fact decreases, counter to the likelihood bound but consistent with log-density. The reported VAE bounds and sample quality also matches Figs. 2 and 5 in Kingma & Welling (2014). On the other hand, eVAE log-density first decreases and then improves with larger D. We see that this is also consistent with Fig. 11, where eVAE samples for D = 8 are the most interpretable overall, and D = 48 improves over D = 24 but still has some degenerate or washed out digits. (Note that these models are consistent with Kingma & Welling (2014) but are not the best-performing models reported in our experiments.) Since our work is motivated by the generation task, we therefore use log-density as the evaluation metric in our experiments.
Intuitively, the reason why VAE improves the likelihood bound but generation quality still decreases can be seen in the breakdown of the bound into the reconstruction and KL terms (Table 3 and Fig. 10). The improvement of the bound is due to large improvement in reconstruction, but the KL becomes significantly worse. This has a negative effect on generation, since the KL term is closely related to generation. On the other hand, eVAE reconstruction improves to a lesser extent, but the KL is also not as strongly affected, so generation ability remains stronger overall. As a result of this, simply tuning the KL weight λ in the training objective is insufficient to improve VAE generation, as shown in Fig. 1 in the main paper.
1. What is the main contribution of the paper in handling over-sampling in VAEs?
2. What are some potential limitations or alternative approaches to the proposed solution?
3. How does the reviewer assess the significance and novelty of the paper's content?
4. What are the strengths and weaknesses of the paper regarding its elegance and ability to address an important problem?
5. Are there any suggestions for improving the paper's clarity or terminology usage?
Review
This paper is refreshing and elegant in its handling of "over-sampling" in VAE. Problem is that good reconstruction requires more nodes in the latent layers of the VAE. Not all of them can or should be sampled from at the "creative" regime of the VAE. Which ones to choose? The paper offers a sensible solution. Problem is that real-life data-sets like CIFAR have not been tried, so the reader is hard-pressed to choose between many other, just as natural, solutions. One can e.g. run in parallel a classifier and let it choose the best epitome, in the spirit of spatial transformers, ACE, reference [1]. The list can go on. We hope that the paper finds its way to the conference because it addresses an important problem in an elegant way, and papers like this are few and far between!
On a secondary note, regarding terminology: Pls avoid using "the KL term" as in section 2.1, there are so many "KL terms" related to VAE-s, it ultimately gets out of control. "Generative error" is a more descriptive term, because minimizing it is indispensable for the generative qualities of the net. The variational error for example is also a "KL term" (equation (3.4) in reference [1]), as is the upper bound commonly used in VAE-s (your formula (5) and its equivalent - the KL expression as in formula (3.8) in reference [1]). The latter expression is frequently used and is handy for, say, importance sampling, as in reference [2].
[1] https://arxiv.org/pdf/1508.06585v5.pdf
[2] https://arxiv.org/pdf/1509.00519.pdf |
ICLR | Title
Epitomic Variational Autoencoders
Abstract
In this paper, we propose epitomic variational autoencoder (eVAE), a probabilistic generative model of high dimensional data. eVAE is composed of a number of sparse variational autoencoders called ‘epitome’ such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. We show that the proposed model greatly overcomes the common problem in variational autoencoders (VAE) of model over-pruning. We substantiate that eVAE is efficient in using its model capacity and generalizes better than VAE, by presenting qualitative and quantitative results on MNIST and TFD datasets.
1 INTRODUCTION
Unsupervised learning holds the promise of learning the inherent structure in data so as to enable many future tasks including generation, prediction and visualization. Generative modeling is an approach to unsupervised learning wherein an explicit stochastic generative model of data is defined, such that independent draws from this model are likely to produce the original data distribution, while the learned latent structure itself is useful in prediction, classification and visualization tasks.
The recently proposed variational autoencoder (VAE) (Kingma & Welling, 2014) is an example of one such generative model. VAE pairs a top down generative model with a bottom up recognition network for amortized probabilistic inference. Both networks are jointly trained to maximize a variational lower bound on the data likelihood. A number of recent works use VAE as a modeling framework, including iterative conditional generation of images (Gregor et al., 2015) and conditional future frame prediction (Xue et al., 2016).
A commonly known problem with the VAE lower bound is that it is known to self-prune or under utilize the model’s capacity (Mackay, 2001). This can lead to poor generalization. A common approach to alleviate this problem is to resort to optimization schedules and regularization techniques (Bowman et al., 2015; Kaae Sonderby et al., 2016) that trade-off two competing terms, latent cost and data reconstruction, in the bound. Fig. 1 provides a quick insight into this problem of over-pruning and how commonly used regularization techniques may not be sufficient. Detailed discussion is provided in § 2.1. In this paper, we take a model-based approach to directly address this problem. We present an extension of variational autoencoders called epitomic variational autoencoder (Epitomic VAE, or eVAE, for short) that automatically learns to utilize its model capacity more effectively, leading to better generalization. Consider the task of learning a D-dimensional representation for the examples in a given dataset. The motivation for our model stems from the hypothesis that a single example in the dataset can be sufficiently embedded in a smaller K-dimensional (K D) subspace of D. However, different data points may need different subspaces, hence the need for D. Sparse coding methods also exploit a similar hypothesis. Epitomic VAE exploits sparsity using an additional categorical latent variable in the encoder-decoder architecture of the VAE. Each value of the variable activates only a contiguous subset of latent stochastic variables to generate an observation. This ∗Work done during an internship at Facebook AI Research.
enables learning multiple shared subspaces such that each subspace specializes, and also increases the use of model capacity (Fig. 4), enabling better representation. The choice of the name Epitomic VAE comes from the fact that multiple miniature models with shared parameters are trained simultaneously.
The rest of the paper is organized as follows. We first describe variational autoencoders and mathematically show the model pruning effect in § 2. We then present our epitomic VAE model in § 3 that overcomes these shortcomings. Experiments showing qualitative and quantitative results are presented in § 4. We finally provide more general context of our work in the related work in § 5, and conclude with discussions.
2 VARIATIONAL AUTOENCODERS
The generative model (decoder) of a VAE consists of first generating a D-dimensional stochastic variable z drawn from a standard multivariate Gaussian
p(z) = N(z; 0, I)    (1)

and then generating the N-dimensional observation x from a parametric family of distributions such as a Gaussian

pθ(x|z) = N(x; f1(z), exp(f2(z)))    (2)

where f1 and f2 define non-linear deterministic transformations of z modeled using a neural network. The parameters θ of the model are the weights and biases of the neural network that encodes the functions f1 and f2.
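As a rough illustration of eqs. (1)-(2), the following sketch draws one observation by ancestral sampling; the random linear maps standing in for f1 and f2 and the chosen dimensions are placeholders, not the paper's networks:

```python
# Hypothetical sketch of ancestral sampling from eqs. (1)-(2): z is drawn from the
# standard Gaussian prior, then x from N(f1(z), exp(f2(z))), where f2 outputs the
# log-variance. The random linear maps W1, W2 stand in for the paper's networks.
import numpy as np

rng = np.random.default_rng(0)
D, N = 20, 784                                   # latent / observation sizes (assumed)
W1 = 0.1 * rng.normal(size=(D, N))
W2 = 0.1 * rng.normal(size=(D, N))

z = rng.normal(size=D)                           # z ~ N(0, I)                      (1)
mean, log_var = z @ W1, z @ W2                   # f1(z), f2(z)
x = rng.normal(mean, np.exp(0.5 * log_var))      # x ~ N(f1(z), exp(f2(z)))         (2)
```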
Given a dataset X of T i.i.d samples, the model is learned such that it maximizes the likelihood of the parameters to have generated the data, p(X|θ). This maximization requires marginalizing the unobserved z. However, computing p(z|x) is intractable due to dependencies induced between the zi when conditioned on x.
Variational autoencoders, as the name suggests, use variational inference to approximate the exact posterior with a surrogate parameterized distribution. However, instead of having separate parameters for the posterior distribution of each observation, VAE amortizes the cost by learning a neural network with parameters φ that outputs the posterior distribution of the form qφ(z|x) = ∏ d q(zi|x). This results in the lower bound given by
log pθ(X) = ∑_{t=1}^{T} log ∫_z pθ(x^(t), z)    (3)

≥ ∑_{t=1}^{T} E_{qφ(z|x^(t))}[log p(x^(t)|z)] − KL(qφ(z|x^(t)) ‖ p(z))    (4)
VAE is trained with standard backpropagation using minibatch gradient descent to minimize the negative of the lower bound

Cvae = − ∑_{t=1}^{T} E_{qφ(z|x^(t))}[log p(x^(t)|z)] + ∑_{t=1}^{T} ∑_{i=1}^{D} KL(qφ(z_i|x^(t)) ‖ p(z_i))    (5)
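A minimal sketch of the per-example cost in eq. (5), assuming a diagonal-Gaussian likelihood and the closed-form KL between N(mu, sigma^2) and N(0, 1); the function and variable names are illustrative:

```python
# Hypothetical sketch of the per-example cost in eq. (5): Gaussian reconstruction
# negative log-likelihood plus the closed-form KL between the diagonal Gaussian
# posterior N(mu, exp(log_var)) and the standard normal prior.
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL(N(mu, sigma^2) || N(0, 1)) per dimension, summed over the D dimensions
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def vae_cost(x, recon_mean, recon_log_var, mu, log_var):
    nll = 0.5 * np.sum((x - recon_mean) ** 2 / np.exp(recon_log_var)
                       + recon_log_var + np.log(2.0 * np.pi))
    return nll + kl_to_standard_normal(mu, log_var)
```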
2.1 AUTOMATIC MODEL OVER-PRUNING IN VAE
Cvae introduces a trade-off between data reconstruction (first term) and satisfying the independence assumption of p(z) (second term, KL).
Of particular interest is the KL term. Since the KL term is the sum of independent contributions from each dimension d of D, it provides undue freedom for the model in how it minimizes this term. In particular, the model only needs to ensure that the overall KL term is minimized on average, and not per component. The easiest way for the model to do this is to have a large number of components that satisfy the KL term effectively, by turning off the units so that the posterior for those units becomes the same as the prior.¹ This effect is quite pronounced in the early iterations of
¹ Since the log variance is modeled using the neural network, turning it off will lead to a variance of 1.
training: the model for log p(x|z) is quite impoverished and hence the easiest way to improve the bound is by turning off the KL terms. However, once the units have become inactive, it is almost impossible for them to resurrect, and hence the full capacity of the model is not utilized.
A quantity that is useful in understanding this effect is the activity level of a unit. Following Burda et al. (2015), we define a unit to be used, or “active”, if A_u = Cov_x(E_{u∼q(u|x)}[u]) > 0.02.
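A small sketch of how this statistic could be computed from posterior means collected over a dataset (the array layout and the threshold argument are assumptions):

```python
# Hypothetical sketch of the activity statistic A_u = Cov_x(E_{u~q(u|x)}[u]):
# the variance, across the dataset, of each unit's posterior mean. Units whose
# statistic falls below 0.02 are counted as dead.
import numpy as np

def active_units(posterior_means, threshold=0.02):
    """posterior_means: (T, D) array holding E[z|x] for every example x."""
    activity = np.var(posterior_means, axis=0)       # per-unit variance of the mean
    return activity, int(np.sum(activity > threshold))
```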
A commonly used approach to overcome this problem is to use a trade-off between the two terms using parameter λ so that the cost is
C = −E_{qφ(z|x)}[log p(x|z)] + λ ∑_{i=1}^{D} KL(qφ(z_i|x) ‖ p(z_i))    (6)
Fig. 1 shows the effect of λ on unit activity and generation, with λ = 1 being the correct objective to optimize. While tuning down λ increases the number of active units, samples generated from the model are still poor. Fig. 2 shows generation using all units, active units only, and dead units only, for λ = 1. The model spends its capacity in ensuring that reconstruction of the training set is optimized (reconstruction visualizations are shown in § 8.1), at the cost of generalization. This has led to more sophisticated schemes such as using an annealed optimization schedule for λ (Bowman et al., 2015; Kaae Sonderby et al., 2016) or enforcing minimum KL contribution from subsets of the latent units (Kingma et al., 2016).
In this paper, we present a model based approach called “epitomic variational autoencoder” to address the problem of over pruning.
3 MODEL
We propose epitomic variational autoencoder (eVAE) to overcome the shortcomings of VAE by enabling more efficient use of model capacity to gain better generalization. We base this on the observation that while we may need a D-dimensional representation to accurately represent every example in a dataset, each individual example can be represented with a smaller K-dimensional subspace. As an example, consider MNIST with its variability in terms of digits, strokes and thick-
ness of ink, to name a few. While the overall D is large, it is likely that only a few K dimensions of D are needed to capture the variability in strokes of some digits (see Fig. 3).
Epitomic VAE can be viewed as a variational autoencoder with latent stochastic dimension D that is composed of a number of sparse variational autoencoders called epitomes, such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. In this paper, we assume simple structured sparsity for each epitome: in particular, only K contiguous dimensions of D are active.²
The generative process can be described as follows: A D-dimensional stochastic variable z is drawn from a standard multivariate Gaussian p(z) = N (z; 0; I). In tandem, an epitome is implicitly chosen through an epitome selector variable y, which has a uniform prior over possible epitomes. The N -dimensional observation x is then drawn from a Gaussian distribution:
pθ(x|y, z) = N(x; f1(m_y ⊙ z), exp(f2(m_y ⊙ z)))    (7)

m_y enforces the epitome constraint: it is also a D-dimensional vector that is zero everywhere except in the active dimensions of the epitome. ⊙ is element-wise multiplication between the two operands. Thus, m_y masks the dimensions of z other than those dictated by the choice of y. Fig. 3 illustrates this for an 8-d z with epitome size K = 2, so that there are four possible epitomes (the model also allows for overlapping epitomes, but this is not shown for illustration purposes). Epitome structure is defined using size K and stride s, where s = 1 corresponds to full overlap in D dimensions.³ Our model generalizes the VAE and collapses to a VAE when D = K = s.
f1(·) and f2(·) define non-linear deterministic transformations of m_y ⊙ z, modeled using neural networks. Note that the model does not snip off the K dimensions corresponding to an epitome, but instead deactivates the D−K dimensions that are not part of the chosen epitome. While the same deterministic functions f1 and f2 are used for any choice of epitome, the functions can still specialize due to the
² The model also allows for incorporating other forms of structured sparsity.
³ The strided epitome structure allows for learning O(D) specialized subspaces, that when sampled during generation can each produce good samples. In contrast, if only a simple sparsity prior is introduced over arbitrary subsets (e.g. with Bernoulli latent units to specify if a unit is active for a particular example), it can lead to poor generation results, which we confirmed empirically but do not report. The reason for this is as follows: due to an exponential number of potential combinations of latent units, sampling a subset from the prior during generation cannot be straightforwardly guaranteed to be a good configuration for a subconcept in the data, and often leads to uninterpretable samples.
sparsity of their inputs. Neighboring epitomes will have more overlap than non-overlapping ones, which manifests itself in the representation space; an intrinsic ordering in the variability is learned.
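To make the masking of eq. (7) concrete, here is a minimal sketch under the assumption of contiguous epitomes parameterized by size K and stride s; the helper name and the toy dimensions are illustrative, not from the paper:

```python
# Hypothetical sketch of the epitome mask m_y in eq. (7): epitome y activates a
# contiguous block of K of the D latent dimensions, with the block's start
# determined by the stride s; the mask is applied elementwise to z.
import numpy as np

def epitome_mask(y, D, K, s):
    m = np.zeros(D)
    start = y * s                     # first active dimension of epitome y
    m[start:start + K] = 1.0
    return m

D, K, s = 8, 2, 2                     # four non-overlapping epitomes
z = np.random.default_rng(0).normal(size=D)
masked_z = epitome_mask(y=1, D=D, K=K, s=s) * z   # m_y (elementwise) z
```

With D = 8, K = 2 and s = 2 this yields the four non-overlapping epitomes illustrated in Fig. 3.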
3.1 OVERCOMING OVER-PRUNING
Following Kingma & Welling (2014), we use a recognition network q(z, y|x) for approximate posterior inference, with the functional form
q(z, y|x) = q(y|x) q(z|y, x)    (8)
          = q(y|x) N(z; m_y ⊙ µ, exp(m_y ⊙ φ))    (9)
where µ = h1(x) and φ = h2(x) are neural networks that map x to D dimensional space.
We use a similar masking operation to deactivate units, as decided by the epitome y. Unlike the generative model (eq. 7), the masking operation defined by y operates directly on outputs of the recognition network that characterizes the parameters of q(z|y,x). As in VAE, we can derive the lower bound on the log probability of a dataset, and hence the cost function (negative bound) is
Cevae = − ∑_{t=1}^{T} E_{q(z,y|x^(t))}[log p(x^(t)|y, z)]

+ ∑_{t=1}^{T} KL[qφ(y|x^(t)) ‖ pθ(y)] + ∑_{t=1}^{T} ∑_y qφ(y|x^(t)) KL[qφ(z|y, x^(t)) ‖ pθ(z)]    (10)
The epitomic VAE departs from the VAE in how the contribution from the KL term is constrained. Let us consider the third term in eq. 10, and substituting in eq. 9:
∑_{t=1}^{T} ∑_y qφ(y|x^(t)) KL[qφ(z|y, x^(t)) ‖ pθ(z)]    (11)

= ∑_{t=1}^{T} ∑_y qφ(y|x^(t)) KL[N(z; m_y ⊙ µ^(t), exp(m_y ⊙ φ^(t))) ‖ N(z; 0, I)]    (12)

= ∑_{t=1}^{T} ∑_y qφ(y|x^(t)) ∑_{d=1}^{D} 1[m_{d,y} = 1] KL[N(z_d; µ_d^(t), exp(φ_d^(t))) ‖ N(0, 1)]    (13)
where 1[·] is an indicator function that evaluates to 1 if and only if its operand is true.
For a training example x(t) and for a fixed y (and hence the corresponding epitome), the number of KL terms that will contribute to the bound is exactly K. The dimensions of z that are not part of the corresponding epitome will have zero KL because their posterior parameters are masked to have unit Gaussian, the same as the prior. By design, this ensures that only the K dimensions that explain x(t) contribute to Cevae.
This is quite in contrast to how VAE optimizes Cvae (§. 2.1). For Cvae to have a small contribution from the KL term of a particular zd, it has to infer that unit to have zero mean and unit variance for many examples in the training set. In practice, this results in VAE completely deactivating units, and leading to many dead units. EpitomicVAE chooses the epitome based on x(t) and ensures that the dimensions that are not useful in explaining x(t) are ignored in Cevae. This means that the unit is still active, but by design, only a fraction of examples in the training set contributes a possible non-zero value to zd’s KL term in Cevae. This added flexibility gives the model the freedom to use more total units without deactivating them, while optimizing the bound. With these characteristics, during training, the data points will naturally group themselves to different epitomes, leading to a more balanced use of z.
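As a sketch of how eq. (13) restricts the KL cost to the chosen epitome, assuming a diagonal-Gaussian posterior (the helper is hypothetical, not the authors' implementation):

```python
# Hypothetical sketch of the masked KL term in eq. (13): only the K dimensions
# belonging to the chosen epitome contribute; masked dimensions have zero KL
# because their posterior is reset to the unit-Gaussian prior.
import numpy as np

def epitome_kl(mu, log_var, mask):
    per_dim = 0.5 * (np.exp(log_var) + mu ** 2 - 1.0 - log_var)   # KL per dimension
    return np.sum(mask * per_dim)                                 # sum over active dims
```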
In Fig. 4 we compare the activity levels of VAE, dropout VAE and our model. We see that compared with VAE, our model is able to better use the model capacity. In the same figure, we also compare with adding dropout to the latent variable z of the VAE (Dropout VAE). While this increases the number of active units, it generalizes poorly as it uses the dropout layers to merely replicate representation, in contrast to eVAE. See Fig. 5 along with the explanation in § 4.1 where we compare generation results for all three models.
3.2 TRAINING
The generative model and the recognition network are trained simultaneously, by minimizing Cevae in eq. 10.
For the stochastic continuous variable z, we use the reparameterization trick as in VAE. The trick involves reparametrizing the recognition distribution in terms of auxiliary variables with fixed distributions. This allows efficient sampling from the posterior distribution as they are deterministic functions of the inputs and auxiliary variables.
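A minimal sketch of this reparameterized sampling for the masked posterior of eq. (9); the names and shapes are assumptions:

```python
# Hypothetical sketch of the reparameterization trick for q(z|y, x): sample an
# auxiliary epsilon ~ N(0, I) and form z as a deterministic function of the
# masked posterior parameters (mu, phi) and epsilon. Masked dimensions fall back
# to mean 0 and unit variance, matching the prior.
import numpy as np

def sample_z(mu, log_var, mask, rng):
    eps = rng.normal(size=mu.shape)
    std = np.exp(0.5 * (mask * log_var))        # masked dims get unit std
    return mask * mu + std * eps                # z = m_y*mu + sigma * eps
```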
For the discrete variable y, we cannot use the reparameterization trick. We therefore approximate q(y|x) by a point estimate y∗ so that q(y|x) = δ(y = y∗), where δ evaluates to 1 only if y = y∗ and the best y∗ = argmin Cevae. We also explored modeling q(y|x) = Cat(h(x)) as a discrete distribution with h being a neural network. In this case, the backward pass requires either using REINFORCE or passing through gradients for the categorical sampler. In our experiments, we found that these approaches did not work well, especially when the number of possible values of y becomes large. We leave this as future work to explore.
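A sketch of the resulting hard assignment, where cost_fn is a hypothetical callable that returns the per-example eVAE cost under a candidate epitome:

```python
# Hypothetical sketch of the point estimate q(y|x) = delta(y = y*): evaluate the
# per-example eVAE cost under every candidate epitome and keep the minimizer.
import numpy as np

def best_epitome(cost_fn, x, num_epitomes):
    """cost_fn(x, y) returns the scalar eVAE cost of example x under epitome y."""
    costs = np.array([cost_fn(x, y) for y in range(num_epitomes)])
    return int(np.argmin(costs)), costs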
The recognition network first computes µ and φ. It is then combined with the optimal y∗ for each example, to arrive at the final posterior. The model is trained using a simple algorithm outlined in Algo. 1. Backpropagation with minibatch updates is used, with each minibatch constructed to be balanced with respect to epitome assignment.
Algorithm 1 Learning Epitomic VAE
1: θ, φ ← Initialize parameters
2: for until convergence of parameters (θ, φ) do
3:     Assign each x to its best y∗ = argmin Cevae
4:     Randomize and then partition data into minibatches, with each minibatch having a proportionate number of examples ∀ y
5:     for k ∈ numbatches do
6:         Update model parameters using the kth minibatch consisting of (x, y) pairs
7:     end for
8: end for
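Step 4 of Algorithm 1 could be sketched as follows; the helper is hypothetical, not the authors' code: shuffle each epitome's examples and give every minibatch a proportionate share of each group.

```python
# Hypothetical sketch of epitome-balanced minibatch construction: each epitome's
# (shuffled) examples are split evenly across the batches, so every minibatch
# contains a proportionate number of examples for each assignment y*.
import numpy as np

def balanced_minibatches(assignments, num_batches, rng):
    """assignments: (T,) array of y* per example; yields arrays of example indices."""
    batches = [[] for _ in range(num_batches)]
    for y in np.unique(assignments):
        idx = rng.permutation(np.flatnonzero(assignments == y))
        for b, chunk in enumerate(np.array_split(idx, num_batches)):
            batches[b].extend(chunk.tolist())
    for b in batches:
        yield np.array(rng.permutation(b))
```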
4 EXPERIMENTS
We present experimental results on two datasets, MNIST (LeCun et al., 1998) and Toronto Faces Database (TFD) (Susskind et al., 2010). We show generation results that illustrate eVAE’s ability to better utilize model capacity for modeling data variability, and then evaluate the effect of epitome choice and model complexity. Finally we present quantitative comparison with other models and qualitative samples from eVAE. We emphasize that in all experiments, we keep the weight of the KL term λ = 1 to evaluate performance under optimizing the true derived lower bound, without introducing an additional hyperparameter to tune.
We use standard splits for both MNIST and TFD. In our experiments, the encoder and decoder are fully-connected networks, and we show results for different depths and numbers of units per layer. ReLU nonlinearities are used, and models are trained using the Adam update rule (Kingma & Ba, 2014) for 200 epochs (MNIST) and 250 epochs (TFD), with base learning rate 0.001.
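A schematic sketch of this fully-connected setup follows; the layer shapes, the absence of bias terms, and the helper names are simplifications, not the paper's exact configuration:

```python
# Schematic sketch (not the paper's exact architecture) of the fully-connected
# encoder/decoder with ReLU hidden layers: the encoder maps x to the posterior
# parameters (mu, phi), the decoder maps z back to the data space.
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def encode(x, W_h, W_mu, W_phi):
    h = relu(x @ W_h)
    return h @ W_mu, h @ W_phi       # mu(x) and phi(x) (log-variance)

def decode(z, W_h, W_out):
    return relu(z @ W_h) @ W_out     # parameters of p(x|z)
```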
4.1 OVERCOMING OVER-PRUNING.
We first qualitatively illustrate the ability of eVAE to overcome over-pruning and utilize latent capacity to model greater variability in data. Fig. 5 compares generation results for VAE, Dropout VAE, and eVAE for different dimensionsD of latent variable z. WithD = 2, VAE generates realistic digits but suffers from lack of diversity. When D is increased to 5, the generation exhibits some greater variability but also begins to degrade in quality. As D is further increased to 10 and 20, the degradation continues. As explained in Sec. 2.1, this is due to VAE’s propensity to use only a portion of its latent units for modeling the training data and the rest to minimize the KL term. The under-utilization of model capacity means that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model the space of possible generations. The effect of this is good reconstruction (examples are shown in Fig. 9) but poor generation samples.
Adding dropout to the latent variable z of the VAE (row 2 of Fig. 5) encourages increased usage of model capacity, as shown in Fig. 4 and the discussion in Sec. 2. However, due to the stochastic nature of dropout, the model is forced to use the additional capacity to encode redundancy in the representation. It therefore does not achieve the desired effect of encoding additional data variability, and furthermore leads to blurred samples due to the redundant encoding. Epitomic VAE addresses the crux of the problem by learning multiple specialized subspaces. Since the effective dimension of any example is still small, eVAE is able to model each subspace well, while encoding variability through multiple possibly shared subspaces. This enables the model to overcome over-pruning from which VAE suffered. Fig. 5 shows that as the dimension D of z is increased
while maintaining epitomes of size K = 2, eVAE is able to model greater variability in the data. Highlighted digits in the 20-d eVAE show multiple styles such as crossed versus un-crossed 7, and pointed, round, thick, and thin 4s. Additional visualization of the variability in the learned 2-d manifolds are shown in Fig. 3. In contrast, the 2-d VAE generates similar-looking digits, and is unable to increase variability and maintain sample quality as the latent dimension is increased.
4.2 CHOICE OF EPITOME SIZE
We next investigate how the choice of epitome size, K, affects generation performance. We evaluate the generative models quantitatively through their samples by measuring the log-density with a Parzen window estimator Rifai et al. (2012). Fig. 6 shows the Parzen log-density for different choices of epitome size on MNIST, with encoder and decoder consisting of a single deterministic layer of 500 units. Epitomes are nonoverlapping, and the results are grouped by total dimension D of the latent variable z. For comparison, we also show the log-density for VAE models with the same dimension D, and for mixture VAE (mVAE), an ablative version of eVAE where parameters are not shared. mVAE can also be seen as a mixture of independent VAEs trained in the same manner as eVAE. The number of deterministic units in each mVAE component is computed so that the total number of parameters is comparable to eVAE.
As we increase D, the performance of VAE drops significantly, due to over-pruning. In fact, the number of active units for VAE are 8, 22 and 24 respectively, forD values of 8, 24 and 48. In contrast, eVAE performance increases as we increase D, with an epitome size K that is significantly smaller than D. Table 1 provides more comparisons. This confirms the advantage of using eVAE to avoid overpruning and effectively capture data distribution.
eVAE also performs comparably or better than mVAE at all epitome sizes. Intuitively, the advantage of parameter sharing in eVAE is that each epitome can also benefit from general features learned across the training set.
4.3 INCREASING COMPLEXITY OF ENCODER AND DECODER
Here, we would like to understand the role of encoder and decoder architectures on over pruning, and the generative performance. We control model complexity through number of layers L of deterministic hidden units, and number of hidden units H in each deterministic layer.
Table 1 shows the Parzen log-densities of VAE, mVAE and eVAE models trained on MNIST and TFD with different latent dimension D. For mVAE and eVAE models on MNIST, the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used. All epitomes are non-overlapping.
We observe that for VAE, increasing the number of hidden units H (e.g. from 500 to 1000) for a fixed network depth L has a negligible effect on the number of active units and performance. On the other hand, as the depth of the encoder and decoder L is increased, the number of active units in VAE decreases though performance is still able to improve. This illustrates that increase in the complexity of the interactions through use of multiple
layers counteracts the perils of over-pruning. However, this comes at the cost of a substantial increase in the number of model parameters to be learned.
In contrast, for any given model configuration, eVAE is able to avoid the over-pruning effect in the number of active units and outperform VAE. While both VAE and eVAE approach what appears to be a ceiling in generative performance with large models for MNIST, the difference between VAE and eVAE is significant for all TFD models.
Table 1 also shows results for mVAE, the ablative version of eVAE where parameters are not shared. The number of deterministic units per layer in each mVAE component is computed so that the total number of parameters is comparable to eVAE. While mVAE and eVAE perform comparably on MNIST especially with larger models (reaching a limit in performance that VAE also nears), eVAE demonstrates an advantage on smaller models and when the data is more complex (TFD). These settings are in line with the intuition that parameter sharing is helpful in more challenging settings when each epitome can also benefit from general features learned across the training set.
Table 1: Parzen log-density (number of active units in parentheses).

                   H = 500                       H = 1000
            L = 1    L = 2    L = 3       L = 1    L = 2    L = 3
MNIST
D = 8
  VAE       283(8)   292(8)   325(8)      283(8)   290(8)   322(6)
  mVAE      300(8)   328(8)   337(8)      309(8)   333(8)   335(8)
  eVAE      300(8)   330(8)   337(8)      312(8)   331(8)   334(8)
D = 24
  VAE       213(22)  273(11)  305(8)      219(24)  270(12)  311(7)
  mVAE      309(24)  330(24)  336(24)     313(24)  333(24)  338(24)
  eVAE      311(24)  331(24)  336(24)     317(24)  332(24)  336(24)
D = 48
  VAE       213(24)  267(13)  308(8)      224(24)  273(12)  309(8)
  mVAE      314(48)  334(48)  336(48)     315(48)  333(48)  337(48)
  eVAE      319(48)  334(48)  337(48)     321(48)  334(48)  332(48)
TFD
4.4 COMPARISON WITH OTHER MODELS
In Table 2 we compare the generative performance of eVAE with other models, using Parzen log-density. VAE−, mVAE−, and eVAE− refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. Encoders and decoders have L = 2 layers of H = 1000 deterministic units. D = 8 for MNIST, and D = 15 for TFD. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1. For MNIST, the VAE model is (L,H,D) = (3, 500, 8), mVAE is (3, 1000, 24), and eVAE is (3, 500, 48). For TFD, the VAE model is (3, 500, 15), mVAE is (3, 1000, 50), and eVAE is (3, 500, 25).
We observe that eVAE significantly improves over VAE and is competitive with several state-of-the-art models, notably Adversarial Autoencoders. Samples from eVAE on MNIST and TFD are shown in Fig. 7.
5 RELATED WORK
A number of applications use variational autoencoders as a building block. In Gregor et al. (2015), a generative model for images is proposed in which the generator of the VAE is an attention-based recurrent model that is conditioned on the canvas drawn so far. Eslami et al. (2016) proposes a VAE-based recurrent generative model that describes images as formed by sequentially choosing an object to draw and adding it to a canvas that is updated over time. In Kulkarni et al. (2015), VAEs are used for rendering 3D objects. Conditional variants of VAE are also used for attribute specific image generation (Yan et al., 2015) and future frame synthesis (Xue et al., 2016). All these applications suffer from the problem of model over-pruning and hence have adopted strategies that takes away the clean mathematical formulation of VAE. We have discussed these in § 2.1.
A complementary approach to the problem of model over-pruning in VAEs was proposed in Burda et al. (2015); the idea is to improve the variational bound by using multiple weighted posterior samples. Epitomic VAE provides improved latent capacity even when only a single sample is drawn from the posterior.
Methods to increase the flexibility of posterior inference are proposed in (Salimans et al., 2015; Rezende & Mohamed, 2016; Kingma et al., 2016). In Rezende & Mohamed (2016), posterior approximation is constructed by transforming a simple initial density into a complex one with a sequence of invertible transformations. In a similar vein, Kingma et al. (2016) augments the flexibility of the posterior through autoregression over projections of stochastic latent variables. However, the problem of over pruning still persists: for instance, Kingma et al. (2016) enforces a minimum information constraint to ensure that all units are used.
Related is the research in unsupervised sparse overcomplete representations, especially with group sparsity constraints c.f. (Gregor et al., 2011; Jenatton et al., 2011). In the epitomic VAE, we have similar motivations that enable learning better generative models of data.
6 CONCLUSION
This paper introduces Epitomic VAE, an extension of variational autoencoders, to address the problem of model over-pruning, which has limited the generation capability of VAEs in high-dimensional spaces. Based on the intuition that subconcepts can be modeled with fewer dimensions than the full latent space, epitomic VAE models the latent space as multiple shared subspaces that have learned specializations. We show how this model addresses the model over-pruning problem in a principled manner, and present qualitative and quantitative analysis of how eVAE enables increased utilization of the model capacity to model greater data variability. We believe that modeling the latent space as multiple structured subspaces is a promising direction of work, and allows for increased effective capacity that has potential to be combined with methods for increasing the flexibility of posterior inference.
7 ACKNOWLEDGMENTS
We thank the reviewers for constructive comments. Thanks to helpful discussions with Marc’Aurelio Ranzato, Joost van Amersfoort and Ross Girshick. We also borrowed the term ‘epitome’ from an earlier work of Jojic et al. (2003).
8 APPENDIX
8.1 EFFECT OF KL WEIGHT λ ON RECONSTRUCTION
We visualize VAE reconstructions as the KL term weight λ is tuned down to keep latent units active. The top half of each figure shows the original digits, and the bottom half shows the corresponding reconstructions. While reconstruction performance is good, generation is poor (Fig. 1). This illustrates that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model well the full posterior manifold.
8.2 EFFECT OF INCREASING LATENT DIMENSION ON RECONSTRUCTION
In § 4.1, Fig. 5 shows the effect of increasing latent dimension on generation for VAE, Dropout VAE, and eVAE models. Here we show the effect of the same factor on reconstruction quality for the models. The top half of each figure shows the original digits, and the bottom half shows the corresponding reconstructions. As the dimension of the latent variable z increases from 2-d to 20-d, reconstruction becomes very sharp (the best model), but generation degrades (Fig. 5). Dropout VAE has poorer reconstruction but still blurred generation, while eVAE is able to achieve both good reconstruction and generation.
8.3 EVALUATION METRIC FOR GENERATION
There have been multiple approaches for evaluation of variational autoencoders, in particular log-likelihood lower bound and log-density (using the Parzen window estimator, Rifai et al. (2012)). Here we show that for the generation task, log-density is a more appropriate measure than log-likelihood lower bound. Models are trained on binarized MNIST, to be consistent with literature reporting likelihood bounds. The encoder and decoder for all models consist of a single deterministic layer with 500 units.
Table 3 shows the log-likelihood bound and log-density for VAE and eVAE models as the dimensionD of latent variable z is increased. For VAE, as D increases, the likelihood bound improves, but the log-density decreases. Referring to the corresponding generation samples in Fig. 11, we see that sample quality in fact decreases, counter to the likelihood bound but consistent with log-density. The reported VAE bounds and sample quality also matches Figs. 2 and 5 in Kingma & Welling (2014). On the other hand, eVAE log-density first decreases and then improves with larger D. We see that this is also consistent with Fig. 11, where eVAE samples for D = 8 are the most interpretable overall, and D = 48 improves over D = 24 but still has some degenerate or washed out digits. (Note that these models are consistent with Kingma & Welling (2014) but are not the best-performing models reported in our experiments.) Since our work is motivated by the generation task, we therefore use log-density as the evaluation metric in our experiments.
Intuitively, the reason why VAE improves the likelihood bound but generation quality still decreases can be seen in the breakdown of the bound into the reconstruction and KL terms (Table 3 and Fig. 10). The improvement of the bound is due to large improvement in reconstruction, but the KL becomes significantly worse. This has a negative effect on generation, since the KL term is closely related to generation. On the other hand, eVAE reconstruction improves to a lesser extent, but the KL is also not as strongly affected, so generation ability remains stronger overall. As a result of this, simply tuning the KL weight λ in the training objective is insufficient to improve VAE generation, as shown in Fig. 1 in the main paper.
1. What is the main contribution of the paper, and how does it differ from previous research in VAEs?
2. How well-motivated is the choice of prior proposed in the paper, and what are the potential limitations of this choice?
3. Are there any issues with the experimental results reported in the paper, and how do they compare to competing algorithms?
4. How does the paper address concerns about overfitting and data reconstruction in VAEs?
5. Can the authors provide more detail on their use of a group sparse prior and how it affects the generative model and inference process?
6. How does the paper evaluate the performance of the proposed model, and are there any limitations or drawbacks to this approach?
7. What are some potential future directions for improving the proposed model and addressing remaining challenges in VAE research?
Review
This paper replaces the Gaussian prior often used in a VAE with a group sparse prior. They modify the approximate posterior function so that it also generates group sparse samples. The development of novel forms for the generative model and inference process in VAEs is an active and important area of research. I don't believe the specific choice of prior proposed in this paper is very well motivated however. I believe several of the conceptual claims are incorrect. The experimental results are unconvincing, and I suspect they compare log likelihoods in bits against competing algorithms in nats.
Some more detailed comments:
In Table 1, the log likelihoods reported for competing techniques are all in nats. The reported log likelihood of cVAE using 10K samples is not only higher than the likelihood of true data samples, but is also higher than the log likelihood that can be achieved by fitting a 10K k-means mixture model to the data (eg as done in "A note on the evaluation of generative models"). It should nearly impossible to outperform a 10K k-means mixture on Parzen estimation, which makes me extremely skeptical of these eVAE results. However, if you assume that the eVAE log likelihood is actually in bits, and multiply it by log 2 to convert to nats, then it corresponds to a totally believable log likelihood. Note that some Parzen window implementations report log likelihood in bits. Is this experiment comparing log likelihood in bits to competing log likelihoods in nats? (also, label units -- eg bits or nats -- in table)
It would be really, really, good to report and compare the variational lower bound on the log likelihood!! Alternatively, if you are concerned your bound is loose, you can use AIS to get a more exact measure of the log likelihood. Even if the Parzen window results are correct, Parzen estimates of log likelihood are extremely poor. They possess any drawback of log likelihood evaluation (which they approximate), and then have many additional drawbacks as well.
The MNIST sample quality does not appear to be visually competitive. Also -- it appears that the images are of the probability of activation for each pixel, rather than actual samples from the model. Samples would be more accurate, but either way make sure to describe what is shown in the figure.
There are no experiments on non-toy datasets.
I am still concerned about most of the issues I raised in my questions below. Briefly, some comments on the authors' response:
1. "minibatches are constructed to not only have a random subset of training examples but also be balanced w.r.t. to epitome assignment (Alg. 1, ln. 4)."
Nice! This makes me feel better about why all the epitomes will be used.
2. I don't think your response addresses why C_vae would trade off between data reconstruction and being factorial. The approximate posterior is factorial by construction -- there's nothing in C_vae that can make it more or less factorial.
3. "For C_vae to have zero contribution from the KL term of a particular z_d (in other words, that unit is deactivated), it has to have all the examples in the training set be deactivated (KL term of zero) for that unit"
This isn't true. A standard VAE can set the variance to 1 and the mean to 0 (KL term of 0) for some examples in the training set, and have non-zero KL for other training examples.
4. The VAE loss is trained on a lower bound on the log likelihood, though it does have a term that looks like reconstruction error. Naively, I would imagine that if it overfits, this would correspond to data samples becoming more likely under the generative model.
5/6. See Parzen concerns above. It's strange to train a binary model, and then treat its probability of activation as a sample in a continuous space.
6. "we can only evaluate the model from its samples"
I don't believe this is true. You are training on a lower bound on the log likelihood, which immediately provides another method of quantitative evaluation. Additionally, you could use techniques such as AIS to compute the exact log likelihood.
7. I don't believe Parzen window evaluation is a better measure of model quality, even in terms of sample generation, than log likelihood. |
ICLR | Title
Epitomic Variational Autoencoders
Abstract
In this paper, we propose epitomic variational autoencoder (eVAE), a probabilistic generative model of high dimensional data. eVAE is composed of a number of sparse variational autoencoders called ‘epitome’ such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. We show that the proposed model greatly overcomes the common problem in variational autoencoders (VAE) of model over-pruning. We substantiate that eVAE is efficient in using its model capacity and generalizes better than VAE, by presenting qualitative and quantitative results on MNIST and TFD datasets.
1 INTRODUCTION
Unsupervised learning holds the promise of learning the inherent structure in data so as to enable many future tasks including generation, prediction and visualization. Generative modeling is an approach to unsupervised learning wherein an explicit stochastic generative model of data is defined, such that independent draws from this model are likely to produce the original data distribution, while the learned latent structure itself is useful in prediction, classification and visualization tasks.
The recently proposed variational autoencoder (VAE) (Kingma & Welling, 2014) is an example of one such generative model. VAE pairs a top down generative model with a bottom up recognition network for amortized probabilistic inference. Both networks are jointly trained to maximize a variational lower bound on the data likelihood. A number of recent works use VAE as a modeling framework, including iterative conditional generation of images (Gregor et al., 2015) and conditional future frame prediction (Xue et al., 2016).
A commonly known problem with the VAE lower bound is that it is known to self-prune or under utilize the model’s capacity (Mackay, 2001). This can lead to poor generalization. A common approach to alleviate this problem is to resort to optimization schedules and regularization techniques (Bowman et al., 2015; Kaae Sonderby et al., 2016) that trade-off two competing terms, latent cost and data reconstruction, in the bound. Fig. 1 provides a quick insight into this problem of over-pruning and how commonly used regularization techniques may not be sufficient. Detailed discussion is provided in § 2.1. In this paper, we take a model-based approach to directly address this problem. We present an extension of variational autoencoders called epitomic variational autoencoder (Epitomic VAE, or eVAE, for short) that automatically learns to utilize its model capacity more effectively, leading to better generalization. Consider the task of learning a D-dimensional representation for the examples in a given dataset. The motivation for our model stems from the hypothesis that a single example in the dataset can be sufficiently embedded in a smaller K-dimensional (K D) subspace of D. However, different data points may need different subspaces, hence the need for D. Sparse coding methods also exploit a similar hypothesis. Epitomic VAE exploits sparsity using an additional categorical latent variable in the encoder-decoder architecture of the VAE. Each value of the variable activates only a contiguous subset of latent stochastic variables to generate an observation. This ∗Work done during an internship at Facebook AI Research.
enables learning multiple shared subspaces such that each subspace specializes, and also increases the use of model capacity (Fig. 4), enabling better representation. The choice of the name Epitomic VAE comes from the fact that multiple miniature models with shared parameters are trained simultaneously.
The rest of the paper is organized as follows. We first describe variational autoencoders and mathematically show the model pruning effect in § 2. We then present our epitomic VAE model in § 3 that overcomes these shortcomings. Experiments showing qualitative and quantitative results are presented in § 4. We finally provide more general context of our work in the related work in § 5, and conclude with discussions.
2 VARIATIONAL AUTOENCODERS
The generative model (decoder) of a VAE consists of first generating a D-dimensional stochastic variable z drawn from a standard multivariate Gaussian
p(z) = N(z; 0, I)    (1)

and then generating the N-dimensional observation x from a parametric family of distributions such as a Gaussian

pθ(x|z) = N(x; f1(z), exp(f2(z)))    (2)

where f1 and f2 define non-linear deterministic transformations of z modeled using a neural network. The parameters θ of the model are the weights and biases of the neural network that encodes the functions f1 and f2.
Given a dataset X of T i.i.d samples, the model is learned such that it maximizes the likelihood of the parameters to have generated the data, p(X|θ). This maximization requires marginalizing the unobserved z. However, computing p(z|x) is intractable due to dependencies induced between the zi when conditioned on x.
Variational autoencoders, as the name suggests, use variational inference to approximate the exact posterior with a surrogate parameterized distribution. However, instead of having separate parameters for the posterior distribution of each observation, VAE amortizes the cost by learning a neural network with parameters φ that outputs the posterior distribution of the form qφ(z|x) = ∏ d q(zi|x). This results in the lower bound given by
log pθ(X) = ∑_{t=1}^{T} log ∫_z pθ(x^(t), z)    (3)

≥ ∑_{t=1}^{T} E_{qφ(z|x^(t))}[log p(x^(t)|z)] − KL(qφ(z|x^(t)) ‖ p(z))    (4)
VAE is trained with standard backpropagation using minibatch gradient descent to minimize the negative of the lower bound

Cvae = − ∑_{t=1}^{T} E_{qφ(z|x^(t))}[log p(x^(t)|z)] + ∑_{t=1}^{T} ∑_{i=1}^{D} KL(qφ(z_i|x^(t)) ‖ p(z_i))    (5)
2.1 AUTOMATIC MODEL OVER-PRUNING IN VAE
Cvae introduces a trade-off between data reconstruction (first term) and satisfying the independence assumption of p(z) (second term, KL).
Of particular interest is the KL term. Since the KL term is the sum of independent contributions from each dimension d of D, it provides undue freedom for the model in how it minimizes this term. In particular, the model only needs to ensure that the overall KL term is minimized on average, and not per component. The easiest way for the model to do this is to have a large number of components that satisfy the KL term effectively, by turning off the units so that the posterior for those units becomes the same as the prior.¹ This effect is quite pronounced in the early iterations of
¹ Since the log variance is modeled using the neural network, turning it off will lead to a variance of 1.
training: the model for log p(x|z) is quite impoverished and hence the easiest way to improve the bound is by turning off the KL terms. However, once the units have become inactive, it is almost impossible for them to resurrect, and hence the full capacity of the model is not utilized.
A quantity that is useful in understanding this effect is the activity level of a unit. Following Burda et al. (2015), we define a unit to be used, or “active”, if A_u = Cov_x(E_{u∼q(u|x)}[u]) > 0.02.
A commonly used approach to overcome this problem is to use a trade-off between the two terms using parameter λ so that the cost is
C = −E_{qφ(z|x)}[log p(x|z)] + λ ∑_{i=1}^{D} KL(qφ(z_i|x) ‖ p(z_i))    (6)
Fig. 1 shows the effect of λ on unit activity and generation, with λ = 1 being the correct objective to optimize. While tuning down λ increases the number of active units, samples generated from the model are still poor. Fig. 2 shows generation using all units, active units only, and dead units only, for λ = 1. The model spends its capacity in ensuring that reconstruction of the training set is optimized (reconstruction visualizations are shown in § 8.1), at the cost of generalization. This has led to more sophisticated schemes such as using an annealed optimization schedule for λ (Bowman et al., 2015; Kaae Sonderby et al., 2016) or enforcing minimum KL contribution from subsets of the latent units (Kingma et al., 2016).
In this paper, we present a model based approach called “epitomic variational autoencoder” to address the problem of over pruning.
3 MODEL
We propose epitomic variational autoencoder (eVAE) to overcome the shortcomings of VAE by enabling more efficient use of model capacity to gain better generalization. We base this on the observation that while we may need a D-dimensional representation to accurately represent every example in a dataset, each individual example can be represented with a smaller K-dimensional subspace. As an example, consider MNIST with its variability in terms of digits, strokes and thick-
ness of ink, to name a few. While the overall D is large, it is likely that only a few K dimensions of D are needed to capture the variability in strokes of some digits (see Fig. 3).
Epitomic VAE can be viewed as a variational autoencoder with latent stochastic dimension D that is composed of a number of sparse variational autoencoders called epitomes, such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. In this paper, we assume simple structured sparsity for each epitome: in particular, only K contiguous dimensions of D are active.²
The generative process can be described as follows: A D-dimensional stochastic variable z is drawn from a standard multivariate Gaussian p(z) = N (z; 0; I). In tandem, an epitome is implicitly chosen through an epitome selector variable y, which has a uniform prior over possible epitomes. The N -dimensional observation x is then drawn from a Gaussian distribution:
pθ(x|y, z) = N(x; f1(m_y ⊙ z), exp(f2(m_y ⊙ z)))    (7)

m_y enforces the epitome constraint: it is also a D-dimensional vector that is zero everywhere except in the active dimensions of the epitome. ⊙ is element-wise multiplication between the two operands. Thus, m_y masks the dimensions of z other than those dictated by the choice of y. Fig. 3 illustrates this for an 8-d z with epitome size K = 2, so that there are four possible epitomes (the model also allows for overlapping epitomes, but this is not shown for illustration purposes). Epitome structure is defined using size K and stride s, where s = 1 corresponds to full overlap in D dimensions.³ Our model generalizes the VAE and collapses to a VAE when D = K = s.
f1(·) and f2(·) define non-linear deterministic transformations of m_y ⊙ z, modeled using neural networks. Note that the model does not snip off the K dimensions corresponding to an epitome, but instead deactivates the D−K dimensions that are not part of the chosen epitome. While the same deterministic functions f1 and f2 are used for any choice of epitome, the functions can still specialize due to the
² The model also allows for incorporating other forms of structured sparsity.
³ The strided epitome structure allows for learning O(D) specialized subspaces, that when sampled during generation can each produce good samples. In contrast, if only a simple sparsity prior is introduced over arbitrary subsets (e.g. with Bernoulli latent units to specify if a unit is active for a particular example), it can lead to poor generation results, which we confirmed empirically but do not report. The reason for this is as follows: due to an exponential number of potential combinations of latent units, sampling a subset from the prior during generation cannot be straightforwardly guaranteed to be a good configuration for a subconcept in the data, and often leads to uninterpretable samples.
sparsity of their inputs. Neighboring epitomes will have more overlap than non-overlapping ones, which manifests itself in the representation space; an intrinsic ordering in the variability is learned.
3.1 OVERCOMING OVER-PRUNING
Following Kingma & Welling (2014), we use a recognition network q(z, y|x) for approximate posterior inference, with the functional form
q(z, y|x) = q(y|x) q(z|y, x)    (8)
          = q(y|x) N(z; m_y ⊙ µ, exp(m_y ⊙ φ))    (9)
where µ = h1(x) and φ = h2(x) are neural networks that map x to D dimensional space.
We use a similar masking operation to deactivate units, as decided by the epitome y. Unlike the generative model (eq. 7), the masking operation defined by y operates directly on outputs of the recognition network that characterizes the parameters of q(z|y,x). As in VAE, we can derive the lower bound on the log probability of a dataset, and hence the cost function (negative bound) is
Cevae = − ∑_{t=1}^{T} E_{q(z,y|x^(t))}[log p(x^(t)|y, z)]

+ ∑_{t=1}^{T} KL[qφ(y|x^(t)) ‖ pθ(y)] + ∑_{t=1}^{T} ∑_y qφ(y|x^(t)) KL[qφ(z|y, x^(t)) ‖ pθ(z)]    (10)
The epitomic VAE departs from the VAE in how the contribution from the KL term is constrained. Let us consider the third term in eq. 10, and substituting in eq. 9:
∑_{t=1}^{T} ∑_y qφ(y|x^(t)) KL[qφ(z|y, x^(t)) ‖ pθ(z)]    (11)

= ∑_{t=1}^{T} ∑_y qφ(y|x^(t)) KL[N(z; m_y ⊙ µ^(t), exp(m_y ⊙ φ^(t))) ‖ N(z; 0, I)]    (12)

= ∑_{t=1}^{T} ∑_y qφ(y|x^(t)) ∑_{d=1}^{D} 1[m_{d,y} = 1] KL[N(z_d; µ_d^(t), exp(φ_d^(t))) ‖ N(0, 1)]    (13)
where 1[·] is an indicator function that evaluates to 1 if and only if its operand is true.
For a training example x(t) and for a fixed y (and hence the corresponding epitome), the number of KL terms that will contribute to the bound is exactly K. The dimensions of z that are not part of the corresponding epitome will have zero KL because their posterior parameters are masked to have unit Gaussian, the same as the prior. By design, this ensures that only the K dimensions that explain x(t) contribute to Cevae.
This is quite in contrast to how VAE optimizes Cvae (§. 2.1). For Cvae to have a small contribution from the KL term of a particular zd, it has to infer that unit to have zero mean and unit variance for many examples in the training set. In practice, this results in VAE completely deactivating units, and leading to many dead units. EpitomicVAE chooses the epitome based on x(t) and ensures that the dimensions that are not useful in explaining x(t) are ignored in Cevae. This means that the unit is still active, but by design, only a fraction of examples in the training set contributes a possible non-zero value to zd’s KL term in Cevae. This added flexibility gives the model the freedom to use more total units without deactivating them, while optimizing the bound. With these characteristics, during training, the data points will naturally group themselves to different epitomes, leading to a more balanced use of z.
In Fig. 4 we compare the activity levels of VAE, dropout VAE and our model. We see that compared with VAE, our model is able to better use the model capacity. In the same figure, we also compare with adding dropout to the latent variable z of the VAE (Dropout VAE). While this increases the number of active units, it generalizes poorly as it uses the dropout layers to merely replicate representation, in contrast to eVAE. See Fig. 5 along with the explanation in § 4.1 where we compare generation results for all three models.
3.2 TRAINING
The generative model and the recognition network are trained simultaneously, by minimizing Cevae in eq. 10.
For the stochastic continuous variable z, we use the reparameterization trick as in VAE. The trick involves reparametrizing the recognition distribution in terms of auxiliary variables with fixed distributions. This allows efficient sampling from the posterior distribution as they are deterministic functions of the inputs and auxiliary variables.
For the discrete variable y, we cannot use the reparameterization trick. We therefore approximate q(y|x) by a point estimate y∗ so that q(y|x) = δ(y = y∗), where δ evaluates to 1 only if y = y∗ and the best y∗ = argmin Cevae. We also explored modeling q(y|x) = Cat(h(x)) as a discrete distribution with h being a neural network. In this case, the backward pass requires either using REINFORCE or passing through gradients for the categorical sampler. In our experiments, we found that these approaches did not work well, especially when the number of possible values of y becomes large. We leave this as future work to explore.
The recognition network first computes µ and φ. It is then combined with the optimal y∗ for each example, to arrive at the final posterior. The model is trained using a simple algorithm outlined in Algo. 1. Backpropagation with minibatch updates is used, with each minibatch constructed to be balanced with respect to epitome assignment.
Algorithm 1 Learning Epitomic VAE
1: θ, φ ← Initialize parameters
2: for until convergence of parameters (θ, φ) do
3:    Assign each x to its best y∗ = argmin Cevae
4:    Randomize and then partition data into minibatches, with each minibatch having a proportionate number of examples ∀ y
5:    for k ∈ numbatches do
6:       Update model parameters using kth minibatch consisting of (x, y) pairs
7:    end for
8: end for
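A rough sketch of the outer loop of Algorithm 1 is given below (hypothetical helper names; the per-example, per-epitome costs are assumed to have been computed from the reconstruction and masked-KL terms above). It shows the hard assignment y∗ = argmin Cevae and the construction of epitome-balanced minibatches:

import numpy as np

def assign_epitomes(costs):
    # costs[n, y] = C_evae evaluated for example n under epitome y;
    # each example is assigned to its best epitome y* = argmin_y.
    return np.argmin(costs, axis=1)

def balanced_minibatches(assignments, batch_size, rng):
    # Yield index minibatches whose epitome proportions roughly match the full data,
    # by shuffling within each epitome and interleaving the shuffled index streams.
    streams = [list(rng.permutation(np.where(assignments == y)[0]))
               for y in np.unique(assignments)]
    interleaved = []
    while any(streams):
        streams = [s for s in streams if s]
        for s in streams:
            interleaved.append(s.pop(0))
    for start in range(0, len(interleaved), batch_size):
        yield interleaved[start:start + batch_size]

# Toy usage: 10 examples, 3 epitomes, batches of 4 (random placeholder costs).
rng = np.random.default_rng(0)
assignments = assign_epitomes(rng.random((10, 3)))
for batch in balanced_minibatches(assignments, 4, rng):
    pass  # update model parameters on this minibatch of (x, y*) pairs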
4 EXPERIMENTS
We present experimental results on two datasets, MNIST (LeCun et al., 1998) and Toronto Faces Database (TFD) (Susskind et al., 2010). We show generation results that illustrate eVAE’s ability to better utilize model capacity for modeling data variability, and then evaluate the effect of epitome choice and model complexity. Finally we present quantitative comparison with other models and qualitative samples from eVAE. We emphasize that in all experiments, we keep the weight of the KL term λ = 1 to evaluate performance under optimizing the true derived lower bound, without introducing an additional hyperparameter to tune.
We use standard splits for both MNIST and TFD. In our experiments, the encoder and decoder are fully-connected networks, and we show results for different depths and numbers of units per layer. ReLU nonlinearities are used, and models are trained using the Adam update rule (Kingma & Ba, 2014) for 200 epochs (MNIST) and 250 epochs (TFD), with base learning rate 0.001.
4.1 OVERCOMING OVER-PRUNING.
We first qualitatively illustrate the ability of eVAE to overcome over-pruning and utilize latent capacity to model greater variability in data. Fig. 5 compares generation results for VAE, Dropout VAE, and eVAE for different dimensions D of latent variable z. With D = 2, VAE generates realistic digits but suffers from lack of diversity. When D is increased to 5, the generation exhibits some greater variability but also begins to degrade in quality. As D is further increased to 10 and 20, the degradation continues. As explained in Sec. 2.1, this is due to VAE’s propensity to use only a portion of its latent units for modeling the training data and the rest to minimize the KL term. The under-utilization of model capacity means that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model the space of possible generations. The effect of this is good reconstruction (examples are shown in Fig. 9) but poor generation samples.
Adding dropout to the latent variable z of the VAE (row 2 of Fig. 5) encourages increased usage of model capacity, as shown in Fig. 4 and the discussion in Sec. 2. However, due to the stochastic nature of dropout, the model is forced to use the additional capacity to encode redundancy in the representation. It therefore does not achieve the desired effect of encoding additional data variability, and furthermore leads to blurred samples due to the redundant encoding. Epitomic VAE addresses the crux of the problem by learning multiple specialized subspaces. Since the effective dimension of any example is still small, eVAE is able to model each subspace well, while encoding variability through multiple possibly shared subspaces. This enables the model to overcome over-pruning from which VAE suffered. Fig. 5 shows that as the dimension D of z is increased
while maintaining epitomes of size K = 2, eVAE is able to model greater variability in the data. Highlighted digits in the 20-d eVAE show multiple styles such as crossed versus un-crossed 7, and pointed, round, thick, and thin 4s. Additional visualization of the variability in the learned 2-d manifolds are shown in Fig. 3. In contrast, the 2-d VAE generates similar-looking digits, and is unable to increase variability and maintain sample quality as the latent dimension is increased.
4.2 CHOICE OF EPITOME SIZE
We next investigate how the choice of epitome size, K, affects generation performance. We evaluate the generative models quantitatively through their samples by measuring the log-density with a Parzen window estimator (Rifai et al., 2012). Fig. 6 shows the Parzen log-density for different choices of epitome size on MNIST, with encoder and decoder consisting of a single deterministic layer of 500 units. Epitomes are non-overlapping, and the results are grouped by total dimension D of the latent variable z. For comparison, we also show the log-density for VAE models with the same dimension D, and for mixture VAE (mVAE), an ablative version of eVAE where parameters are not shared. mVAE can also be seen as a mixture of independent VAEs trained in the same manner as eVAE. The number of deterministic units in each mVAE component is computed so that the total number of parameters is comparable to eVAE.
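For reference, the Parzen-window log-density used in this evaluation can be estimated roughly as follows (a generic sketch with an assumed isotropic bandwidth sigma; not the exact protocol of Rifai et al. (2012)):

import numpy as np
from scipy.special import logsumexp

def parzen_log_density(test_x, samples, sigma):
    # Mean log-density of test points under an isotropic Gaussian Parzen window
    # centred on generated samples. test_x: (M, D), samples: (N, D), sigma: bandwidth.
    d = test_x.shape[1]
    n = samples.shape[0]
    diff = test_x[:, None, :] - samples[None, :, :]            # (M, N, D)
    sq_dist = np.sum(diff ** 2, axis=-1)                       # (M, N)
    log_kernel = (-0.5 * sq_dist / sigma ** 2
                  - 0.5 * d * np.log(2.0 * np.pi * sigma ** 2))
    return np.mean(logsumexp(log_kernel, axis=1) - np.log(n))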
As we increase D, the performance of VAE drops significantly, due to over-pruning. In fact, the number of active units for VAE is 8, 22, and 24, respectively, for D values of 8, 24, and 48. In contrast, eVAE performance increases as we increase D, with an epitome size K that is significantly smaller than D. Table 1 provides more comparisons. This confirms the advantage of using eVAE to avoid over-pruning and effectively capture the data distribution.
eVAE also performs comparably or better than mVAE at all epitome sizes. Intuitively, the advantage of parameter sharing in eVAE is that each epitome can also benefit from general features learned across the training set.
4.3 INCREASING COMPLEXITY OF ENCODER AND DECODER
Here, we would like to understand the role of encoder and decoder architectures in over-pruning and generative performance. We control model complexity through the number of layers L of deterministic hidden units, and the number of hidden units H in each deterministic layer.
Table 1 shows the Parzen log-densities of VAE, mVAE and eVAE models trained on MNIST and TFD with different latent dimension D. For mVAE and eVAE models on MNIST, the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used. All epitomes are non-overlapping.
We observe that for VAE, increasing the number of hidden units H (e.g. from 500 to 1000) for a fixed network depth L has a negligible effect on the number of active units and performance. On the other hand, as the depth of the encoder and decoder L is increased, the number of active units in VAE decreases, though performance is still able to improve. This illustrates that increasing the complexity of the interactions through the use of multiple layers counteracts the perils of over-pruning. However, this comes at the cost of a substantial increase in the number of model parameters to be learned.
In contrast, for any given model configuration, eVAE is able to avoid the over-pruning effect in the number of active units and outperform VAE. While both VAE and eVAE approach what appears to be a ceiling in generative performance with large models for MNIST, the difference between VAE and eVAE is significant for all TFD models.
Table 1 also shows results for mVAE, the ablative version of eVAE where parameters are not shared. The number of deterministic units per layer in each mVAE component is computed so that the total number of parameters is comparable to eVAE. While mVAE and eVAE perform comparably on MNIST especially with larger models (reaching a limit in performance that VAE also nears), eVAE demonstrates an advantage on smaller models and when the data is more complex (TFD). These settings are in line with the intuition that parameter sharing is helpful in more challenging settings when each epitome can also benefit from general features learned across the training set.
                         H = 500                           H = 1000
                 L = 1     L = 2     L = 3        L = 1     L = 2     L = 3
MNIST
D = 8    VAE     283(8)    292(8)    325(8)       283(8)    290(8)    322(6)
         mVAE    300(8)    328(8)    337(8)       309(8)    333(8)    335(8)
         eVAE    300(8)    330(8)    337(8)       312(8)    331(8)    334(8)
D = 24   VAE     213(22)   273(11)   305(8)       219(24)   270(12)   311(7)
         mVAE    309(24)   330(24)   336(24)      313(24)   333(24)   338(24)
         eVAE    311(24)   331(24)   336(24)      317(24)   332(24)   336(24)
D = 48   VAE     213(24)   267(13)   308(8)       224(24)   273(12)   309(8)
         mVAE    314(48)   334(48)   336(48)      315(48)   333(48)   337(48)
         eVAE    319(48)   334(48)   337(48)      321(48)   334(48)   332(48)
TFD
4.4 COMPARISON WITH OTHER MODELS
In Table 2 we compare the generative performance of eVAE with other models, using Parzen log-density. VAE−, mVAE−, and eVAE− refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. Encoders and decoders have L = 2 layers of H = 1000 deterministic units. D = 8 for MNIST, and D = 15 for TFD. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1. For MNIST, the VAE model is (L,H,D) = (3, 500, 8), mVAE is (3, 1000, 24), and eVAE is (3, 500, 48). For TFD, the VAE model is (3, 500, 15), mVAE is (3, 1000, 50), and eVAE is (3, 500, 25).
We observe that eVAE significantly improves over VAE and is competitive with several state-of-the-art models, notably Adversarial Autoencoders. Samples from eVAE on MNIST and TFD are shown in Fig. 7.
5 RELATED WORK
A number of applications use variational autoencoders as a building block. In Gregor et al. (2015), a generative model for images is proposed in which the generator of the VAE is an attention-based recurrent model that is conditioned on the canvas drawn so far. Eslami et al. (2016) proposes a VAE-based recurrent generative model that describes images as formed by sequentially choosing an object to draw and adding it to a canvas that is updated over time. In Kulkarni et al. (2015), VAEs are used for rendering 3D objects. Conditional variants of VAE are also used for attribute-specific image generation (Yan et al., 2015) and future frame synthesis (Xue et al., 2016). All these applications suffer from the problem of model over-pruning and hence have adopted strategies that take away the clean mathematical formulation of VAE. We have discussed these in § 2.1.
A complementary approach to the problem of model pruning in VAE was proposed in Burda et al. (2015); the idea is to improve the variational bound by using multiple weighted posterior samples. Epitomic VAE provides improved latent capacity even when only a single sample is drawn from the posterior.
Methods to increase the flexibility of posterior inference are proposed in (Salimans et al., 2015; Rezende & Mohamed, 2016; Kingma et al., 2016). In Rezende & Mohamed (2016), posterior approximation is constructed by transforming a simple initial density into a complex one with a sequence of invertible transformations. In a similar vein, Kingma et al. (2016) augments the flexibility of the posterior through autoregression over projections of stochastic latent variables. However, the problem of over-pruning still persists: for instance, Kingma et al. (2016) enforces a minimum information constraint to ensure that all units are used.
Related is the research in unsupervised sparse overcomplete representations, especially with group sparsity constraints c.f. (Gregor et al., 2011; Jenatton et al., 2011). In the epitomic VAE, we have similar motivations that enable learning better generative models of data.
6 CONCLUSION
This paper introduces Epitomic VAE, an extension of variational autoencoders, to address the problem of model over-pruning, which has limited the generation capability of VAEs in high-dimensional spaces. Based on the intuition that subconcepts can be modeled with fewer dimensions than the full latent space, epitomic VAE models the latent space as multiple shared subspaces that have learned specializations. We show how this model addresses the model over-pruning problem in a principled manner, and present qualitative and quantitative analysis of how eVAE enables increased utilization of the model capacity to model greater data variability. We believe that modeling the latent space as multiple structured subspaces is a promising direction of work, and allows for increased effective capacity that has potential to be combined with methods for increasing the flexibility of posterior inference.
7 ACKNOWLEDGMENTS
We thank the reviewers for constructive comments. Thanks to helpful discussions with Marc’Aurelio Ranzato, Joost van Amersfoort and Ross Girshick. We also borrowed the term ‘epitome’ from an earlier work of Jojic et al. (2003).
8 APPENDIX
8.1 EFFECT OF KL WEIGHT λ ON RECONSTRUCTION
We visualize VAE reconstructions as the KL term weight λ is tuned down to keep latent units active. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. While reconstruction performance is good, generation is poor (Fig. 1). This illustrates that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model well the full posterior manifold.
8.2 EFFECT OF INCREASING LATENT DIMENSION ON RECONSTRUCTION
In § 4.1, Fig. 5 shows the effect of increasing latent dimension on generation for VAE, Dropout VAE, and eVAE models. Here we show the effect of the same factor on reconstruction quality for the models. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. As the dimension of the latent variable z increases from 2-d to 20-d, reconstruction becomes very sharp (the best model), but generation degrades (Fig. 5). Dropout VAE has poorer reconstruction but still blurred generation, while eVAE is able to achieve both good reconstruction and generation.
8.3 EVALUATION METRIC FOR GENERATION
There have been multiple approaches for evaluation of variational autoencoders, in particular log-likelihood lower bound and log-density (using the Parzen window estimator, Rifai et al. (2012)). Here we show that for the generation task, log-density is a more appropriate measure than log-likelihood lower bound. Models are trained on binarized MNIST, to be consistent with literature reporting likelihood bounds. The encoder and decoder for all models consist of a single deterministic layer with 500 units.
Table 3 shows the log-likelihood bound and log-density for VAE and eVAE models as the dimension D of latent variable z is increased. For VAE, as D increases, the likelihood bound improves, but the log-density decreases. Referring to the corresponding generation samples in Fig. 11, we see that sample quality in fact decreases, counter to the likelihood bound but consistent with log-density. The reported VAE bounds and sample quality also match Figs. 2 and 5 in Kingma & Welling (2014). On the other hand, eVAE log-density first decreases and then improves with larger D. We see that this is also consistent with Fig. 11, where eVAE samples for D = 8 are the most interpretable overall, and D = 48 improves over D = 24 but still has some degenerate or washed out digits. (Note that these models are consistent with Kingma & Welling (2014) but are not the best-performing models reported in our experiments.) Since our work is motivated by the generation task, we therefore use log-density as the evaluation metric in our experiments.
Intuitively, the reason why VAE improves the likelihood bound but generation quality still decreases can be seen in the breakdown of the bound into the reconstruction and KL terms (Table 3 and Fig. 10). The improvement of the bound is due to large improvement in reconstruction, but the KL becomes significantly worse. This has a negative effect on generation, since the KL term is closely related to generation. On the other hand, eVAE reconstruction improves to a lesser extent, but the KL is also not as strongly affected, so generation ability remains stronger overall. As a result of this, simply tuning the KL weight λ in the training objective is insufficient to improve VAE generation, as shown in Fig. 1 in the main paper. | 1. What is the main contribution of the paper in addressing the problem of VAEs over-regularizing themselves?
2. What is the proposed solution to this problem, and how does it differ from previous approaches such as annealing and "free bits"?
3. Why does the author suggest placing a prior over arbitrary subsets of latents instead of using the proposed topology in the latent representation?
4. What is the purpose of the first paragraph on page 7, and what point is the author trying to make regarding under-utilization of model capacity leading to overfitting?
5. Are the experiments sufficient to support the claims made by the paper, or do they fall short in some way? | Review | Review
This paper proposes an elegant solution to a very important problem in VAEs, namely that the model over-regularizes itself by killing off latent dimensions. People have used annealing of the KL term and “free bits” to hack around this issue but a better solution is needed.
The offered solution is to introduce sparsity for the latent representation: for every input only a few latent distributions will be activated but across the dataset many latents can still be learned.
What I didn’t understand is why the authors need the topology in this latent representation. Why not place a prior over arbitrary subsets of latents? That seems to increase the representational power a lot without compromising the solution to the problem you are trying to solve. Now the number of ways the latents can combine is no longer exponentially large, which seems a pity.
The first paragraph on p.7 is a mystery to me: “An effect of this …samples”. How can under-utilization of model capacity lead to overfitting?
The experiments are modest but sufficient.
This paper has an interesting idea that may resolve a fundamental issue of VAEs and thus deserves a place in this conference. |
ICLR | Title
Offline Policy Optimization with Variance Regularization
Abstract
Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to mismatch between dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVR) can be used to augment any existing offline policy optimization algorithms. We show that the regularizer leads to a lower bound to the offline policy optimization objective, which can help avoid over-estimation errors, and explains the benefits of our approach across a range of continuous control domains when compared to existing algorithms.
1 INTRODUCTION
Offline batch reinforcement learning (RL) algorithms are key to scaling up RL for real-world applications, such as robotics (Levine et al., 2016) and medical problems. This is because offline RL provides the appealing ability for agents to learn from fixed datasets, similar to supervised learning, avoiding continual interaction with the environment, which could be problematic for safety and feasibility reasons. However, significant mismatch between the fixed collected data and the policy that the agent is considering can lead to high variance of value function estimates, a problem encountered by most off-policy RL algorithms (Precup et al., 2000). A complementary problem is that the value function can become overly optimistic in areas of state space that are outside the visited batch, leading the agent into data regions where its behavior is poor (Fujimoto et al., 2019). Recently there has been some progress in offline RL (Kumar et al., 2019; Wu et al., 2019b; Fujimoto et al., 2019), trying to tackle both of these problems.
In this work, we study the problem of offline policy optimization with variance minimization. To avoid overly optimistic value function estimates, we propose to learn value functions under variance constraints, leading to a pessimistic estimation, which can significantly help offline RL algorithms, especially under large distribution mismatch. We propose a framework for variance minimization in offline RL, such that the obtained estimates can be used to regularize the value function and enable more stable learning under different off-policy distributions.
We develop a novel approach for variance regularized offline actor-critic algorithms, which we call Offline Variance Regularizer (OVR). The key idea of OVR is to constrain the policy improvement step via variance regularized value function estimates. Our algorithmic framework avoids the double sampling issue that arises when computing gradients of variance estimates, by instead considering the variance of stationary distribution corrections with per-step rewards, and using the Fenchel transformation (Boyd & Vandenberghe, 2004) to formulate a minimax optimization objective. This allows minimizing variance constraints by instead optimizing dual variables, resulting in simply an augmented reward objective for variance regularized value functions.
We show that even with variance constraints, we can ensure policy improvement guarantees, where the regularized value function leads to a lower bound on the true value function, which mitigates the usual overestimation problems in batch RL. The use of Fenchel duality in computing the variance allows us to avoid double sampling, which has been a major bottleneck in scaling up variance-constrained actor-critic algorithms in prior work (A. & Ghavamzadeh, 2016; A. & Fu, 2018). Practically, our algorithm is easy to implement, since it simply involves augmenting the rewards with the dual variables, such that the regularized value function can be implemented on top of any existing offline policy optimization algorithm. We evaluate our algorithm on existing offline benchmark tasks based on continuous control domains. Our empirical results demonstrate that the proposed variance regularization approach is particularly useful when the batch dataset is gathered at random, or when it is very different from the data distributions encountered during training.
2 PRELIMINARIES AND BACKGROUND
We consider an infinite horizon MDP (S, A, P, γ) where S is the set of states, A is the set of actions, P is the transition dynamics and γ is the discount factor. The goal of reinforcement learning is to maximize the expected return J(π) = E_{s∼d_β}[V^π(s)], where V^π(s) is the value function V^π(s) = E[ \sum_{t=0}^{∞} γ^t r(s_t, a_t) | s_0 = s ], and β is the initial state distribution. Considering parameterized policies π_θ(a|s), the goal is to maximize the returns by following the policy gradient (Sutton et al., 1999), based on the performance metric defined as:
J(π_θ) = E_{s_0∼ρ, a_0∼π(s_0)}[ Q^{π_θ}(s_0, a_0) ] = E_{(s,a)∼d_{π_θ}(s,a)}[ r(s, a) ]   (1)
where Q^π(s, a) is the state-action value function, since V^π(s) = \sum_a π(a|s) Q^π(s, a). The policy optimization objective can be equivalently written in terms of the normalized discounted occupancy measure under the current policy π_θ, where d_π(s, a) is the state-action occupancy measure, such that the normalized state-action visitation distribution under policy π is defined as: d_π(s, a) = (1 − γ) \sum_{t=0}^{∞} γ^t P(s_t = s, a_t = a | s_0 ∼ β, a ∼ π(s_0)). The equality in equation 1 holds and can be equivalently written based on the linear programming (LP) formulation in RL (see (Puterman, 1994; Nachum & Dai, 2020) for more details). In this work, we consider the off-policy learning problem under a fixed dataset D which contains (s, a, r, s′) tuples collected under a known behaviour policy µ(a|s). Under the off-policy setting, importance sampling (Precup et al., 2000) is often used to reweight the trajectory under the behaviour data-collecting policy, so as to obtain unbiased estimates of the expected returns. At each time step, the importance sampling correction π(a_t|s_t)/µ(a_t|s_t) is used to compute the expected return under the entire trajectory as
J(π) = (1 − γ) E_{(s,a)∼d_µ(s,a)}[ \sum_{t=0}^{T} γ^t r(s_t, a_t) ( \prod_{t=1}^{T} π(a_t|s_t)/µ(a_t|s_t) ) ]. Recent works (Fujimoto et al., 2019) have demonstrated that, instead of importance sampling corrections, maximizing value functions directly for deterministic or reparameterized policy gradients (Lillicrap et al., 2016; Fujimoto et al., 2018) allows learning under fixed datasets, addressing the over-estimation problem by maximizing objectives of the form max_θ E_{s∼D}[ Q^{π_θ}(s, π_θ(s)) ].
3 VARIANCE REGULARIZATION VIA DUALITY IN OFFLINE POLICY OPTIMIZATION
In this section, we first present our approach based on variance of stationary distribution corrections, compared to importance re-weighting of episodic returns, in section 3.1. We then present a derivation of our approach based on Fenchel duality on the variance, to avoid the double sampling issue, leading to a variance regularized offline optimization objective in section 3.2. Finally, we present our algorithm in Algorithm 1, where the proposed regularizer can be used in any existing offline RL algorithm.
3.1 VARIANCE OF REWARDS WITH STATIONARY DISTRIBUTION CORRECTIONS
In this work, we consider the variance of rewards under occupancy measures in offline policy optimization. Let us denote the returns as D^π = \sum_{t=0}^{T} γ^t r(s_t, a_t), such that the value function is V^π = E_π[D^π]. The 1-step importance sampling ratio is ρ_t = π(a_t|s_t)/µ(a_t|s_t), and the T-step ratio can be denoted ρ_{1:T} = \prod_{t=1}^{T} ρ_t. Considering per-decision importance sampling (PDIS) (Precup et al., 2000), the returns can be similarly written as D^π = \sum_{t=0}^{T} γ^t r_t ρ_{0:t}. The variance of episodic returns, which we denote by V_P(π), with off-policy importance sampling corrections can be written as: V_P(π) = E_{s∼β, a∼µ(·|s), s′∼P(·|s,a)}[ ( D^π(s, a) − J(π) )^2 ].
Instead of importance sampling, several recent works have proposed marginalized importance sampling with stationary state-action distribution corrections (Liu et al., 2018; Nachum et al., 2019a; Zhang et al., 2020; Uehara & Jiang, 2019), which can lead to lower-variance estimators at the cost of introducing bias. Denoting the stationary distribution ratios as ω(s, a) = d_π(s, a)/d_µ(s, a), the returns can be written as W^π(s, a) = ω(s, a) r(s, a). The variance of marginalized IS is:

V_D(π) = E_{(s,a)∼d_µ(s,a)}[ ( W^π(s, a) − J(π) )^2 ] = E_{(s,a)∼d_µ(s,a)}[ W^π(s, a)^2 ] − E_{(s,a)∼d_µ(s,a)}[ W^π(s, a) ]^2   (2)
Our key contribution is to first consider the variance of marginalized IS, V_D(π), itself as a risk constraint in the offline batch optimization setting. We show that constraining the offline policy optimization objective with the variance of marginalized IS, and using the Fenchel-Legendre transformation on V_D(π), can help avoid the well-known double sampling issue in variance risk constrained RL (for more details on how to compute the gradient of the variance term, see appendix B). We emphasize that the variance here is solely based on returns with occupancy measures, and we do not consider the variance due to the inherent stochasticity of the MDP dynamics.
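As an illustration, V_D(π) in eq. 2 can be estimated directly from a batch once the stationary distribution ratios are available (a minimal sketch; the function name is ours and `omega` is assumed to come from a separate ratio estimator):

import numpy as np

def marginalized_is_variance(omega, rewards):
    # Sample-based estimate of eq. (2): E[(omega * r)^2] - (E[omega * r])^2,
    # with (s, a) pairs drawn from the dataset distribution; `omega` holds estimated
    # stationary distribution ratios d_pi(s, a) / d_D(s, a) for each transition.
    w = omega * rewards
    return np.mean(w ** 2) - np.mean(w) ** 2

# Sanity check: coincides with the (population) variance of the corrected rewards.
rng = np.random.default_rng(0)
omega, rewards = rng.random(1000) + 0.5, rng.standard_normal(1000)
assert np.isclose(marginalized_is_variance(omega, rewards), np.var(omega * rewards))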
3.2 VARIANCE REGULARIZED OFFLINE MAX-RETURN OBJECTIVE
We consider the variance regularized off-policy max-return objective with stationary distribution corrections ω_{π/D} (which we denote ω for brevity) in the offline fixed dataset D setting:
max_{π_θ} J(π_θ) := E_{s∼D}[ Q^{π_θ}(s, π_θ(s)) ] − λ V_D(ω, π_θ)   (3)
where λ ≥ 0 allows for the trade-off between offline policy optimization and variance regularization (or equivalently variance risk minimization). The max-return objective under Qπθ (s, a) has been considered in prior works in offline policy optimization (Fujimoto et al., 2019; Kumar et al., 2019). We show that this form of regularizer encourages variance minimization in offline policy optimization, especially when there is a large data distribution mismatch between the fixed dataset D and induced data distribution under policy πθ.
3.3 VARIANCE REGULARIZATION VIA FENCHEL DUALITY
At first, equation 3 seems to be difficult to optimize, especially for minimizing the variance regularization w.r.t. θ. This is because finding the gradient of V(ω, π_θ) would lead to the double sampling issue, since it contains the square of an expectation term. The key contribution of OVR is to use the Fenchel duality trick on the second term of the variance expression in equation 2, for regularizing the policy optimization objective with the variance of marginalized importance sampling. Applying Fenchel duality, x^2 = max_y (2xy − y^2), to the second term of the variance expression, we can transform the variance minimization problem into an equivalent maximization problem, by introducing the dual variables ν(s, a). We have the Fenchel conjugate of the variance term as:
V(ω, π_θ) = max_ν { −(1/2) ν(s, a)^2 + ν(s, a) ω(s, a) r(s, a) + E_{(s,a)∼d_D}[ ω(s, a) r(s, a)^2 ] }
          = max_ν E_{(s,a)∼d_D}[ −(1/2) ν(s, a)^2 + ν(s, a) ω(s, a) r(s, a) + ω(s, a) r(s, a)^2 ]   (4)

Regularizing the policy optimization objective with variance under the Fenchel transformation, we therefore have the overall max-min optimization objective, explicitly written as:
max_θ min_ν J(π_θ, ν) := E_{s∼D}[ Q^{π_θ}(s, π_θ(s)) ] − λ E_{(s,a)∼d_D}[ ( −(1/2) ν^2 + ν · ω · r + ω · r^2 )(s, a) ]   (5)
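The identity behind this transformation can be checked numerically (a toy scalar illustration of x^2 = max_y (2xy − y^2); in the objective above the dual variable is the per-state-action function ν(s, a) rather than a scalar):

import numpy as np

# The maximizer of 2*x*nu - nu^2 is nu* = x, and plugging it back in recovers x^2,
# so the square of an expectation can be replaced by an inner maximization over a
# dual variable, which is what lets eq. (5) avoid the double-sampling issue.
rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)              # stand-in for omega(s, a) * r(s, a) samples
x = np.mean(w)                               # E[W], whose square appears in the variance
nus = np.linspace(-2.0, 2.0, 400_001)
assert np.isclose(np.max(2.0 * x * nus - nus ** 2), x ** 2, atol=1e-6)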
3.4 AUGMENTED REWARD OBJECTIVE WITH VARIANCE REGULARIZATION
In this section, we explain the key steps that lead to the policy improvement step being an augmented variance-regularized reward objective. The variance minimization step involves estimating the stationary distribution ratio (Nachum et al., 2019a), and then simply computing the closed-form solution for the dual variables. Fixing the dual variables ν, to update π_θ, note that this leads to a standard maximum return objective in the dual form, which can be equivalently solved in the primal form, using augmented rewards. This is because we can write the above in the dual form as:

J(π_θ, ν, ω) := E_{(s,a)∼d_D(s,a)}[ ω(s, a) · r(s, a) − λ ( −(1/2) ν^2 + ν · ω · r + ω · r^2 )(s, a) ]
             = E_{(s,a)∼d_D(s,a)}[ ω(s, a) · ( r − λ · ν · r − λ · r^2 )(s, a) + (λ/2) ν(s, a)^2 ]
             = E_{(s,a)∼d_D(s,a)}[ ω(s, a) · r̃(s, a) + (λ/2) ν(s, a)^2 ]   (6)
where we denote the augmented rewards as: r̃(s, a) ≡ [ r − λ · ν · r − λ · r^2 ](s, a)   (7)
The policy improvement step can either be achieved by directly solving equation 6 or by considering the primal form of the objective with respect to Q^{π_θ}(s, π_θ) as in (Fujimoto et al., 2019; Kumar et al., 2019). However, solving equation 6 directly can be troublesome, since the policy gradient step involves finding the gradient w.r.t. ω(s, a) = d_{π_θ}(s, a)/d_D(s, a) too, where the distribution ratio depends on d_{π_θ}(s, a). This means that the gradient w.r.t. θ would require finding the gradient w.r.t. the normalized discounted occupancy measure, i.e., ∇_θ d_{π_θ}(s). Instead, it is therefore easier to consider the augmented reward objective, using r̃(s, a) as in equation 7 in any existing offline policy optimization algorithm, where we have the variance regularized value function Q̃^{π_θ}(s, a).
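A minimal sketch of this augmentation (helper names are ours, and the dual variable is taken as ν = ω · r, our reading of the closed-form maximizer of the inner problem in eq. 4):

import numpy as np

def augmented_rewards(rewards, omega, lam):
    # Variance-regularized rewards of eq. (7): r_tilde = r - lam * nu * r - lam * r^2,
    # with the dual variable set to nu(s, a) = omega(s, a) * r(s, a). `omega` holds the
    # estimated stationary ratios d_pi(s, a) / d_D(s, a) for the batch of transitions.
    nu = omega * rewards
    return rewards - lam * nu * rewards - lam * rewards ** 2

# With lam = 0 the augmentation is a no-op and the original rewards are recovered.
r = np.array([1.0, -0.5, 0.2])
assert np.allclose(augmented_rewards(r, np.ones_like(r), 0.0), r)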
Note that as highlighted in (Sobel, 1982), the variance of returns follows a Bellman-like equation. Following this, (Bisi et al., 2019) also pointed to a Bellman-like solution for variance w.r.t occupancy measures. Considering variance of the form in equation 2, and the Bellman-like equation for variance, we can write the variance recursively as a Bellman equation:
V^π_D(s, a) = ( r(s, a) − J(π) )^2 + γ E_{s′∼P, a′∼π′(·|s′)}[ V^π_D(s′, a′) ]   (8)
Since in our objective we augment the policy improvement step with the variance regularization term, we can write the augmented value function as Q^π_λ(s, a) := Q^π(s, a) − λ V^π_D(s, a). This suggests we can modify existing policy optimization algorithms with augmented rewards on the value function.
Remark: Applying the Fenchel transformation to the variance regularized objective, however, at first glance seems to make the augmented rewards dependent on the policy itself, since r̃(s, a) depends on the dual variables ν(s, a) as well. This can make the rewards non-stationary, so that the policy maximization step cannot be solved directly via the maximum return objective. However, as we discuss next, the dual variables for minimizing the variance term have a closed-form solution ν(s, a), and thereby do not lead to any non-stationarity in the rewards, due to the alternating minimization and maximization steps.
Variance Minimization Step: Fixing the policy π_θ, the dual variables ν can be obtained using the closed-form solution given by ν(s, a) = ω(s, a) · r̃(s, a). Note that directly optimizing for the target policies using batch data, however, requires a fixed point estimate of the stationary distribution corrections, which can be achieved using existing algorithms (Nachum et al., 2019a; Liu et al., 2018). Solving the optimization objective additionally requires estimating the state-action distribution ratio, ω(s, a) = d_π(s, a)/d_D(s, a). Recently, several works have proposed estimating the stationary distribution ratio, mostly for the off-policy evaluation case in the infinite horizon setting (Zhang et al., 2020; Uehara & Jiang, 2019). We include a detailed discussion of this in appendix E.4.
Algorithm : Our proposed variance regularization approach with returns under stationary distribution corrections for offline optimization can be built on top of any existing batch off-policy optimization algorithms. We summarize our contributions in Algorithm 1. Implementing our algorithm requires estimating the state-action distribution ratio, followed by the closed form estimate of the dual variable ν. The augmented stationary reward with the dual variables can then be used to compute the regularized value function Qπλ(s, a). The policy improvement step involves maximizing the variance regularized value function, e.g with BCQ (Fujimoto et al., 2019).
4 THEORETICAL ANALYSIS
In this section, we provide theoretical analysis of offline policy optimization algorithms in terms of policy improvement guarantees under fixed dataset D. Following then, we demonstrate that using the variance regularizer leads to a lower bound for our policy optimization objective, which leads to a pessimistic exploitation approach for offline algorithms.
Algorithm 1 Offline Variance Regularizer
Initialize critic Qφ, policy πθ, network ωψ and regularization weighting λ; learning rate η
for t = 1 to T do
   Estimate distribution ratio ωψ(s, a) using any existing DICE algorithm
   Estimate the dual variable ν(s, a) = ωψ(s, a) · r̃(s, a)
   Calculate augmented rewards r̃(s, a) using equation 7
   Policy improvement step using any offline policy optimization algorithm with augmented rewards r̃(s, a): θt = θt−1 + η∇θJ(θ, φ, ψ, ν)
end for
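A high-level sketch of this loop is given below; `estimate_ratio_dice` and `policy_improvement_step` are placeholders for an existing DICE ratio estimator and an existing offline learner (e.g. BCQ) run on the augmented rewards, not real library calls:

def ovr_training_loop(batch, lam, num_iters, estimate_ratio_dice, policy_improvement_step):
    # Skeleton of Algorithm 1. `batch` is a dict of arrays (states, actions, rewards,
    # next_states) from the fixed dataset D; the two callables are supplied by the user.
    policy = None                                            # whatever the offline learner maintains
    for _ in range(num_iters):
        omega = estimate_ratio_dice(batch, policy)           # estimate d_pi(s,a) / d_D(s,a)
        nu = omega * batch["rewards"]                        # closed-form dual variable
        r_tilde = (batch["rewards"]
                   - lam * nu * batch["rewards"]
                   - lam * batch["rewards"] ** 2)            # augmented rewards, eq. (7)
        policy = policy_improvement_step(batch, r_tilde, policy)
    return policy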
4.1 VARIANCE OF MARGINALIZED IMPORTANCE SAMPLING AND IMPORTANCE SAMPLING
We first show in Lemma 1 that the variance of episodic returns with importance sampling corrections can be upper bounded by the variance of rewards under stationary distribution corrections. We emphasize that in the off-policy setting under distribution corrections, the variance is due to the estimation of the density ratio, compared to the importance sampling corrections.
Lemma 1. The following inequality holds between the variance of per-step rewards under stationary distribution corrections, denoted by V_D(π), and the variance of episodic returns with importance sampling corrections, V_P(π):

V_P(π) ≤ V_D(π) / (1 − γ)^2   (9)
The proof for this and discussions on the variance of episodic returns compared to per-step rewards under occupancy measures is provided in the appendix B.1.
4.2 POLICY IMPROVEMENT BOUND UNDER VARIANCE REGULARIZATION
In this section, we establish performance improvement guarantees (Kakade & Langford, 2002) for the variance regularized value function for policy optimization. Let us first recall that the performance improvement can be written in terms of the total variation divergence D_TV between state distributions (Touati et al., 2020) (for more discussions on the performance bounds, see appendix C).
Lemma 2. For all policies π′ and π, we have the performance improvement bound based on the total variation of the state-action distributions d_{π′} and d_π:

J(π′) ≥ L_π(π′) − ε_π D_TV(d_{π′} || d_π)   (10)

where ε_π = max_s |E_{a∼π′(·|s)}[A^π(s, a)]|, and L_π(π′) = J(π) + E_{s∼d_π, a∼π′}[A^π(s, a)]. For detailed proof and discussions, see appendix C. Instead of considering the divergence between state visitation distributions, consider having access to both state-action samples generated from the environment. To avoid importance sampling corrections, we can further consider the bound on the objective based on state-action visitation distributions, where we have an upper bound following from (Nguyen et al., 2010): D_TV(d_{π′}(s) || d_π(s)) ≤ D_TV(d_{π′}(s, a) || d_π(s, a)). Following Pinsker’s inequality, we have:

J(π′) ≥ J(π) + E_{s∼d_π(s), a∼π′(·|s)}[ A^π(s, a) ] − ε_π E_{(s,a)∼d_π(s,a)}[ \sqrt{ D_KL(d_{π′}(s, a) || d_π(s, a)) } ]   (11)
Furthermore, we can exploit the relation between KL, total variation (TV) and variance through the variational representation of divergence measures. Recall that the total variation divergence between distributions p and q is given by: D_TV(p, q) = (1/2) \sum_x |p(x) − q(x)|. We can use the variational representation of the divergence measure. Denoting d_π(s, a) = β_π(s, a) (and similarly for π′), we have

D_TV(β_{π′} || β_π) = sup_{f : S×A→R} [ E_{(s,a)∼β_{π′}}[f(s, a)] − E_{(s,a)∼β_π}[φ^* ∘ f(s, a)] ]   (12)
where φ∗ is the convex conjugate of φ and f is the dual function class based on the variational representation of the divergence. Similar relations with the variational representations of f-divergences have also been considered in (Nachum et al., 2019b; Touati et al., 2020). We can finally obtain a bound for the policy improvement following this relation, in terms of the per-step variance: Theorem 1. For all policies π and π′, and the corresponding state-action visitation distributions dπ′ and dπ , we can obtain the performance improvement bound in terms of the variance of rewards under state-action occupancy measures.
J(π′) − J(π) ≥ E_{s∼d_π(s), a∼π′(a|s)}[ A^π(s, a) ] − Var_{(s,a)∼d_π(s,a)}[ f(s, a) ]   (13)
where f(s, a) is the dual function class from the variational representation of variance.
Proof. For detailed proof, see appendix C.1.
4.3 LOWER BOUND OBJECTIVE WITH VARIANCE REGULARIZATION
In this section, we show that augmenting the policy optimization objective with a variance regularizer leads to a lower bound to the original optimization objective J(π_θ). Following from (Metelli et al., 2018), we first note that the variance of marginalized importance weighting with distribution corrections can be written in terms of the α-Renyi divergence. Let p and q be two probability measures, such that the Renyi divergence is F_α = (1/(α − 1)) log \sum_x q(x) ( p(x)/q(x) )^α. When α = 1, this leads to the well-known KL divergence, F_1(p||q) = F_KL(p||q). Let us denote the state-action occupancy measures under π and dataset D as d_π and d_D. The variance of state-action distribution ratios is Var_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)]. When α = 2 for the Renyi divergence, we have:

Var_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)] = F_2(d_π || d_D) − 1   (14)

Following from (Metelli et al., 2018), and extending results from importance sampling ρ to marginalized importance sampling ω_{π/D}, we provide the following result that bounds the variance of the approximated density ratio ω̂_{π/D} in terms of the Renyi divergence:
Lemma 3. Assuming that the rewards of the MDP are bounded by a finite constant, ||r||∞ ≤ Rmax. Given random variable samples (s, a) ∼ dD(s, a) from dataset D, for any N > 0, the variance of marginalized importance weighting can be upper bounded as :
Var_{(s,a)∼d_D(s,a)}[ ω̂_{π/D}(s, a) ] ≤ (1/N) ||r||^2_∞ F_2(d_π || d_D)   (15)
See appendix D.1 for more details. Following this, our goal is to derive a lower bound objective to our off-policy optimization problem. Concentration inequalities have previously been studied for both off-policy evaluation (Thomas et al., 2015a) and optimization (Thomas et al., 2015b). In our case, we can adapt the concentration bound derived from Cantelli’s inequality and derive the following result based on the variance of marginalized importance sampling. Under state-action distribution corrections, we have the following lower bound to the off-policy policy optimization objective with stationary state-action distribution corrections.
Theorem 2. Given state-action occupancy measures dπ and dD, and assuming bounded reward functions, for any 0 < δ ≤ 1 and N > 0, we have with probability at least 1− δ that :
J(π) ≥ E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] − \sqrt{ ((1 − δ)/δ) Var_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] }   (16)
Equation 16 shows the lower bound policy optimization objective under risk-sensitive variance constraints. The key result in equation 16 of Theorem 2 shows that, given off-policy batch data collected with behaviour policy µ(a|s), we are indeed optimizing a lower bound to the policy optimization objective, which is regularized with a variance term to minimize the variance in batch off-policy learning.
5 EXPERIMENTAL RESULTS ON BENCHMARK OFFLINE CONTROL TASKS
Experimental Setup: We demonstrate the significance of the variance regularizer on a range of continuous control domains (Todorov et al., 2012) based on fixed offline datasets from (Fu et al., 2020), which is a standard benchmark for offline algorithms. To demonstrate the significance of our variance regularizer OVR, we mainly use it on top of the BCQ algorithm and compare it with other existing baselines, using the benchmark D4RL (Fu et al., 2020) offline datasets for different tasks and off-policy distributions. Experimental results are given in Table 1.
Performance on Optimal and Medium Quality Datasets : We first evaluate the performance of OVR when the dataset consists of optimal and mediocre logging policy data. We collected the dataset using a fully (expert) or partially (medium) trained SAC policy. We build our algorithm OVR on top of BCQ, denoted by BCQ + VAR. Note that the OVR algorithm can be agnostic to the behaviour policy too for computing the distribution ratio (Nachum et al., 2019a) and the variance. We observe that even
though performance is marginally improved with OVR under expert settings, since the demonstrations are themselves optimal, we can achieve significant improvements under the medium dataset regime. This is because OVR plays a more important role when there is larger variance due to distribution mismatch between the data logging and target policy distributions. Experimental results are shown in the first two columns of figure 1.
Performance on Random and Mixed Datasets : We then evaluate the performance on random datasets, i.e, the worst-case setup when the data logging policy is a random policy, as shown in the last two columns of figure 1. As expected, we observe no improvements at all, and even existing baselines such as BCQ (Fujimoto et al., 2019) can work poorly under random dataset setting. When we collect data using a mixture of random and mediocre policy, denoted by mixed, the performance is again improved for OVR on top of BCQ, especially for the Hopper and Walker control domains. We provide additional experimental results and ablation studies in appendix E.1.
6 RELATED WORKS
We now discuss related works in offline RL, for evaluation and optimization, and their relation to variance and risk sensitive algorithms. We include more discussions of related works in appendix A.1. In off-policy evaluation, per-step importance sampling (Precup et al., 2000; 2001) has previously been used for off-policy evaluation function estimators. However, this leads to high variance estimators, and recent works proposed using marginalized importance sampling, for estimating stationary state-action distribution ratios (Liu et al., 2018; Nachum et al., 2019a; Zhang et al., 2019), to reduce variance but with additional bias. In this work, we build on the variance of marginalized IS, to develop a variance risk sensitive offline policy optimization algorithm. This is in contrast to prior works on variance constrained online actor-critic (A. & Ghavamzadeh, 2016; Chow et al., 2017; Castro et al., 2012) and relates to constrained policy optimization methods (Achiam et al., 2017; Tessler et al., 2019).
For offline policy optimization, several works have recently addressed the overestimation problem in batch RL (Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019b), including the very recently proposed Conservative Q-Learning (CQL) algorithm (Kumar et al., 2020). Our work is done in parallel to CQL, due to which we do not include it as a baseline in our experiments. CQL learns a value function which is guaranteed to lower-bound the true value function. This helps prevent value over-estimation for out-of-distribution (OOD) actions, which is an important issue in offline RL. We
note that our approach is orthogonal to CQL in that CQL introduces a regularizer on the state action value function Qπ(s, a) based on the Bellman error (the first two terms in equation 2 of CQL), while we introduce a variance regularizer on the stationary state distribution dπ(s). Since the value of a policy can be expressed in two ways - either through Qπ(s, a) or occupancy measures dπ(s), both CQL and our paper are essentially motivated by the same objective of optimizing a lower bound on J(θ), but through different regularizers. Our work can also be considered similar to AlgaeDICE (Nachum et al., 2019b), since we introduce a variance regularizer based on the distribution corrections, instead of minimizing the f-divergence between stationary distributions in AlgaeDICE. Both our work and AlgaeDICE considers the dual form of the policy optimization objective in the batch setting, where similar to the Fenchel duality trick on our variance term, AlgaeDICE instead uses the variational form, followed by the change of variables tricks, inspired from (Nachum et al., 2019a) to handle their divergence measure.
7 DISCUSSION AND CONCLUSION
We proposed a new framework for offline policy optimization with variance regularization called OVR, to tackle high variance issues due to distribution mismatch in offline policy optimization. Our work provides a practically feasible variance constrained actor-critic algorithm that avoids the double sampling issues in prior variance risk sensitive algorithms (Castro et al., 2012; A. & Ghavamzadeh, 2016). The presented variance regularizer leads to a lower bound to the true offline optimization objective, thus leading to pessimistic value function estimates, avoiding both high variance and overestimation problems in offline RL. Experimentally, we evaluate the significance of OVR on standard benchmark offline datasets, with different data logging off-policy distributions, and show that OVR plays a more significant role when there is large variance due to distribution mismatch. While we only provide a variance related risk sensitive approach for offline RL, for future work it would be interesting to consider other risk sensitive approaches (Chow & Ghavamzadeh, 2014; Chow et al., 2017) and examine their significance in batch RL. We hope our proposed variance regularization framework will provide new opportunities for developing practically robust risk sensitive offline algorithms.
A APPENDIX : ADDITIONAL DISCUSSIONS
A.1 EXTENDED RELATED WORK
Other related works: Several other prior works have previously considered the batch RL setting (Lange et al., 2012) for off-policy evaluation, counterfactual risk minimization (Swaminathan & Joachims, 2015a;b), learning value based methods such as DQN (Agarwal et al., 2019), and others (Kumar et al., 2019; Wu et al., 2019b). Recently, batch off-policy optimization has also been introduced to reduce the extrapolation error (Fujimoto et al., 2019) and for regularizing with arbitrary behaviour policies (Wu et al., 2019b). However, due to the per-step importance sampling corrections on episodic returns (Precup et al., 2000), off-policy batch RL remains challenging. In this work, we instead consider marginalized importance sampling corrections and correct for the stationary state-action distributions (Nachum et al., 2019a; Uehara & Jiang, 2019; Zhang et al., 2020). Additionally, under the framework of Constrained MDPs (Altman & Asingleutility, 1999), risk-sensitive and constrained actor-critic algorithms have been proposed previously (Chow et al., 2017; Chow & Ghavamzadeh, 2014; Achiam et al., 2017). However, these works come with their own demerits, as they mostly require minimizing the risk (i.e., variance) term, where finding the gradient of the variance term often leads to a double sampling issue (Baird, 1995). We avoid this by instead using Fenchel duality (Boyd & Vandenberghe, 2004), inspired from recent works (Nachum & Dai, 2020; Dai et al., 2018), and cast risk constrained actor-critic as a max-min optimization problem. Our work is closely related to (Bisi et al., 2019), which also considers the per-step variance of returns w.r.t. state occupancy measures in the on-policy setting, while we instead consider the batch off-policy optimization setting with per-step rewards w.r.t. stationary distribution corrections.
Constrained optimization has previously been studied in reinforcement learning for batch policy learning (Le et al., 2019), and optimization (Achiam et al., 2017), mostly under the framework of constrained MDPs (Altman & Asingleutility, 1999). In such frameworks, the cumulative return objective is augmented with a set of constraints, for safe exploration (Garcı́a et al., 2015; Perkins & Barto, 2003; Ding et al., 2020), or to reduce risk measures (Chow et al., 2017; A. & Fu, 2018; Castro et al., 2012). Batch learning algorithms (Lange et al., 2012) have been considered previously for counterfactual risk minimization and generalization (Swaminathan & Joachims, 2015a;b) and policy evaluation (Thomas et al., 2015a; Li et al., 2015), although little has been done for constrained offline policy-based optimization. This raises the question of how we can learn policies in RL from fixed offline data, similar to supervised or unsupervised learning.
A.2 WHAT MAKES OFFLINE OFF-POLICY OPTIMIZATION DIFFICULT?
Offline RL optimization algorithms often suffer from distribution mismatch issues, since the underlying data distribution in the batch data may be quite different from the induced distribution under target policies. Recent works (Fujimoto et al., 2019; Kumar et al., 2019; Agarwal et al., 2019; Kumar et al., 2020) have tried to address this by avoiding overestimation of Q-values, which leads to the extrapolation error when bootstrapping value function estimates. This leads to offline RL agents generalizing poorly for unseen regions of the dataset. Additionally, due to the distribution mismatch, value function estimates can also have large variance, due to which existing online off-policy algorithms (Haarnoja et al., 2018; Lillicrap et al., 2016; Fujimoto et al., 2018) may fail without online interactions with the environment. In this work, we address the latter problem to minimize the variance of value function estimates through variance related risk constraints.
B APPENDIX : PER-STEP VERSUS EPISODIC VARIANCE OF RETURNS
Following from (Castro et al., 2012; A. & Ghavamzadeh, 2016), let us denote the returns with importance sampling corrections in the off-policy learning setting as :
D^π(s, a) = \sum_{t=0}^{T} γ^t r(s_t, a_t) ( \prod_{t=1}^{T} π(a_t | s_t)/µ(a_t | s_t) ) | s_0 = s, a_0 = a, τ ∼ µ   (17)
From this definition in equation 17, the action-value function, with off-policy trajectory-wise importance correction is Qπ(s, a) = E(s,a)∼dµ(s,a)[Dπ(s, a)], and similarly the value function can be defined as : V π(s) = Es∼dµ(s)[Dπ(s)]. For the trajectory-wise importance corrections, we can
define the variance of the returns, similar to (A. & Fu, 2018), as: V_P(π) = E_{(s,a)∼d_µ(s,a)}[D^π(s, a)^2] − E_{(s,a)∼d_µ(s,a)}[D^π(s, a)]^2   (18), where note that as in (Sobel, 1982), equation 18 also follows a Bellman-like equation, although due to the lack of monotonicity required for dynamic programming (DP), such measures cannot be directly optimized by standard DP algorithms (A. & Fu, 2018).
In contrast, if we consider the variance of returns with stationary distribution corrections (Nachum et al., 2019a; Liu et al., 2018), rather than the product of importance sampling ratios, the variance term involves weighting the rewards with the distribution ratio ω_{π/µ}. Typically, the distribution ratio is approximated using a separate function class (Uehara & Jiang, 2019), such that the returns can be written as: W^π(s, a) = ω_{π/D}(s, a) · r(s, a) | s = s, a ∼ π(·|s), (s, a) ∼ d_D(s, a)   (19), where we denote D as the data distribution in the fixed dataset, collected by either a known or unknown behaviour policy. The variance of returns under occupancy measures is therefore given by:

V_D(π) = E_{(s,a)∼d_D(s,a)}[ W^π(s, a)^2 ] − E_{(s,a)∼d_D(s,a)}[ W^π(s, a) ]^2   (20)
where note that the variance expression in equation 20 depends on the square of the per-step rewards with distribution correction ratios. We denote this as the dual form of the variance of returns, in contrast to the primal form of the variance of expected returns (Sobel, 1982).
Note that even though the variance term under episodic per-step importance sampling corrections in equation 18 is equivalent to the variance with stationary distribution corrections in equation 20, following from (Bisi et al., 2019), considering per-step corrections, we will show that the variance with distribution corrections indeed upper bounds the variance of importance sampling corrections. This is an important relationship, since constraining the policy improvement step under variance constraints with occupancy measures therefore allows us to obtain a lower bound to the offline optimization objective, similar to (Kumar et al., 2020).
B.1 PROOF OF LEMMA 1 : VARIANCE INEQUALITY
Following from (Bisi et al., 2019), we show that the variance of per-step rewards under occupancy measures, denoted by VD(π) upper bounds the variance of episodic returns VP(π).
V_P(π) ≤ V_D(π) / (1 − γ)^2   (21)
Proof. The proof of Lemma 1, following from (Bisi et al., 2019), is as follows. Denoting the returns, as above, but for the on-policy case with trajectories under π, as D^π(s, a) = \sum_{t=0}^{∞} γ^t r(s_t, a_t), and denoting the return objective as J(π) = E_{s_0∼ρ, a_t∼π(·|s_t), s′∼P}[ D^π(s, a) ], the variance of episodic returns can be written as:

V_P(π) = E_{(s,a)∼d_π(s,a)}[ ( D^π(s, a) − J(π)/(1 − γ) )^2 ]   (22)
       = E_{(s,a)∼d_π(s,a)}[ (D^π(s, a))^2 ] + J(π)^2/(1 − γ)^2 − (2 J(π)/(1 − γ)) E_{(s,a)∼d_π(s,a)}[ D^π(s, a) ]   (23)
       = E_{(s,a)∼d_π(s,a)}[ D^π(s, a)^2 ] − J(π)^2/(1 − γ)^2   (24)
Similarly, denoting returns under occupancy measures as Wπ(s, a) = dπ(s, a)r(s, a), and the returns under occupancy measures, equivalently written as J(π) = E(s,a)∼dπ(s,a)[r(s, a)] based on the primal and dual forms of the objective (Uehara & Jiang, 2019; Nachum & Dai, 2020), we can equivalently write the variance as :
V_D(π) = E_{(s,a)∼d_π(s,a)}[ ( r(s, a) − J(π) )^2 ]   (25)
       = E_{(s,a)∼d_π(s,a)}[ r(s, a)^2 ] + J(π)^2 − 2 J(π) E_{(s,a)∼d_π(s,a)}[ r(s, a) ]   (26)
       = E_{(s,a)∼d_π(s,a)}[ r(s, a)^2 ] − J(π)^2   (27)
Following from equations 22 and 25, we therefore have the following inequality:

(1 − γ)^2 E_{s_0∼ρ, a∼π}[ D^π(s, a)^2 ] ≤ (1 − γ)^2 E_{s_0∼ρ, a∼π}[ ( \sum_{t=0}^{∞} γ^t ) ( \sum_{t=0}^{∞} γ^t r(s_t, a_t)^2 ) ]   (28)
= (1 − γ) E_{s_0∼ρ, a∼π}[ \sum_{t=0}^{∞} γ^t r(s_t, a_t)^2 ]   (29)
= E_{(s,a)∼d_π(s,a)}[ r(s, a)^2 ]   (30)
where the first line follows from Cauchy-Schwarz inequality. This concludes the proof.
We can further extend Lemma 1 to off-policy returns under stationary distribution corrections (i.e., marginalized importance sampling) compared to importance sampling. Recall that we denote the variance under stationary distribution corrections as:

V_D(π) = E_{(s,a)∼d_D(s,a)}[ ( ω_{π/D}(s, a) · r(s, a) − J(π) )^2 ]   (31)
       = E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a)^2 · r(s, a)^2 ] − J(π)^2   (32)

where J(π) = E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ]. We denote the episodic returns with importance sampling corrections as: D^π = \sum_{t=0}^{T} γ^t r_t ρ_{0:t}. The variance, as denoted earlier, is given by:

V_P(π) = E_{(s,a)∼d_π(s,a)}[ D^π(s, a)^2 ] − J(π)^2/(1 − γ)^2   (33)

We therefore have the following inequality:

(1 − γ)^2 E_{s_0∼ρ, a∼π}[ D^π(s, a)^2 ] ≤ (1 − γ)^2 E_{s_0∼ρ, a∼π}[ ( \sum_{t=0}^{T} γ^t ) ( \sum_{t=0}^{T} γ^t r(s_t, a_t)^2 ) ( \prod_{t=0}^{T} π(a_t|s_t)/µ_D(a_t|s_t) )^2 ]
= (1 − γ) E_{s_0∼ρ, a∼π}[ \sum_{t=0}^{∞} γ^t r(s_t, a_t)^2 ( \prod_{t=0}^{T} π(a_t|s_t)/µ_D(a_t|s_t) )^2 ]   (34)
= E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a)^2 · r(s, a)^2 ]   (35)

which shows that Lemma 1 also holds for off-policy returns with stationary distribution corrections.
B.2 DOUBLE SAMPLING FOR COMPUTING GRADIENTS OF VARIANCE
The gradient of the variance term often leads to the double sampling issue, thereby making it impractical to use. This issue has also been pointed out by several other works (A. & Ghavamzadeh, 2016; Castro et al., 2012; Chow et al., 2017), since the variance involves the square of the objective function itself. Recall that we have:
VD(θ) = E(s,a)∼dD [ ωπ/D(s, a) · r(s, a)2 ] − { E(s,a)∼dD [ ωπ/D(s, a) · r(s, a) ]}2 (36)
The gradient of the variance term is therefore : ∇θVD(θ) = ∇θE(s,a)∼dD [ ωπ/D(s, a) · r(s, a)2 ] − 2 · { E(s,a)∼dD [ ωπ/D(s, a) · r(s, a) ]} · ∇θ { E(s,a)∼dD [ ωπ/D(s, a) · r(s, a) ]} (37)
where equation 37 requires multiple samples to compute the expectations in the second term. To see why this is true, let us denote J(θ) = E_{dD(s,a)}[ωπ/D(s, a) · r(s, a)] = E_{dD(s,a)}[IS(ω, πθ)], where we write IS(ω, πθ) = ωπ/D(s, a) · r(s, a) as the per-step return in short form. The variance of the returns with the stationary state-action distribution corrections can therefore be written as:
VD(θ) = E_{dD(s,a)}[IS(ω, πθ)²] − (E_{dD(s,a)}[IS(ω, πθ)])²   (38)
where we refer to the first term as (a) and the second term as (b).
We derive the gradient of each of the terms (a) and (b) in equation 38 below. First, we find the gradient of term (a) w.r.t. θ:
∇θ E_{dD(s,a)}[IS(ω, πθ)²] = ∇θ ∑_{s,a} dD(s, a) IS(ω, πθ)² = ∑_{s,a} dD(s, a) ∇θ IS(ω, πθ)²
= ∑_{s,a} dD(s, a) · 2 · IS(ω, πθ) · IS(ω, πθ) · ∇θ log πθ(a | s)
= 2 · ∑_{s,a} dD(s, a) IS(ω, πθ)² ∇θ log πθ(a | s)
= 2 · E_{dD(s,a)}[IS(ω, πθ)² · ∇θ log πθ(a | s)]   (39)
Equation 39 interestingly shows that the gradient of the variance of the returns w.r.t. πθ has a form similar to the policy gradient term, except that the critic estimate in this case is given by the importance corrected returns, since IS(ω, πθ) = [ωπ/D(s, a) · r(s, a)]. We next find the gradient of term (b) from equation 38 w.r.t. θ:
∇θEdD(s,a) [ IS(ω, πθ) ]2 = ∇θJ(θ)2 = 2 · J(θ) · EdD(s,a) [ ωπ/D · {∇θ log πθ(a | s) ·Qπ(s, a)} ] (40)
Overall, the expression for the gradient of the variance term is therefore : ∇θVD(θ) = 2 · EdD(s,a) [ IS(ω, πθ)2 · ∇θ log πθ(a | s) ] − 2 · J(θ) · EdD(s,a) [ ωπ/D · {∇θ log πθ(a | s) ·Qπ(s, a)} ] (41)
The variance gradient in equation 41 is difficult to estimate in practice, since it involves both the gradient of the objective and the objective J(θ) itself. This is known as the double sampling issue (Baird, 1995), which requires separate independent rollouts. Previously, (Castro et al., 2012) tackled the gradient of the variance term using simultaneous perturbation stochastic approximation (SPSA) (Spall, 1992), where running estimates of both the return and the variance term are kept, and a two-timescale algorithm is used for computing the gradient of the variance regularizer with per-step importance sampling corrections.
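To make the double sampling issue concrete, the following toy numpy sketch (an illustrative example with made-up quantities, not taken from the paper) shows that estimating ∇θ J(θ)² by plugging the same batch into both J(θ) and ∇θJ(θ) is biased, whereas using two independent batches is not.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 0.7, 200_000

# Toy objective J(theta) = E_x[f(theta, x)] with f(theta, x) = theta * x and x ~ N(1, 1),
# so J(theta) = theta and the true gradient of J(theta)^2 is 2 * theta.
x1 = rng.normal(1.0, 1.0, size=n)
x2 = rng.normal(1.0, 1.0, size=n)                  # an independent second batch

f = lambda x: theta * x                            # f(theta, x)
df = lambda x: x                                   # d f / d theta

# Naive single-batch estimator: 2 * f * df evaluated on the SAME samples (biased).
grad_single = np.mean(2.0 * f(x1) * df(x1))

# Double-sampling estimator: J and its gradient estimated from independent batches (unbiased).
grad_double = 2.0 * np.mean(f(x1)) * np.mean(df(x2))

print(f"true grad of J^2 : {2 * theta:.3f}")
print(f"single batch     : {grad_single:.3f}   (converges to 4*theta here, not 2*theta)")
print(f"two batches      : {grad_double:.3f}")
```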
B.3 ALTERNATIVE DERIVATION : VARIANCE REGULARIZATION VIA FENCHEL DUALITY
In the derivation of our algorithm, we applied the Fenchel duality trick to the second term of the variance expression 25. An alternative way to derive the proposed algorithm would be to see what happens if we apply the Fenchel duality trick to both terms of the variance expression. This might be useful since equation 41 requires evaluating both the gradient terms and the actual objective J(θ), due to the analytical expression of the form ∇θJ(θ) · J(θ), hence suffering from a double sampling issue. In general, the Fenchel duality is given by :
x² = max_y (2xy − y²)   (42)
and applying Fenchel duality to both terms, since they both involve squared terms, we get:
E_{dD(s,a)}[IS(ω, πθ)²] ≡ E_{dD(s,a)}[max_y {2 · IS(ω, πθ) · y(s, a) − y(s, a)²}] = 2 · max_y {E_{dD(s,a)}[IS(ω, πθ) · y(s, a)] − E_{dD(s,a)}[y(s, a)²]}   (43)
Similarly, applying Fenchel duality to the second term (b), we have:
E_{dD(s,a)}[IS(ω, πθ)]² = max_ν {2 · E_{dD(s,a)}[IS(ω, πθ) · ν(s, a)] − ν²}   (44)
Overall, after applying Fenchel duality, we therefore have the variance term as follows, leading to an overall objective of the form max_y max_ν VD(θ), which we can use as our variance regularizer:
VD(θ) = 2 · max_y {E_{dD(s,a)}[IS(ω, πθ) · y(s, a)] − E_{dD(s,a)}[y(s, a)²]} − max_ν {2 · E_{dD(s,a)}[IS(ω, πθ) · ν(s, a)] − ν²}   (45)
Using the variance of stationary distribution correction returns as a regularizer, we can find the gradient of the variance term w.r.t θ as follows, where the gradient terms dependent on the dual variables y and ν are 0.
∇θVD(θ) = 2 · ∇θEdD(s,a) [ IS(ω, πθ) · y(s, a) ] − 0− 2 · ∇θEdD(s,a) [ IS(ω, πθ) · ν(s, a) ] + 0
= 2·EdD(s,a) [ IS(ω, πθ)·y(s, a)·∇θ log πθ(a | s) ] −2·EdD(s,a) [ IS(ω, πθ)·ν(s, a)·∇θ log πθ(a | s) ]
= 2 · EdD(s,a) [ IS(ω, πθ) · ∇θ log πθ(a | s) · { y(s, a)− ν(s, a) }] (46)
Note that, from equation 46, the two terms in the gradient are almost equivalent, and the difference comes only from the difference between the two dual variables y(s, a) and ν(s, a). Note that our variance term also requires separately maximizing the dual variables, both of which have the following closed-form updates:
∇νVD(θ) = −2 · ∇νEdD(s,a) [ IS(ω, πθ) · ν(s, a) ] +∇νν2 = 0 (47)
Solving this exactly leads to the closed-form solution ν(s, a) = E_{dD(s,a)}[IS(ω, πθ)]. Similarly, we can also solve exactly for the dual variable y, such that:
∇y VD(θ) = 2 · ∇y E_{dD(s,a)}[IS(ω, πθ) · y(s, a)] − 2 · ∇y E_{dD(s,a)}[y(s, a)²] = 0   (48)
Solving this exactly also leads to the closed-form solution y(s, a) = (1/2) · IS(ω, πθ) = (1/2) · (dπ(s, a)/dµ(s, a)) · r(s, a). Note that the exact solutions for the two dual variables are similar to each other, where ν(s, a) is the expectation of the returns with stationary distribution corrections, whereas y(s, a) is only the return from a single rollout.
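Continuing the toy example above, the sketch below illustrates how a dual variable with a closed-form update removes the need for two independent batches: once ν is held fixed at an estimate of J(θ) (here computed from the previous iteration's batch, an assumption of this sketch), the gradient 2ν∇θJ(θ) of the dual form is a single expectation and can be estimated from one batch.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n = 0.7, 200_000

# Same toy objective: J(theta) = E_x[theta * x] with x ~ N(1, 1), so grad of J(theta)^2 is 2*theta.
x_prev = rng.normal(1.0, 1.0, size=n)              # batch from the previous iteration
nu = np.mean(theta * x_prev)                       # closed-form dual variable, nu* = E[IS]

x = rng.normal(1.0, 1.0, size=n)                   # current batch
grad_dual = 2.0 * nu * np.mean(x)                  # 2 * nu * grad_theta J(theta): one expectation only

print(f"true grad of J^2    : {2 * theta:.3f}")
print(f"dual-form estimator : {grad_dual:.3f}")
```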
C APPENDIX : MONOTONIC PERFORMANCE IMPROVEMENT GUARANTEES
UNDER VARIANCE REGULARIZATION
We provide theoretical analysis and performance improvement bounds for our proposed variance constrained policy optimization approach. Following from (Kakade & Langford, 2002; Schulman et al., 2015; Achiam et al., 2017), we extend existing performance improvement guarantees based on the stationary state-action distributions, instead of only considering the divergence between the current policy and the old policy. We show that existing conservative updates in algorithms (Schulman et al., 2015) can be considered for both the state visitation distributions and the action distributions, as similarly pointed out by (Achiam et al., 2017). We can then adapt this for the variance constraints instead of the divergence constraints. According to the performance difference lemma (Kakade & Langford, 2002), we have that, for all policies π and π′:
J(π′)− J(π) = Es∼dπ′ ,a∼π′ [A π(s, a)] (49)
which implies that maximizing equation 49 will lead to an improved policy π′ with policy improvement guarantees over the previous policy π. We can write the advantage function with variance augmented value functions as:
Aπλ(s, a) = Qπλ(s, a) − V πλ(s) = E_{s′∼P}[r(s, a) − λ(r(s, a) − J(π))² + γ V πλ(s′) − V πλ(s)]
However, equation 49 is often difficult to maximize directly, since it additionally requires samples from π′ and dπ′, and often a surrogate objective is instead proposed by (Kakade & Langford, 2002). Following (Schulman et al., 2015), we can therefore obtain a bound for the performance difference based on the variance regularized advantage function:
J(π′) ≥ J(π) + Es∼dπ(s),a∼π′(a|s) [ Aπλ(s, a) ] (50)
where we have the augmented rewards for the advantage function, and, by following Fenchel duality for the variance, we can avoid policy-dependent reward functions. Otherwise, we have the augmented rewards for value functions as r̃(s, a) = r(s, a) − λ(r(s, a) − J(π))². This however suggests that the performance difference does not hold without proper assumptions (Bisi et al., 2019). We can therefore obtain a monotonic improvement guarantee by considering the KL divergence between
policies:
Lπ(π′) = J(π) + Es∼dπ, a∼π′[Aπ(s, a)]   (51)
which ignores the changes in the state distribution dπ′ due to the improved policy π′. (Schulman et al., 2015) optimizes the surrogate objective Lπ(π′) while ensuring that the new policy π′ stays close to the current policy π, by imposing a KL constraint (Es∼dπ[DKL(π′(· | s)||π(· | s))] ≤ δ). The performance difference bound, based on the constraint between π and π′ as in TRPO (Schulman et al., 2015), is given by:
Lemma 4. The performance difference lemma in (Schulman et al., 2015), where α = D^max_TV = max_s DTV(π, π′):
J(π′) ≥ Lπ(π′) − (4εγ / (1− γ)²) (D^max_TV(π′||π))²   (52)
where ε = max_{s,a} |Aπ(s, a)|.
The performance improvement bound in (Schulman et al., 2015) can further be written in terms of the KL divergence by following the relationship between total variation (TV) and KL, which follows from Pinsker's inequality, DTV(p||q)² ≤ DKL(p||q), to get the following improvement bound:
J(π′) ≥ Lπ(π′) − (4εγ / (1− γ)²) DKL(π′||π)   (53)
We have a performance difference bound in terms of the state distribution shift between dπ′ and dπ. This justifies that Lπ(π′) is a sensible lower bound to J(π′) as long as the total variation distance between dπ′ and dπ is small, which ensures that the policies π′ and π stay close to each other. Finally, following from (Achiam et al., 2017), we obtain the following lower bound, which satisfies policy improvement guarantees:
J(π′) ≥ Lπ(π′) − (2γ επ / (1− γ)) Es∼dπ[DTV(π′(· | s)||π(· | s))]   (54)
Equations 53 and 54 assume that there is no state distribution shift between π′ and π. However, if we explicitly assume state distribution changes, dπ′ and dπ due to π′ and π respectively, then we have the following performance improvement bound:
Lemma 5. For all policies π′ and π, we have the performance improvement bound based on the total variation of the state-action distributions dπ′ and dπ:
J(π′) ≥ Lπ(π′) − επ DTV(dπ′||dπ)   (55)
where επ = max_s |Ea∼π′(·|s)[Aπ(s, a)]|, which can be further written in terms of the surrogate objective Lπ(π′) as:
J(π′) ≥ J(π) + Es∼dπ, a∼π′[Aπ(s, a)] − επ DTV(dπ′||dπ) = Lπ(π′) − επ DTV(dπ′||dπ)   (56)
C.1 PROOF OF THEOREM 1 : POLICY IMPROVEMENT BOUND WITH VARIANCE REGULARIZATION
Proof. We provide derivation for theorem 1. Recall that for all policies π′ and π, and corresponding state visitation distributions dπ′ and dπ , we can obtain the performance improvement bound in terms of the variance of state-action distribution corrections
J(π′)− J(π) ≥ Es∼dπ,a∼π′ [ Aπ(s, a) ] − Vars∼dπ,a∼π [ f(s, a) ] (57)
where f(s, a) is the dual function class for the divergence between dπ′(s, a) and dπ(s, a). Following from Pinsker's inequality, the performance difference lemma written in terms of the state visitation distributions can be given by:
J(π′) ≥ Lπ(π′) − επ DTV(dπ′||dπ) ≥ J(π) + Es∼dπ, a∼π′[Aπ(s, a)] − επ DTV(dπ′||dπ)
≥ J(π) + Es∼dπ, a∼π′[Aπ(s, a)] − επ √(DKL(dπ′||dπ))   (58)
Following from (Schulman et al., 2015), we can alternately write this as follows, where we further apply the variational form of TV:
J(π′) ≥ J(π) + Es∼dπ, a∼π′[Aπ(s, a)] − C · Es∼dπ[DTV(dπ′||dπ)²]
= J(π) + Es∼dπ, a∼π′[Aπ(s, a)] − C · Es∼dπ[(max_f {Es∼dπ′, a∼π[f(s, a)] − Es∼dπ, a∼π[f(s, a)]})²]
≥ J(π) + Es∼dπ, a∼π′[Aπ(s, a)] − C · max_f Es∼dπ[(Es∼dπ′, a∼π[f(s, a)] − Es∼dπ, a∼π[f(s, a)])²]
= J(π) + Es∼dπ, a∼π′[Aπ(s, a)] − C · max_f {(Es∼dπ, a∼π[f(s, a)] − Es∼dπ, a∼π[Es∼dπ, a∼π[f(s, a)]])²}
= J(π) + Es∼dπ, a∼π′[Aπ(s, a)] − C · max_f Vars∼dπ, a∼π[f(s, a)]   (59)
Therefore, the policy improvement bound depends on maximizing the variational representation f(s, a) of the f-divergence to guarantee improvements from J(π) to J(π′). This leads to the stated result in Theorem 1.
D APPENDIX : LOWER BOUND OBJECTIVE WITH VARIANCE REGULARIZATION
D.1 PROOF OF LEMMA 3
Recalling Lemma 3, the proof follows from (Metelli et al., 2018). We extend this to marginalized importance weighting, and include it here for completeness. Note that, compared to importance weighting, which leads to an unbiased estimator as in (Metelli et al., 2018), correcting for the state-action occupancy measures leads to a biased estimator, due to the approximation ω̂π/D. However, for our analysis, we only require to show a lower bound objective, and therefore do not provide any bias-variance analysis as in off-policy evaluation.
Var_{(s,a)∼dD(s,a)}[ω̂π/D] ≤ (1/N) ||r||²∞ F2(dπ||dD)   (60)
Proof. Assuming that state-action samples are drawn i.i.d. from the dataset D, we can write:
Var_{(s,a)∼dD(s,a)}[ω̂π/D(s, a)] ≤ (1/N) Var_{(s1,a1)∼dD(s,a)}[(dπ(s1, a1)/dD(s1, a1)) · r(s1, a1)]
≤ (1/N) E_{(s1,a1)∼dD(s,a)}[((dπ(s1, a1)/dD(s1, a1)) · r(s1, a1))²]
≤ (1/N) ||r||²∞ E_{(s1,a1)∼dD(s,a)}[(dπ(s1, a1)/dD(s1, a1))²] = (1/N) ||r||²∞ F2(dπ||dD)   (61)
D.2 PROOF OF THEOREM 2:
First, let us recall the stated Theorem 2. By constraining the off-policy optimization problem with variance constraints, we have the following lower bound to the optimization objective with stationary state-action distribution corrections:
J(π) ≥ E_{(s,a)∼dD(s,a)}[(dπ(s, a)/dD(s, a)) r(s, a)] − √((1− δ)/δ · Var_{(s,a)∼dD(s,a)}[(dπ(s, a)/dD(s, a)) r(s, a)])   (62)
Proof. The proof for the lower bound objective can be obtained as follows. We first define a relationship between the variance and the α-divergence with α = 2, as also similarly noted in (Metelli et al., 2018). Given we have batch samples D, and denoting the state-action distribution correction with ωπ/D(s, a), we can write from lemma 3 :
Var_{(s,a)∼dD(s,a)}[ω̂π/D] ≤ (1/N) ||r||²∞ F2(dπ||dD)   (63)
where the per-step estimator with state-action distribution corrections is given by ωπ/D(s, a) · r(s, a). Here, the reward function r(s, a) is a bounded function, and for any N > 0 the variance of the
per-step reward estimator with distribution corrections can be upper bounded by the Renyi-divergence (α = 2). Finally, following from (Metelli et al., 2018) and using Cantelli’s inequality, we have with probability at least 1− δ where 0 < δ < 1 :
Pr(ωπ/D − J(π) ≥ λ) ≤ 1 / (1 + λ²/Var_{(s,a)∼dD(s,a)}[ωπ/D(s, a) · r(s, a)])   (64)
and by using δ = 1 / (1 + λ²/Var_{(s,a)∼dD(s,a)}[ωπ/D(s, a) · r(s, a)]), we get that, with probability at least 1− δ, we have:
J(π) = E_{(s,a)∼dπ(s,a)}[r(s, a)] ≥ E_{(s,a)∼dD(s,a)}[ωπ/D(s, a) · r(s, a)] − √((1− δ)/δ · Var_{(s,a)∼dD(s,a)}[ωπ/D(s, a) · r(s, a)])   (65)
where we can further replace the variance term with α = 2 for the Renyi divergence to conclude the proof of the above theorem. We can further write the lower bound for the α-Renyi divergence, following the relation between variance and Renyi divergence for α = 2, as:
J(π) = E_{(s,a)∼dπ(s,a)}[r(s, a)] ≥ E_{(s,a)∼dD(s,a)}[(dπ(s, a)/dD(s, a)) · r(s, a)] − ||r||∞ √((1− δ) d2(dπ||dD) / (δN))
This hints at the similarity between our proposed variance regularized objective and that of other related works, including AlgaeDICE (Nachum et al., 2019b), which uses an f-divergence Df(dπ||dD) between stationary distributions.
E APPENDIX : ADDITIONAL EXPERIMENTAL RESULTS
E.1 EXPERIMENTAL ABLATION STUDIES
In this section, we present additional results using state-action experience replay weightings on existing offline algorithms, and analysing the significance of our variance regularizer on likelihood corrected offline algorithms. Denoting ω(s, a) for the importance weighting of state-action occupancy measures based on samples in the experience replay buffer, we can modify existing offline algorithms to account for state-action distribution ratios.
The ablation experimental results using the Hopper control benchmark are summarized in figure 2. The same base BCQ algorithm is used with a modified objective for BCQ (Fujimoto et al., 2019) where the results for applying off-policy importance weights are denoted as “BCQ+I.W.”. We employ the same technique to obtain ω(s, a) for both the baseline and for adding variance regularization as described. The results suggest that adding the proposed per-step variance regularization scheme significantly outperforms just importance weighting the expected rewards for off-policy policy learning.
E.2 EXPERIMENTAL RESULTS IN CORRUPTED NOISE SETTINGS
We additionally consider a setting where the batch data is collected from a noisy environment, i.e., in settings with corrupted rewards, r → r + ε, where ε ∼ N(0, 1). Experimental results are presented in figures 1 and 3. From our results, we note that using OVR on top of BCQ (Fujimoto et al., 2019), we can achieve significantly better performance with variance minimization, especially when the agent is given sub-optimal demonstrations. We denote this as the medium setting (when the dataset was collected by a half-trained SAC policy) or a mixed behaviour logging setting (when the data logging policy is a mixture of a random and a SAC policy). This is also useful for practical scalability, since data collection from an expert policy is often expensive. We add noise to the dataset to examine the significance of OVR under a noisy corrupted dataset setting.
E.3 EXPERIMENTAL RESULTS ON SAFETY BENCHMARK TASKS
Safety Benchmarks for Variance as Risk : We additionally consider safety benchmarks for control tasks, to analyse the significance of variance regularizer as a risk constraint in offline policy optimization algorithms. Our results are summarized in table 3.
E.4 DISCUSSIONS ON OFFLINE OFF-POLICY OPTIMIZATION WITH STATE-ACTION DISTRIBUTION RATIOS
In this section, we include several alternatives by which we can compute the stationary state-action distribution ratio, borrowing from recent works (Uehara & Jiang, 2019; Nachum et al., 2019a).
Off-Policy Optimization with Minimax Weight Learning (MWL) : We discuss other possible ways of optimizing the batch off-policy optimization objective while also estimating the state-action density ratio. Following from (Uehara & Jiang, 2019) we further modify the off-policy optimization part of the objective J(θ) in L(θ, λ) as a min-max objective, consisting of weight learning ωπ/D
Table 3: Results on the Safety-Gym environments (Ray et al.). We report the mean and S.D. of episodic returns and costs over five random seeds and 1 million timesteps. The goal of the agent is to maximize the episodic return, while minimizing the cost incurred.
            PointGoal1                 PointGoal2
            Reward       Cost          Reward       Cost
BCQ         43.1 ± 0.3   137.0 ± 3.6   32.7 ± 0.7   468.2 ± 9.1
BCQ+OVR     44.2 ± 0.3   127.1 ± 4.0   33.2 ± 0.7   453.9 ± 7.3

            PointButton1               PointButton2
            Reward       Cost          Reward       Cost
BCQ         30.9 ± 2.2   330.8 ± 8.3   18.1 ± 1.1   321.6 ± 4.1
BCQ+OVR     30.7 ± 2.3   321.5 ± 6.8   19.6 ± 1.0   305.7 ± 6.1
and optimizing the resulting objective J(θ, ω). We further propose an overall policy optimization objective, where a single objective can be used for estimating the distribution ratio, evaluating the critic and optimizing the resulting objective. We can write the off-policy optimization objective with its equivalent starting state formulation, such that we have :
EdD(s,a) [ ωπθ/D(s, a) · r(s, a) ] = (1− γ)Es0∼β0(s),a0∼π(·|s0) [ Qπ(s0, a0) ] (66)
Furthermore, following the Bellman equation, we expect to have E[r(s, a)] = E[Qπ(s, a) − γQπ(s′, a′)], such that:
E_{dD(s,a)}[ωπθ/D(s, a) · {Qπ(s, a) − γQπ(s′, a′)}] = (1− γ) E_{s0∼β0(s), a0∼π(·|s0)}[Qπ(s0, a0)]   (67)
We can therefore write the overall objective as:
J(ω, πθ, Q) = E_{dD(s,a)}[ωπθ/D(s, a) · {Qπ(s, a) − γQπ(s′, a′)}] − (1− γ) E_{s0∼β0(s), a0∼π(·|s0)}[Qπ(s0, a0)]   (68)
This is similar to the MWL objective in (Uehara & Jiang, 2019), except that we instead consider the bias-reduced estimator, such that accurate estimates of Q or ω will lead to reduced bias of the value function estimation. Furthermore, note that in the first part of the objective, J(πθ, ω, Q)², we can further use entropy regularization for smoothing the objective, since instead of Qπ(s′, a′) in the target, we can replace it with a log-sum-exp and consider the conjugate of the entropy regularization term, similar to SBEED (Dai et al., 2018). This would therefore give the first part of the objective as an overall min-max optimization problem:
J(ω, πθ) = E_{dµ(s,a)}[ωπθ/D(s, a) · {r(s, a) + γQπ(s′, a′) + τ log π(a | s) − Qπ(s, a)}] + (1− γ) E_{s0∼β0(s), a0∼π(·|s0)}[Qπ(s0, a0)]   (69)
such that, from our overall constrained optimization objective for maximizing θ, we have turned it into a min-max objective for estimating the density ratios, estimating the value function and maximizing the policies:
ω*π/D, Q*, π* = argmin_{ω,Q} argmax_π J(πθ, ω, Q)²   (70)
where the fixed point solution for the density ratio can be solved by minimizing the objective :
ω*π/D = argmin_ω L(ωπ/D, Q)², with L(ωπ/D, Q) = E_{dµ(s,a)}[{γω(s, a) · Qπ(s′, a′) − ω(s, a)Qπ(s, a)}] + (1− γ) E_{β(s,a)}[Qπ(s0, a0)]   (71)
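A minimal PyTorch-style sketch of the objective in equation 68 on a single mini-batch is given below. The network definitions, the policy stand-in, and the dummy batch are illustrative assumptions of this sketch and not prescribed by the paper; in practice the policy sample and the ratio/critic networks would come from the surrounding algorithm.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2                       # illustrative dimensions

class MLP(nn.Module):
    """Small state-action network used both for omega(s, a) and Q(s, a) in this sketch."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

omega_net = MLP(state_dim + action_dim)            # stationary distribution ratio w(s, a)
q_net = MLP(state_dim + action_dim)                # critic Q(s, a)

def sample_policy(s):                              # stand-in for a ~ pi(.|s)
    return torch.tanh(torch.randn(s.shape[0], action_dim))

def mwl_objective(s, a, s_next, s0, gamma=0.99):
    a_next, a0 = sample_policy(s_next), sample_policy(s0)
    omega = omega_net(s, a)
    bellman_diff = q_net(s, a) - gamma * q_net(s_next, a_next)
    # E_dD[ w(s,a) * (Q(s,a) - gamma Q(s',a')) ] - (1 - gamma) * E_{s0, a0 ~ pi}[ Q(s0, a0) ]
    return (omega * bellman_diff).mean() - (1.0 - gamma) * q_net(s0, a0).mean()

# Dummy batch from the offline dataset, just to show the call signature.
B = 32
s, a = torch.randn(B, state_dim), torch.randn(B, action_dim)
s_next, s0 = torch.randn(B, state_dim), torch.randn(B, state_dim)
loss_omega = mwl_objective(s, a, s_next, s0) ** 2  # squared objective, as in equations 70-71
print(loss_omega.item())
```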
DualDICE: In contrast to MWL (Uehara & Jiang, 2019), DualDICE (Nachum et al., 2019a) introduces dual variables through the change-of-variables trick, and minimizes the Bellman residual of the dual variables ν(s, a) to estimate the ratio, such that:
ν*(s, a) − Bπν*(s, a) = ωπ/D(s, a)   (72)
the solution to which can be achieved by optimizing the following objective:
min_ν L(ν) = (1/2) E_{dD}[(ν − Bπν)(s, a)²] − (1− γ) E_{s0,a0∼β(s,a)}[ν(s0, a0)]   (73)
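For comparison, a sketch of the empirical version of equation 73 is shown below, reusing the illustrative MLP, policy stand-in, and dummy batch from the previous sketch. It approximates the Bellman operator Bπν(s, a) by the single-sample target γν(s′, a′) with a′ ∼ π(·|s′) and squares the resulting residual, which is a simplification made by this sketch rather than the full DualDICE algorithm.

```python
def dualdice_loss(nu_net, s, a, s_next, s0, gamma=0.99):
    # Single-sample Bellman residual (nu - B^pi nu)(s, a) ~= nu(s, a) - gamma * nu(s', a'),
    # with a' ~ pi(.|s'); squaring it directly is a simplification of equation 73.
    a_next, a0 = sample_policy(s_next), sample_policy(s0)
    residual = nu_net(s, a) - gamma * nu_net(s_next, a_next)
    return 0.5 * (residual ** 2).mean() - (1.0 - gamma) * nu_net(s0, a0).mean()

nu_net = MLP(state_dim + action_dim)               # dual variable nu(s, a)
print(dualdice_loss(nu_net, s, a, s_next, s0).item())
```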
Minimizing Divergence for Density Ratio Estimation : The distribution ratio can be estimated using an objective similar to GANs (Goodfellow et al., 2014; Ho & Ermon, 2016), as also similarly
proposed in (Kostrikov et al., 2019).
max_h G(h) = E_{(s,a)∼dD}[log h(s, a)] + E_{(s,a)∼dπ}[log(1− h(s, a))]   (74)
where h is the discriminator class, discriminating between samples from dD and dπ. The optimal discriminator satisfies :
log h*(s, a) − log(1− h*(s, a)) = log (dD(s, a)/dπ(s, a))   (75)
The optimal solution of the discriminator is therefore equivalent to minimizing the divergence between dπ and dD, since the KL divergence is given by :
−DKL(dπ||dD) = E_{(s,a)∼dπ}[log (dD(s, a)/dπ(s, a))]   (76)
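A small sketch of this classifier-based ratio estimation (equations 74–75) follows. It assumes mini-batches of (s, a) pairs drawn from dD and from dπ are available; the network architecture, optimizer settings, and the dummy batches are assumptions of this sketch.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2                       # illustrative dimensions
disc = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(sa_from_dD, sa_from_dpi):
    # Maximizing E_dD[log h] + E_dpi[log(1 - h)] is equivalent to minimizing the
    # binary cross-entropy with labels 1 for samples from dD and 0 for samples from dpi.
    logits_dD = disc(sa_from_dD).squeeze(-1)
    logits_dpi = disc(sa_from_dpi).squeeze(-1)
    loss = bce(logits_dD, torch.ones_like(logits_dD)) + bce(logits_dpi, torch.zeros_like(logits_dpi))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def log_ratio_dD_over_dpi(sa):
    # At the optimum, log h* - log(1 - h*) = log dD/dpi (equation 75), i.e. the raw logit.
    return disc(sa).squeeze(-1)

# Dummy batches standing in for samples from dD and dpi.
sa_dD = torch.randn(64, state_dim + action_dim)
sa_dpi = torch.randn(64, state_dim + action_dim)
print(train_step(sa_dD, sa_dpi))
print(log_ratio_dD_over_dpi(sa_dD)[:3].detach())
```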
Additionally, using the Donsker-Varadhan representation, we can further write the KL divergence term as :
−DKL(dπ||dD) = min_x { log E_{(s,a)∼dD}[exp x(s, a)] − E_{(s,a)∼dπ}[x(s, a)] }   (77)
such that now, instead of the discriminator class h, we learn the function class x, the optimal solution to which is equivalent to the distribution ratio plus a constant
x*(s, a) = log (dπ(s, a)/dD(s, a))   (78)
However, note that both the GAN-like objective
1. What is the main contribution of the paper, and how does it address the problem of offline reinforcement learning?
2. What are the strengths and weaknesses of the proposed algorithm, particularly in terms of its computational complexity and performance gain?
3. How does the paper regularize policy updates by the variance of a return, and why is this approach novel?
4. What are some potential issues with the paper's presentation and notation, and how might these affect the reviewer's understanding of the work?
5. Are there any concerns regarding the paper's experimental results or comparisons with other works in the field?
6. How does the paper relate to previous research on offline reinforcement learning, such as the use of Fenchel duality to avoid double-sampling problems?
7. Can the authors provide more clarity on certain equations and notations used throughout the paper, such as ωπ/D, dD, E(s,a)∼dD[ω(s,a)r(s,a)], and Qπ(s,a)?
8. How does Lemma 1 differ from Lemma 1 of Bisi et al., 2019, and what is its significance in understanding the variance regularizer?
9. What is the relationship between the beginning of Section 4.2 and Theorem 1, and how does the proof of Theorem 1 rely on a different inequality?
10. Are there any errors or typos in the paper that need to be addressed, such as the disappearance of ϕ in Equation 12 or the change of dπ′ to dπ in the fourth line of Equation 59?
11. Can the authors provide more explanation for Section 4.3 and the random variables involved in Theorem 2?
12. What is the final objective being optimized in Algorithm 1, and how does it relate to Equation 6?
Review
Disclaimer: this paper was assigned to me as an emergency review paper, so I might be missing something important.
Evaluation
I recommend rejection. I think the paper presents an interesting idea to regularize policy updates by the variance of a return. Besides, it proves that the return-variance-regularized policy update leads to a monotonic policy improvement, which I think is novel. That being said, the paper seems to have several issues that decreased my evaluation. First, the clarity of the paper is really low: many typos, ambiguous notations, and confusing presentation of a new algorithm. Second, I could not understand some parts of the paper, especially a proof of the monotonic policy improvement theorem. Third, it is unclear why the proposed regularizer is suitable for offline RL.
Paper Summary
Offline RL is (said to be) a key to enable RL applications to real world problems. The paper proposes a new algorithm, which regularizes policy updates by the variance of return, for offline RL. The paper proves a monotonic policy improvement when the return-variance-regularized policy update is used. Experiments show moderate performance gain by the regularization.
Strong Points
Interesting idea to regularize policy updates by the variance of a return
Moderate performance gain, especially when offline dataset is obtained by a suboptimal policy.
Weak Points
The paper is not well-written. It contains typos, unclear sentences, and ambiguous notations.
Some parts of the paper, like the proof of the monotonic policy improvement theorem, seem to contain mistakes. (I might be wrong, though.)
The performance gain seems to be moderate, despite a high complexity of the computation of the proposed regularizer.
Comments to the Authors
If I am missing something, please feel free to let me know. As noted above, I could not spare much time to review the paper.
The paper is not well-written. Please revise it again. (I don't point out all ambiguous notations and typos, but later I point out some serious ones.) Besides, please revise the references section. It refers to arxiv versions of papers that were accepted to conferences.
In page 3, ωπ/D suddenly appears. What does it mean? Maybe ωπ/μ?
What does s ∼ D mean? Does it mean s ∼ dμ? Since the dataset D contains states visited by μ, simply drawing states from D will be different from dμ.
What is dD in, for example, Equation 4?
The idea of using Fenchel duality to avoid the double-sampling problem seems not to be new (cf. the SBEED paper). While the paper mentions AlgaeDICE as an algorithm using a similar technique, it does not mention SBEED regarding the use of Fenchel duality. Why?
In the beginning of Section 3.4, the min-max problem is being solved by repeating the inner minimization and outer maximization. As far as I remember (I might be wrong, though!), this way of solving a min-max problem might not find the exact solution. Isn't it a problem?
In Equation 6, it seems that Qπ(s, a) is rewritten as E(s,a)∼dD[ω(s, a)r(s, a)]. According to the notation of the paper, shouldn't E(s,a)∼dD[ω(s, a)r(s, a)] be (1 − γ) Es0∼β, at∼π[∑_{t=0}^∞ γ^t r(st, at)] ≠ Qπ(s, a)?
Is Lemma 1 different from Lemma 1 of Bisi et al., 2019? Also, what is its meaning? Why is it useful for understanding the variance regularizer?
How is the beginning of Section 4.2 related to Theorem 1? Theorem 1 seems to be derived based on a different inequality in its proof.
As far as I remember, the dual form of the total variation is sup_{f∈C1} E_{A∼P}[f(A)] − E_{B∼Q}[f(B)], where C1 is the space of all continuous functions bounded by 1. Therefore, we don't need ϕ and can explicitly state the space of f in Equation 12. Am I wrong or missing something?
Why is there no sup over f in Theorem 1?
As for Equation 59, how do you get the first line? In addition, do you need Es∼dπ? In the second line, the first Es∼dπ′, a∼π is sampling an action from π. Isn't it π′? In the fourth line, dπ′ changed to dπ. How is it possible?
I don't fully understand what Section 4.3 does. In addition, what are the random variables in Theorem 2?
I don't understand what the final objective to be optimized is. In Algorithm 1, J(θ, ϕ, ψ, ν) appears. Is it the same as Equation 6?
ICLR
Title
Offline Policy Optimization with Variance Regularization
Abstract
Learning policies from fixed offline datasets is a key challenge for scaling up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to mismatch between the dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVR) can be used to augment any existing offline policy optimization algorithm. We show that the regularizer leads to a lower bound to the offline policy optimization objective, which can help avoid over-estimation errors, and explains the benefits of our approach across a range of continuous control domains when compared to existing algorithms.
1 INTRODUCTION
Offline batch reinforcement learning (RL) algorithms are key towards scaling up RL for real world applications, such as robotics (Levine et al., 2016) and medical problems. This is because offline RL provides the appealing ability for agents to learn from fixed datasets, similar to supervised learning, avoiding continual interaction with the environment, which could be problematic for safety and feasibility reasons. However, significant mismatch between the fixed collected data and the policy that the agent is considering can lead to high variance of value function estimates, a problem encountered by most off-policy RL algorithms (Precup et al., 2000). A complementary problem is that the value function can become overly optimistic in areas of state space that are outside the visited batch, leading the agent into data regions where its behavior is poor (Fujimoto et al., 2019). Recently there has been some progress in offline RL (Kumar et al., 2019; Wu et al., 2019b; Fujimoto et al., 2019), trying to tackle both of these problems.
In this work, we study the problem of offline policy optimization with variance minimization. To avoid overly optimistic value function estimates, we propose to learn value functions under variance constraints, leading to a pessimistic estimation, which can significantly help offline RL algorithms, especially under large distribution mismatch. We propose a framework for variance minimization in offline RL, such that the obtained estimates can be used to regularize the value function and enable more stable learning under different off-policy distributions.
We develop a novel approach for variance regularized offline actor-critic algorithms, which we call Offline Variance Regularizer (OVR). The key idea of OVR is to constrain the policy improvement step via variance regularized value function estimates. Our algorithmic framework avoids the double sampling issue that arises when computing gradients of variance estimates, by instead considering the variance of stationary distribution corrections with per-step rewards, and using the Fenchel transformation (Boyd & Vandenberghe, 2004) to formulate a minimax optimization objective. This allows minimizing variance constraints by instead optimizing dual variables, resulting in simply an augmented reward objective for variance regularized value functions.
We show that even with variance constraints, we can ensure policy improvement guarantees, where the regularized value function leads to a lower bound on the true value function, which mitigates the usual overestimation problems in batch RL. The use of Fenchel duality in computing the variance allows us to avoid double sampling, which has been a major bottleneck in scaling up variance-constrained
actor-critic algorithms in prior work A. & Ghavamzadeh (2016); A. & Fu (2018). Practically, our algorithm is easy to implement, since it simply involves augmenting the rewards with the dual variables only, such that the regularized value function can be implemented on top of any existing offline policy optimization algorithms. We evaluate our algorithm on existing offline benchmark tasks based on continuous control domains. Our empirical results demonstrate that the proposed variance regularization approach is particularly useful when the batch dataset is gathered at random, or when it is very different from the data distributions encountered during training.
2 PRELIMINARIES AND BACKGROUND
We consider an infinite horizon MDP (S, A, P, γ), where S is the set of states, A is the set of actions, P is the transition dynamics and γ is the discount factor. The goal of reinforcement learning is to maximize the expected return J(π) = Es∼dβ[V π(s)], where V π(s) is the value function V π(s) = E[∑_{t=0}^∞ γ^t r(st, at) | s0 = s], and β is the initial state distribution. Considering parameterized policies πθ(a|s), the goal is to maximize the returns by following the policy gradient (Sutton et al., 1999), based on the performance metric defined as:
J(πθ) = Es0∼ρ,a0∼π(s0) [ Qπθ (s0, a0) ] = E(s,a)∼dπθ (s,a) [ r(s, a) ] (1)
where Qπ(s, a) is the state-action value function, since V π(s) = ∑a π(a|s)Qπ(s, a). The policy optimization objective can be equivalently written in terms of the normalized discounted occupancy measure under the current policy πθ, where dπ(s, a) is the state-action occupancy measure, such that the normalized state-action visitation distribution under policy π is defined as: dπ(s, a) = (1 − γ) ∑_{t=0}^∞ γ^t P(st = s, at = a | s0 ∼ β, a ∼ π(s0)). The equality in equation 1 holds and can be equivalently written based on the linear programming (LP) formulation in RL (see (Puterman, 1994; Nachum & Dai, 2020) for more details). In this work, we consider the off-policy learning problem under a fixed dataset D which contains (s, a, r, s′) tuples under a known behaviour policy µ(a|s). Under the off-policy setting, importance sampling (Precup et al., 2000) is often used to reweight the trajectory under the behaviour data collecting policy, so as to get unbiased estimates of the expected returns. At each time step, the importance sampling correction π(at|st)/µ(at|st) is used to compute the expected return under the entire trajectory as J(π) = (1 − γ) E_{(s,a)∼dµ(s,a)}[∑_{t=0}^T γ^t r(st, at) (∏_{t=1}^T π(at|st)/µ(at|st))]. Recent works (Fujimoto et al., 2019) have demonstrated that instead of importance sampling corrections, maximizing value functions directly for deterministic or reparameterized policy gradients (Lillicrap et al., 2016; Fujimoto et al., 2018) allows learning under fixed datasets, by addressing the over-estimation problem, by maximizing objectives of the form maxθ Es∼D[Qπθ(s, πθ(s))].
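As a concrete illustration of the per-decision correction above, the short numpy sketch below computes the importance-weighted return ∑t γ^t rt ρ0:t for a single logged trajectory. The trajectory arrays are placeholders; in practice the per-step probabilities would come from the behaviour policy µ and the target policy π.

```python
import numpy as np

def pdis_return(rewards, pi_probs, mu_probs, gamma=0.99):
    """Per-decision importance sampling return: sum_t gamma^t * r_t * prod_{k<=t} pi(a_k|s_k)/mu(a_k|s_k)."""
    rho = np.cumprod(np.asarray(pi_probs) / np.asarray(mu_probs))   # rho_{0:t}
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * np.asarray(rewards) * rho))

# Placeholder trajectory: rewards and per-step action probabilities under pi and mu.
rewards = [1.0, 0.0, 0.5, 1.0]
pi_probs = [0.7, 0.4, 0.9, 0.6]
mu_probs = [0.5, 0.5, 0.5, 0.5]
print(pdis_return(rewards, pi_probs, mu_probs))
```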
3 VARIANCE REGULARIZATION VIA DUALITY IN OFFLINE POLICY OPTIMIZATION
In this section, we first present our approach based on the variance of stationary distribution corrections, compared to importance re-weighting of episodic returns, in section 3.1. We then present a derivation of our approach based on Fenchel duality on the variance, to avoid the double sampling issue, leading to a variance regularized offline optimization objective in section 3.2. Finally, we present our algorithm in Algorithm 1, where the proposed regularizer can be used in any existing offline RL algorithm.
3.1 VARIANCE OF REWARDS WITH STATIONARY DISTRIBUTION CORRECTIONS
In this work, we consider the variance of rewards under occupancy measures in offline policy optimization. Let us denote the returns as Dπ = ∑_{t=0}^T γ^t r(st, at), such that the value function is V π = Eπ[Dπ]. The 1-step importance sampling ratio is ρt = π(at|st)/µ(at|st), and the T-step ratio can be denoted ρ1:T = ∏_{t=1}^T ρt. Considering per-decision importance sampling (PDIS) (Precup et al., 2000), the returns can be similarly written as Dπ = ∑_{t=0}^T γ^t rt ρ0:t. The variance of episodic returns, which we denote by VP(π), with off-policy importance sampling corrections can be written as: VP(π) = E_{s∼β, a∼µ(·|s), s′∼P(·|s,a)}[(Dπ(s, a) − J(π))²].
Instead of importance sampling, several recent works have proposed marginalized importance sampling with stationary state-action distribution corrections (Liu et al., 2018; Nachum et al., 2019a; Zhang et al., 2020; Uehara & Jiang, 2019), which can lead to lower variance estimators at the cost of introducing bias. Denoting the stationary distribution ratios as ω(s, a) = dπ(s, a)/dµ(s, a), the returns can be written as Wπ(s, a) = ω(s, a)r(s, a). The variance of marginalized IS is:
VD(π) = E(s,a)∼dµ(s,a) [( Wπ(s, a)− J(π) )2] = E(s,a)∼dµ(s,a) [ Wπ(s, a)2 ] − E(s,a)∼dµ(s,a) [ Wπ(s, a) ]2 (2)
Our key contribution is to first consider the variance of marginalized IS, VD(π), itself as a risk constraint in the offline batch optimization setting. We show that constraining the offline policy optimization objective with the variance of marginalized IS, and using the Fenchel-Legendre transformation on VD(π), can help avoid the well-known double sampling issue in variance risk constrained RL (for more details on how to compute the gradient of the variance term, see appendix B). We emphasize that the variance here is solely based on returns with occupancy measures, and we do not consider the variance due to the inherent stochasticity of the MDP dynamics.
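Given batch estimates of the ratio ω(s, a) and the rewards, the quantity in equation 2 is simply the variance of the per-sample products ω(s, a)r(s, a). A short numpy sketch with placeholder arrays:

```python
import numpy as np

# Placeholder per-sample quantities from the batch: omega(s, a) = d_pi/d_mu and rewards r(s, a).
omega = np.array([0.8, 1.2, 0.5, 1.5, 1.0])
r = np.array([1.0, 0.0, 0.5, 2.0, 1.0])

w = omega * r                                      # W_pi(s, a) = omega(s, a) * r(s, a)
v_d = np.mean(w ** 2) - np.mean(w) ** 2            # empirical V_D(pi), as in equation 2
print(v_d)
```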
3.2 VARIANCE REGULARIZED OFFLINE MAX-RETURN OBJECTIVE
We consider the variance regularized off-policy max return objective with stationary distribution corrections ωπ/D (which we denote ω for short for clarity) in the offline fixed dataset D setting:
max_{πθ} J(πθ) := Es∼D[Qπθ(s, πθ(s))] − λ VD(ω, πθ)   (3)
where λ ≥ 0 allows for the trade-off between offline policy optimization and variance regularization (or equivalently variance risk minimization). The max-return objective under Qπθ (s, a) has been considered in prior works in offline policy optimization (Fujimoto et al., 2019; Kumar et al., 2019). We show that this form of regularizer encourages variance minimization in offline policy optimization, especially when there is a large data distribution mismatch between the fixed dataset D and induced data distribution under policy πθ.
3.3 VARIANCE REGULARIZATION VIA FENCHEL DUALITY
At first, equation 3 seems difficult to optimize, especially the minimization of the variance regularization w.r.t. θ. This is because finding the gradient of V(ω, πθ) would lead to the double sampling issue, since it contains the square of an expectation term. The key contribution of OVR is to use the Fenchel duality trick on the second term of the variance expression in equation 2, for regularizing the policy optimization objective with the variance of marginalized importance sampling. Applying Fenchel duality, x² = max_y(2xy − y²), to the second term of the variance expression, we can transform the variance minimization problem into an equivalent maximization problem, by introducing the dual variables ν(s, a). We have the Fenchel conjugate of the variance term as:
V(ω, πθ) = max_ν {−(1/2) ν(s, a)² + ν(s, a) ω(s, a) r(s, a) + E_{(s,a)∼dD}[ω(s, a) r(s, a)²]} = max_ν E_{(s,a)∼dD}[−(1/2) ν(s, a)² + ν(s, a) ω(s, a) r(s, a) + ω(s, a) r(s, a)²]   (4)
Regularizing the policy optimization objective with the variance under the Fenchel transformation, we therefore have the overall max-min optimization objective, explicitly written as:
max_θ min_ν J(πθ, ν) := Es∼D[Qπθ(s, πθ(s))] − λ E_{(s,a)∼dD}[(−(1/2) ν² + ν·ω·r + ω·r²)(s, a)]   (5)
3.4 AUGMENTED REWARD OBJECTIVE WITH VARIANCE REGULARIZATION
In this section, we explain the key steps that lead to the policy improvement step being an augmented variance regularized reward objective. The variance minimization step involves estimating the stationary distribution ratio (Nachum et al., 2019a), and then simply computing the closed form solution for the dual variables. Fixing the dual variables ν, to update πθ, note that this leads to a standard maximum return objective in the dual form, which can be equivalently solved in the primal form,
using augmented rewards. This is because we can write the above in the dual form as:
J(πθ, ν, ω) := E_{(s,a)∼dD(s,a)}[ω(s, a) · r(s, a) − λ(−(1/2)ν² + ν·ω·r + ω·r²)(s, a)]
= E_{(s,a)∼dD(s,a)}[ω(s, a) · (r − λ·ν·r − λ·r²)(s, a) + (λ/2) ν(s, a)²]
= E_{(s,a)∼dD(s,a)}[ω(s, a) · r̃(s, a) + (λ/2) ν(s, a)²]   (6)
where we denote the augmented rewards as : r̃(s, a) ≡ [r − λ · ν · r − λ · r2](s, a) (7)
The policy improvement step can either be achieved by directly solving equation 6 or by considering the primal form of the objective with respect to Qπθ(s, πθ), as in (Fujimoto et al., 2019; Kumar et al., 2019). However, solving equation 6 directly can be troublesome, since the policy gradient step involves finding the gradient w.r.t. ω(s, a) = dπθ(s, a)/dD(s, a) too, where the distribution ratio depends on dπθ(s, a). This means that the gradient w.r.t. θ would require finding the gradient w.r.t. the normalized discounted occupancy measure, i.e., ∇θ dπθ(s). Instead, it is therefore easier to consider the augmented reward objective, using r̃(s, a) as in equation 7, in any existing offline policy optimization algorithm, where we have the variance regularized value function Q̃πθ(s, a).
Note that as highlighted in (Sobel, 1982), the variance of returns follows a Bellman-like equation. Following this, (Bisi et al., 2019) also pointed to a Bellman-like solution for variance w.r.t occupancy measures. Considering variance of the form in equation 2, and the Bellman-like equation for variance, we can write the variance recursively as a Bellman equation:
VπD(s, a) = ( r(s, a)− J(π) )2 + γEs′∼P,a′∼π′(·|s′) [ VπD(s′, a′) ] (8)
Since in our objective we augment the policy improvement step with the variance regularization term, we can write the augmented value function as Qπλ(s, a) := Qπ(s, a) − λ VπD(s, a). This suggests we can modify existing policy optimization algorithms with augmented rewards on the value function.
Remark: Applying the Fenchel transformation to the variance regularized objective, however, at first glance seems to make the augmented rewards dependent on the policy itself, since r̃(s, a) depends on the dual variables ν(s, a) as well. This can make the rewards non-stationary, so that the policy maximization step cannot be solved directly via the maximum return objective. However, as we discuss next, the dual variables for minimizing the variance term have a closed-form solution ν(s, a), and thereby do not lead to any non-stationarity in the rewards, due to the alternating minimization and maximization steps.
Variance Minimization Step: Fixing the policy πθ, the dual variables ν can be obtained using the closed-form solution given by ν(s, a) = ω(s, a) · r̃(s, a). Note that directly optimizing for the target policies using batch data, however, requires a fixed point estimate of the stationary distribution corrections, which can be achieved using existing algorithms (Nachum et al., 2019a; Liu et al., 2018). Solving the optimization objective additionally requires estimating the state-action distribution ratio, ω(s, a) = dπ(s, a)/dD(s, a). Recently, several works have proposed estimating the stationary distribution ratio, mostly for the off-policy evaluation case in the infinite horizon setting (Zhang et al., 2020; Uehara & Jiang, 2019). We include a detailed discussion of this in appendix E.4.
Algorithm : Our proposed variance regularization approach with returns under stationary distribution corrections for offline optimization can be built on top of any existing batch off-policy optimization algorithms. We summarize our contributions in Algorithm 1. Implementing our algorithm requires estimating the state-action distribution ratio, followed by the closed form estimate of the dual variable ν. The augmented stationary reward with the dual variables can then be used to compute the regularized value function Qπλ(s, a). The policy improvement step involves maximizing the variance regularized value function, e.g with BCQ (Fujimoto et al., 2019).
4 THEORETICAL ANALYSIS
In this section, we provide theoretical analysis of offline policy optimization algorithms in terms of policy improvement guarantees under fixed dataset D. Following then, we demonstrate that using the variance regularizer leads to a lower bound for our policy optimization objective, which leads to a pessimistic exploitation approach for offline algorithms.
Algorithm 1 Offline Variance Regularizer
Initialize critic Qφ, policy πθ, network ωψ and regularization weighting λ; learning rate η
for t = 1 to T do
    Estimate the distribution ratio ωψ(s, a) using any existing DICE algorithm
    Estimate the dual variable ν(s, a) = ωψ(s, a) · r̃(s, a)
    Calculate the augmented rewards r̃(s, a) using equation 7
    Policy improvement step using any offline policy optimization algorithm with augmented rewards r̃(s, a): θt = θt−1 + η ∇θ J(θ, φ, ψ, ν)
end for
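A high-level sketch of one iteration of Algorithm 1 is shown below. The DICE-style ratio estimator and the offline policy-improvement step (the stand-in `offline_policy_update`) are assumed to be provided by an existing algorithm such as BCQ, and, as a simplifying reading of the variance-minimization step, the dual variable is set to the closed-form maximizer of the inner problem in equation 4 (ν = ω·r) before the augmented reward of equation 7 is formed.

```python
import numpy as np

def ovr_iteration(batch, estimate_ratio, offline_policy_update, lam=0.1):
    """One illustrative OVR iteration; the helper names and batch format are assumptions.

    batch: dict with arrays 's', 'a', 'r', 's_next' from the offline dataset D.
    estimate_ratio: callable returning omega(s, a) = d_pi / d_D (e.g. any DICE estimator).
    offline_policy_update: callable running one policy-improvement step of an existing
        offline algorithm (e.g. BCQ) on the batch with the supplied rewards.
    """
    r = batch['r']
    omega = estimate_ratio(batch['s'], batch['a'])       # distribution ratio w(s, a)
    nu = omega * r                                        # dual variable (closed-form maximizer of eq. 4)
    r_tilde = r - lam * nu * r - lam * r ** 2             # augmented reward, equation 7
    return offline_policy_update(batch, r_tilde)

# Dummy stand-ins, just to show the call structure.
rng = np.random.default_rng(0)
batch = {'s': rng.normal(size=(32, 4)), 'a': rng.normal(size=(32, 2)),
         'r': rng.normal(size=32), 's_next': rng.normal(size=(32, 4))}
out = ovr_iteration(batch,
                    estimate_ratio=lambda s, a: np.ones(len(s)),        # placeholder ratio estimate
                    offline_policy_update=lambda b, r_aug: float(r_aug.mean()))
print(out)
```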
4.1 VARIANCE OF MARGINALIZED IMPORTANCE SAMPLING AND IMPORTANCE SAMPLING
We first show in Lemma 1 that the variance of episodic returns with importance sampling corrections can be upper bounded based on the variance of rewards under stationary distribution corrections. We emphasize that in the off-policy setting under distribution corrections, the variance is due to the estimation of the density ratio, compared to the importance sampling corrections. Lemma 1. The following inequality holds between the variance of per-step rewards under stationary distribution corrections, denoted by VD(π), and the variance of episodic returns with importance sampling corrections, VP(π):
VP(π) ≤ VD(π) / (1− γ)²   (9)
The proof for this and discussions on the variance of episodic returns compared to per-step rewards under occupancy measures is provided in the appendix B.1.
4.2 POLICY IMPROVEMENT BOUND UNDER VARIANCE REGULARIZATION
In this section, we establish performance improvement guarantees (Kakade & Langford, 2002) for the variance regularized value function for policy optimization. Let us first recall that the performance improvement can be written in terms of the total variation divergence DTV between state distributions (Touati et al., 2020) (for more discussions on the performance bounds, see appendix C).
Lemma 2. For all policies π′ and π, we have the performance improvement bound based on the total variation of the state-action distributions dπ′ and dπ:
J(π′) ≥ Lπ(π′) − επ DTV(dπ′||dπ)   (10)
where επ = max_s |Ea∼π′(·|s)[Aπ(s, a)]|, and Lπ(π′) = J(π) + Es∼dπ, a∼π′[Aπ(s, a)]. For a detailed proof and discussions, see appendix C. Instead of considering the divergence between state visitation distributions, consider having access to state-action samples generated from the environment. To avoid importance sampling corrections, we can further consider the bound on the objective based on state-action visitation distributions, where we have an upper bound following from (Nguyen et al., 2010): DTV(dπ′(s)||dπ(s)) ≤ DTV(dπ′(s, a)||dπ(s, a)). Following Pinsker's inequality, we have:
J(π′) ≥ J(π) + Es∼dπ(s), a∼π′(·|s)[Aπ(s, a)] − επ E(s,a)∼dπ(s,a)[√(DKL(dπ′(s, a)||dπ(s, a)))]   (11)
Furthermore, we can exploit the relation between KL, total variation (TV) and variance through the variational representation of divergence measures. Recall that the total variation divergence between distributions P and Q is given by DTV(p, q) = (1/2) ∑x |p(x) − q(x)|. We can use the variational representation of the divergence measure. Denoting dπ(s, a) = βπ′(s, a), we have:
DTV(βπ′||βπ) = sup_{f:S×A→R} [E(s,a)∼βπ′[f(s, a)] − E(s,a)∼β(s,a)[φ* ◦ f(s, a)]]   (12)
where φ∗ is the convex conjugate of φ and f is the dual function class based on the variational representation of the divergence. Similar relations with the variational representations of f-divergences have also been considered in (Nachum et al., 2019b; Touati et al., 2020). We can finally obtain a bound for the policy improvement following this relation, in terms of the per-step variance: Theorem 1. For all policies π and π′, and the corresponding state-action visitation distributions dπ′ and dπ , we can obtain the performance improvement bound in terms of the variance of rewards under state-action occupancy measures.
J(π′) − J(π) ≥ Es∼dπ(s), a∼π′(a|s)[Aπ(s, a)] − Var(s,a)∼dπ(s,a)[f(s, a)]   (13)
where f(s, a) is the dual function class from the variational representation of variance.
Proof. For detailed proof, see appendix C.1.
4.3 LOWER BOUND OBJECTIVE WITH VARIANCE REGULARIZATION
In this section, we show that augmenting the policy optimization objective with a variance regularizer leads to a lower bound to the original optimization objective J(πθ). Following from (Metelli et al., 2018), we first note that the variance of marginalized importance weighting with distribution corrections can be written in terms of the α-Renyi divergence. Let p and q be two probability measures, such that the Renyi divergence is Fα = (1/α) log ∑x q(x) (p(x)/q(x))^α. When α = 1, this leads to the well-known KL divergence F1(p||q) = FKL(p||q). Let us denote the state-action occupancy measures under π and dataset D as dπ and dD. The variance of the state-action distribution ratios is Var(s,a)∼dD(s,a)[ωπ/D(s, a)]. When α = 2 for the Renyi divergence, we have:
Var(s,a)∼dD(s,a)[ωπ/D(s, a)] = F2(dπ||dD) − 1   (14)
Following from (Metelli et al., 2018), and extending results from importance sampling ρ to marginalized importance sampling ωπ/D, we provide the following result that bounds the variance of the approximated density ratio ω̂π/D in terms of the Renyi divergence:
Lemma 3. Assuming that the rewards of the MDP are bounded by a finite constant, ||r||∞ ≤ Rmax. Given random variable samples (s, a) ∼ dD(s, a) from dataset D, for any N > 0, the variance of marginalized importance weighting can be upper bounded as :
Var(s,a)∼dD(s,a)[ω̂π/D(s, a)] ≤ (1/N) ||r||²∞ F2(dπ||dD)   (15)
See appendix D.1 for more details. Following this, our goal is to derive a lower bound objective to our off-policy optimization problem. Concentration inequalities have previously been studied for both off-policy evaluation (Thomas et al., 2015a) and optimization (Thomas et al., 2015b). In our case, we can adapt the concentration bound derived from Cantelli's inequality and derive the following result based on the variance of marginalized importance sampling. Under state-action distribution corrections, we have the following lower bound to the off-policy policy optimization objective with stationary state-action distribution corrections.
Theorem 2. Given state-action occupancy measures dπ and dD, and assuming bounded reward functions, for any 0 < δ ≤ 1 and N > 0, we have with probability at least 1− δ that :
J(π) ≥ E(s,a)∼dD(s,a)[ωπ/D(s, a) · r(s, a)] − √((1− δ)/δ · Var(s,a)∼dD(s,a)[ωπ/D(s, a) · r(s, a)])   (16)
Equation 16 shows the lower bound policy optimization objective under risk-sensitive variance constraints. The key to our derivation in equation 16 of theorem 2 shows that given off-policy batch data collected with behaviour policy µ(a|s), we are indeed optimizing a lower bound to the policy optimization objective, which is regularized with a variance term to minimize the variance in batch off-policy learning.
5 EXPERIMENTAL RESULTS ON BENCHMARK OFFLINE CONTROL TASKS
Experimental Setup : We demonstrate the significance of variance regularizer on a range of continuous control domains (Todorov et al., 2012) based on fixed offline datasets from (Fu et al., 2020), which is a standard benchmark for offline algorithms. To demonstrate the significance of our variance regularizer OVR, we mainly use it on top of the BCQ algorithm and compare it with other existing baselines, using the benchmark D4RL (Fu et al., 2020) offline datasets for different tasks and off-policy distributions. Experimental results are given in table 1
Performance on Optimal and Medium Quality Datasets : We first evaluate the performance of OVR when the dataset consists of optimal and mediocre logging policy data. We collected the dataset using a fully (expert) or partially (medium) trained SAC policy. We build our algorithm OVR on top of BCQ, denoted by BCQ + VAR. Note that the OVR algorithm can be agnostic to the behaviour policy too for computing the distribution ratio (Nachum et al., 2019a) and the variance. We observe that even
though performance is marginally improved with OVR under expert settings, since the demonstrations are themselves optimal, we can achieve significant improvements under the medium dataset regime. This is because OVR plays a more important role when there is larger variance due to distribution mismatch between the data logging and target policy distributions. Experimental results are shown in the first two columns of figure 1.
Performance on Random and Mixed Datasets : We then evaluate the performance on random datasets, i.e, the worst-case setup when the data logging policy is a random policy, as shown in the last two columns of figure 1. As expected, we observe no improvements at all, and even existing baselines such as BCQ (Fujimoto et al., 2019) can work poorly under random dataset setting. When we collect data using a mixture of random and mediocre policy, denoted by mixed, the performance is again improved for OVR on top of BCQ, especially for the Hopper and Walker control domains. We provide additional experimental results and ablation studies in appendix E.1.
6 RELATED WORKS
We now discuss related works in offline RL, for evaluation and optimization, and their relations to variance and risk sensitive algorithms. We include more discussions of related works in appendix A.1. In off-policy evaluation, per-step importance sampling (Precup et al., 2000; 2001) has previously been used for off-policy evaluation function estimators. However, this leads to high variance estimators, and recent works proposed using marginalized importance sampling, for estimating stationary state-action distribution ratios (Liu et al., 2018; Nachum et al., 2019a; Zhang et al., 2019), to reduce variance but with additional bias. In this work, we build on the variance of marginalized IS to develop a variance risk sensitive offline policy optimization algorithm. This is in contrast to prior works on variance constrained online actor-critic (A. & Ghavamzadeh, 2016; Chow et al., 2017; Castro et al., 2012) and relates to constrained policy optimization methods (Achiam et al., 2017; Tessler et al., 2019).
For offline policy optimization, several works have recently addressed the overestimation problem in batch RL (Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019b), including the very recently proposed Conservative Q-Learning (CQL) algorithm (Kumar et al., 2020). Our work is done in parallel to CQL, due to which we do not include it as a baseline in our experiments. CQL learns a value function which is guaranteed to lower-bound the true value function. This helps prevent value over-estimation for out-of-distribution (OOD) actions, which is an important issue in offline RL. We
note that our approach is orthogonal to CQL in that CQL introduces a regularizer on the state action value function Qπ(s, a) based on the Bellman error (the first two terms in equation 2 of CQL), while we introduce a variance regularizer on the stationary state distribution dπ(s). Since the value of a policy can be expressed in two ways - either through Qπ(s, a) or occupancy measures dπ(s), both CQL and our paper are essentially motivated by the same objective of optimizing a lower bound on J(θ), but through different regularizers. Our work can also be considered similar to AlgaeDICE (Nachum et al., 2019b), since we introduce a variance regularizer based on the distribution corrections, instead of minimizing the f-divergence between stationary distributions in AlgaeDICE. Both our work and AlgaeDICE considers the dual form of the policy optimization objective in the batch setting, where similar to the Fenchel duality trick on our variance term, AlgaeDICE instead uses the variational form, followed by the change of variables tricks, inspired from (Nachum et al., 2019a) to handle their divergence measure.
7 DISCUSSION AND CONCLUSION
We proposed a new framework for offline policy optimization with variance regularization, called OVR, to tackle high variance issues due to distribution mismatch in offline policy optimization. Our work provides a practically feasible variance constrained actor-critic algorithm that avoids the double sampling issues in prior variance risk sensitive algorithms (Castro et al., 2012; A. & Ghavamzadeh, 2016). The presented variance regularizer leads to a lower bound to the true offline optimization objective, thus leading to pessimistic value function estimates, avoiding both the high variance and overestimation problems in offline RL. Experimentally, we evaluate the significance of OVR on standard benchmark offline datasets, with different data logging off-policy distributions, and show that OVR plays a more significant role when there is large variance due to distribution mismatch. While we only provide a variance related risk sensitive approach for offline RL, for future work it would be interesting to consider other risk sensitive approaches (Chow & Ghavamzadeh, 2014; Chow et al., 2017) and examine their significance in batch RL. We hope our proposed variance regularization framework will provide new opportunities for developing practically robust risk sensitive offline algorithms.
A APPENDIX : ADDITIONAL DISCUSSIONS
A.1 EXTENDED RELATED WORK
Other related works : Several other prior works have previously considered the batch RL setting (Lange et al., 2012) for off-policy evaluation, counterfactual risk minimization (Swaminathan & Joachims, 2015a;b), learning value based methods such as DQN (Agarwal et al., 2019), and others (Kumar et al., 2019; Wu et al., 2019b). Recently, batch off-policy optimization has also been introduced to reduce the exploitation error (Fujimoto et al., 2019) and for regularizing with arbitrary behaviour policies (Wu et al., 2019b). However, due to the per-step importance sampling corrections on episodic returns (Precup et al., 2000), off-policy batch RL methods remain challenging. In this work, we instead consider marginalized importance sampling corrections and correct for the stationary state-action distributions (Nachum et al., 2019a; Uehara & Jiang, 2019; Zhang et al., 2020). Additionally, under the framework of Constrained MDPs (Altman & Asingleutility, 1999), risk-sensitive and constrained actor-critic algorithms have been proposed previously (Chow et al., 2017; Chow & Ghavamzadeh, 2014; Achiam et al., 2017). However, these works come with their own demerits, as they mostly require minimizing the risk (i.e., variance) term, where finding the gradient of the variance term often leads to a double sampling issue (Baird, 1995). We avoid this by instead using Fenchel duality (Boyd & Vandenberghe, 2004), inspired by recent works (Nachum & Dai, 2020; Dai et al., 2018), and cast risk constrained actor-critic as a max-min optimization problem. Our work is closely related to (Bisi et al., 2019), which also considers the per-step variance of returns w.r.t state occupancy measures in the on-policy setting, while we instead consider the batch off-policy optimization setting with per-step rewards w.r.t stationary distribution corrections.
Constrained optimization has previously been studied in reinforcement learning for batch policy learning (Le et al., 2019) and optimization (Achiam et al., 2017), mostly under the framework of constrained MDPs (Altman & Asingleutility, 1999). In such frameworks, the cumulative return objective is augmented with a set of constraints, for safe exploration (Garcı́a et al., 2015; Perkins & Barto, 2003; Ding et al., 2020), or to reduce risk measures (Chow et al., 2017; A. & Fu, 2018; Castro et al., 2012). Batch learning algorithms (Lange et al., 2012) have been considered previously for counterfactual risk minimization and generalization (Swaminathan & Joachims, 2015a;b) and policy evaluation (Thomas et al., 2015a; Li et al., 2015), although little has been done for constrained offline policy based optimization. This raises the question of how we can learn policies in RL from fixed offline data, similar to supervised or unsupervised learning.
A.2 WHAT MAKES OFFLINE OFF-POLICY OPTIMIZATION DIFFICULT?
Offline RL optimization algorithms often suffer from distribution mismatch issues, since the underlying data distribution in the batch data may be quite different from the induced distribution under target policies. Recent works (Fujimoto et al., 2019; Kumar et al., 2019; Agarwal et al., 2019; Kumar et al., 2020) have tried to address this by avoiding overestimation of Q-values, which leads to extrapolation error when bootstrapping value function estimates. This leads to offline RL agents generalizing poorly for unseen regions of the dataset. Additionally, due to the distribution mismatch, value function estimates can also have large variance, due to which existing online off-policy algorithms (Haarnoja et al., 2018; Lillicrap et al., 2016; Fujimoto et al., 2018) may fail without online interactions with the environment. In this work, we address the latter problem by minimizing the variance of value function estimates through variance related risk constraints.
B APPENDIX : PER-STEP VERSUS EPISODIC VARIANCE OF RETURNS
Following from (Castro et al., 2012; A. & Ghavamzadeh, 2016), let us denote the returns with importance sampling corrections in the off-policy learning setting as :
D^π(s, a) = ∑_{t=0}^{T} γ^t r(s_t, a_t) ( ∏_{t=1}^{T} π(a_t|s_t)/µ(a_t|s_t) ) | s_0 = s, a_0 = a, τ ∼ µ   (17)
From this definition in equation 17, the action-value function with off-policy trajectory-wise importance corrections is Q^π(s, a) = E_{(s,a)∼d_µ(s,a)}[D^π(s, a)], and similarly the value function can be defined as V^π(s) = E_{s∼d_µ(s)}[D^π(s)]. For the trajectory-wise importance corrections, we can define the variance of the returns, similar to (A. & Fu, 2018), as :
V_P(π) = E_{(s,a)∼d_µ(s,a)}[ D^π(s, a)^2 ] − E_{(s,a)∼d_µ(s,a)}[ D^π(s, a) ]^2   (18)
where, as noted in (Sobel, 1982), equation 18 also follows a Bellman-like equation, although due to the lack of monotonicity required for dynamic programming (DP), such measures cannot be directly optimized by standard DP algorithms (A. & Fu, 2018).
In contrast, if we consider the variance of returns with stationary distribution corrections (Nachum et al., 2019a; Liu et al., 2018), rather than the product of importance sampling ratios, the variance term involves weighting the rewards with the distribution ratio ω_{π/µ}. Typically, the distribution ratio is approximated using a separate function class (Uehara & Jiang, 2019), such that the returns can be written as :
W^π(s, a) = ω_{π/D}(s, a) · r(s, a),  with  a ∼ π(·|s), (s, a) ∼ d_D(s, a)   (19)
where we denote D as the data distribution in the fixed dataset, collected by either a known or unknown behaviour policy. The variance of returns under occupancy measures is therefore given by :
V_D(π) = E_{(s,a)∼d_D(s,a)}[ W^π(s, a)^2 ] − E_{(s,a)∼d_D(s,a)}[ W^π(s, a) ]^2   (20)
where note that the variance expression in equation 20 depends on the square of the per-step rewards with distribution correction ratios. We denote this as the dual form of the variance of returns, in contrast to the primal form of the variance of expected returns (Sobel, 1982).
Note that while the variance term under episodic per-step importance sampling corrections in equation 18 and the variance with stationary distribution corrections in equation 20 measure related quantities, they are not interchangeable; following from (Bisi et al., 2019), we will show that the variance with distribution corrections indeed upper bounds the variance of importance sampling corrections. This is an important relationship, since constraining the policy improvement step under variance constraints with occupancy measures therefore allows us to obtain a lower bound to the offline optimization objective, similar to (Kumar et al., 2020).
B.1 PROOF OF LEMMA 1 : VARIANCE INEQUALITY
Following from (Bisi et al., 2019), we show that the variance of per-step rewards under occupancy measures, denoted by VD(π) upper bounds the variance of episodic returns VP(π).
V_P(π) ≤ V_D(π) / (1−γ)^2   (21)
Proof. The proof of Lemma 1, following (Bisi et al., 2019), is as follows. Denoting the returns, as above, but for the on-policy case with trajectories under π, as D^π(s, a) = ∑_{t=0}^{∞} γ^t r(s_t, a_t), and denoting the return objective as J(π) = E_{s_0∼ρ, a_t∼π(·|s_t), s′∼P}[ D^π(s, a) ], the variance of episodic returns can be written as :
V_P(π) = E_{(s,a)∼d_π(s,a)}[ ( D^π(s, a) − J(π)/(1−γ) )^2 ]   (22)
       = E_{(s,a)∼d_π(s,a)}[ (D^π(s, a))^2 ] + J(π)^2/(1−γ)^2 − ( 2J(π)/(1−γ) ) E_{(s,a)∼d_π(s,a)}[ D^π(s, a) ]   (23)
       = E_{(s,a)∼d_π(s,a)}[ D^π(s, a)^2 ] − J(π)^2/(1−γ)^2   (24)
Similarly, denoting returns under occupancy measures as W^π(s, a) = d_π(s, a) r(s, a), and the return objective under occupancy measures, equivalently written as J(π) = E_{(s,a)∼d_π(s,a)}[ r(s, a) ] based on the primal and dual forms of the objective (Uehara & Jiang, 2019; Nachum & Dai, 2020), we can equivalently write the variance as :
V_D(π) = E_{(s,a)∼d_π(s,a)}[ ( r(s, a) − J(π) )^2 ]   (25)
       = E_{(s,a)∼d_π(s,a)}[ r(s, a)^2 ] + J(π)^2 − 2J(π) E_{(s,a)∼d_π(s,a)}[ r(s, a) ]   (26)
       = E_{(s,a)∼d_π(s,a)}[ r(s, a)^2 ] − J(π)^2   (27)
Following from equations 22 and 25, we therefore have the following inequality :
(1−γ)^2 E_{s_0∼ρ, a∼π}[ D^π(s, a)^2 ] ≤ (1−γ)^2 E_{s_0∼ρ, a∼π}[ ( ∑_{t=0}^{∞} γ^t ) ( ∑_{t=0}^{∞} γ^t r(s_t, a_t)^2 ) ]   (28)
 = (1−γ) E_{s_0∼ρ, a∼π}[ ∑_{t=0}^{∞} γ^t r(s_t, a_t)^2 ]   (29)
 = E_{(s,a)∼d_π(s,a)}[ r(s, a)^2 ]   (30)
where the first line follows from Cauchy-Schwarz inequality. This concludes the proof.
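As a quick numerical sanity check of Lemma 1 (not part of the paper's experiments), the short Python sketch below estimates both variances by Monte Carlo on a toy two-state chain with a single action; the chain, rewards, horizon and sample sizes are arbitrary assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, T, n_samples = 0.9, 100, 2000
P = np.array([[0.7, 0.3], [0.4, 0.6]])   # toy 2-state transition matrix (single action)
r = np.array([1.0, -1.0])                # reward depends only on the state

def rollout_return(s0):
    """Discounted return of one (truncated) episode started at s0."""
    s, G = s0, 0.0
    for t in range(T):
        G += (gamma ** t) * r[s]
        s = rng.choice(2, p=P[s])
    return G

def sample_from_d_pi(s0):
    """Sample a state from the discounted occupancy d_pi via a geometric stopping time."""
    s = s0
    t_stop = rng.geometric(1.0 - gamma) - 1   # support {0, 1, ...}
    for t in range(min(t_stop, T)):
        s = rng.choice(2, p=P[s])
    return s

returns = np.array([rollout_return(0) for _ in range(n_samples)])
occ_rewards = np.array([r[sample_from_d_pi(0)] for _ in range(n_samples)])

V_P = returns.var()        # variance of episodic discounted returns, as in eq. (22)
V_D = occ_rewards.var()    # variance of per-step rewards under d_pi, as in eq. (25)
print("V_P =", V_P, "  V_D/(1-gamma)^2 =", V_D / (1.0 - gamma) ** 2)   # Lemma 1 bound
```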
We can further extend lemma 1 to off-policy returns under stationary distribution corrections (i.e., marginalized importance sampling) compared to importance sampling. Recall that we denote the variance under stationary distribution corrections as :
V_D(π) = E_{(s,a)∼d_D(s,a)}[ ( ω_{π/D}(s, a) · r(s, a) − J(π) )^2 ]   (31)
       = E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a)^2 · r(s, a)^2 ] − J(π)^2   (32)
where J(π) = E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ]. We denote the episodic returns with importance sampling corrections as D^π = ∑_{t=0}^{T} γ^t r_t ρ_{0:t}. The variance, as denoted earlier, is given by :
V_P(π) = E_{(s,a)∼d_π(s,a)}[ D^π(s, a)^2 ] − J(π)^2/(1−γ)^2   (33)
We therefore have the following inequality :
(1−γ)^2 E_{s_0∼ρ, a∼π}[ D^π(s, a)^2 ] ≤ (1−γ)^2 E_{s_0∼ρ, a∼π}[ ( ∑_{t=0}^{T} γ^t ) ( ∑_{t=0}^{T} γ^t r(s_t, a_t)^2 ) ( ∏_{t=0}^{T} π(a_t|s_t)/µ_D(a_t|s_t) )^2 ]
 = (1−γ) E_{s_0∼ρ, a∼π}[ ∑_{t=0}^{∞} γ^t r(s_t, a_t)^2 ( ∏_{t=0}^{T} π(a_t|s_t)/µ_D(a_t|s_t) )^2 ]   (34)
 = E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a)^2 · r(s, a)^2 ]   (35)
which shows that Lemma 1 also holds for off-policy returns with stationary distribution corrections.
B.2 DOUBLE SAMPLING FOR COMPUTING GRADIENTS OF VARIANCE
The gradient of the variance term often leads to the double sampling issue, thereby making it impractical to use. This issue has also been pointed out by several other works (A. & Ghavamzadeh, 2016; Castro et al., 2012; Chow et al., 2017), since the variance involves the square of the objective function itself. Recall that we have:
V_D(θ) = E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a)^2 ] − { E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a) ] }^2   (36)
The gradient of the variance term is therefore :
∇_θ V_D(θ) = ∇_θ E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a)^2 ] − 2 · { E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a) ] } · ∇_θ { E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a) ] }   (37)
where equation 37 requires multiple samples to compute the expectations in the second term. To see why this is true, let us denote J(θ) = E_{d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] and write IS(ω, π_θ) = ω_{π/D}(s, a) · r(s, a) as the per-step corrected returns in short form. The variance of the returns with the stationary state-action distribution corrections can therefore be written as :
V_D(θ) = E_{d_D(s,a)}[ IS(ω, π_θ)^2 ]  (a)  −  E_{d_D(s,a)}[ IS(ω, π_θ) ]^2  (b)   (38)
We derive the gradient of each of the terms (a) and (b) in equation 38 below. First, we find the gradient of term (a) w.r.t θ :
∇_θ E_{d_D(s,a)}[ IS(ω, π_θ)^2 ] = ∇_θ ∑_{s,a} d_D(s, a) IS(ω, π_θ)^2 = ∑_{s,a} d_D(s, a) ∇_θ IS(ω, π_θ)^2
 = ∑_{s,a} d_D(s, a) · 2 · IS(ω, π_θ) · IS(ω, π_θ) · ∇_θ log π_θ(a|s)
 = 2 · ∑_{s,a} d_D(s, a) IS(ω, π_θ)^2 ∇_θ log π_θ(a|s)
 = 2 · E_{d_D(s,a)}[ IS(ω, π_θ)^2 · ∇_θ log π_θ(a|s) ]   (39)
Equation 39 interestingly shows that the variance of the returns w.r.t π_θ has a form similar to the policy gradient term, except that the critic estimate in this case is given by the importance corrected returns, since IS(ω, π_θ) = ω_{π/D}(s, a) · r(s, a). We next find the gradient of term (b) in equation 38 w.r.t θ :
∇_θ E_{d_D(s,a)}[ IS(ω, π_θ) ]^2 = ∇_θ J(θ)^2 = 2 · J(θ) · E_{d_D(s,a)}[ ω_{π/D} · ∇_θ log π_θ(a|s) · Q^π(s, a) ]   (40)
Overall, the expression for the gradient of the variance term is therefore :
∇_θ V_D(θ) = 2 · E_{d_D(s,a)}[ IS(ω, π_θ)^2 · ∇_θ log π_θ(a|s) ] − 2 · J(θ) · E_{d_D(s,a)}[ ω_{π/D} · ∇_θ log π_θ(a|s) · Q^π(s, a) ]   (41)
The variance gradient in equation 41 is difficult to estimate in practice, since it involves both the gradient of the objective and the objective J(θ) itself. This is known as the double sampling issue (Baird, 1995), which requires separate independent rollouts. Previously, (Castro et al., 2012) tackled the gradient of the variance term using simultaneous perturbation stochastic approximation (SPSA) (Spall, 1992), where we can keep running estimates of both the return and the variance term, and use a two time scale algorithm for computing the gradient of the variance regularizer with per-step importance sampling corrections.
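To make the double sampling issue concrete, the sketch below estimates the gradient of equation 41 for a toy single-state (bandit) problem with a softmax policy, using two independent minibatches so that the product J(θ)·∇_θJ(θ) is not formed from the same samples. The bandit, rewards, behaviour policy and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4                                     # toy bandit with K actions
r_true = np.array([1.0, 0.5, -0.5, 0.2])  # deterministic toy rewards
mu = np.full(K, 1.0 / K)                  # uniform behaviour policy
theta = rng.normal(size=K)                # softmax target-policy parameters

def pi(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def grad_log_pi(theta, a):
    # d/dtheta log softmax(theta)[a] = onehot(a) - pi(theta)
    g = -pi(theta)
    g[a] += 1.0
    return g

def sample_batch(n):
    a = rng.choice(K, size=n, p=mu)
    return a, r_true[a]

def grad_variance(theta, batch1, batch2):
    """Estimate of eq. (41); the squared-expectation term uses an independent batch."""
    p = pi(theta)
    a1, r1 = batch1
    a2, r2 = batch2
    is1 = (p[a1] / mu[a1]) * r1                         # IS(omega, pi_theta), batch 1
    is2 = (p[a2] / mu[a2]) * r2                         # IS(omega, pi_theta), batch 2
    g1 = np.stack([grad_log_pi(theta, a) for a in a1])
    g2 = np.stack([grad_log_pi(theta, a) for a in a2])
    term_a = 2.0 * (is1[:, None] ** 2 * g1).mean(axis=0)   # 2 E[IS^2 grad log pi]
    J_hat = is1.mean()                                      # J(theta) from batch 1
    grad_J = (is2[:, None] * g2).mean(axis=0)               # grad J(theta) from batch 2
    return term_a - 2.0 * J_hat * grad_J

print(grad_variance(theta, sample_batch(256), sample_batch(256)))
```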
B.3 ALTERNATIVE DERIVATION : VARIANCE REGULARIZATION VIA FENCHEL DUALITY
In the derivation of our algorithm, we applied the Fenchel duality trick to the second term of the variance expression. An alternative way to derive the proposed algorithm would be to see what happens if we apply the Fenchel duality trick to both terms of the variance expression. This might be useful since equation 41 requires evaluating both the gradient terms and the actual objective J(θ), due to the analytical expression of the form ∇_θJ(θ) · J(θ), hence suffering from a double sampling issue. In general, the Fenchel duality is given by :
x^2 = max_y ( 2xy − y^2 )   (42)
and applying Fenchel duality to both terms, since they both involve squared terms, we get :
E_{d_D(s,a)}[ IS(ω, π_θ)^2 ] ≡ E_{d_D(s,a)}[ max_y { 2 · IS(ω, π_θ) · y(s, a) − y(s, a)^2 } ]
 = 2 · max_y { E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) ] − E_{d_D(s,a)}[ y(s, a)^2 ] }   (43)
Similarly, applying Fenchel duality to the second term (b) we have :
E_{d_D(s,a)}[ IS(ω, π_θ) ]^2 = max_ν { 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) ] − ν^2 }   (44)
Overall, we therefore have the variance term, after applying Fenchel duality, as follows, leading to an overall objective of the form max_y max_ν V_D(θ), which we can use as our variance regularizer :
V_D(θ) = 2 · max_y { E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) ] − E_{d_D(s,a)}[ y(s, a)^2 ] } − max_ν { 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) ] − ν^2 }   (45)
Using the variance of stationary distribution correction returns as a regularizer, we can find the gradient of the variance term w.r.t θ as follows, where the gradient terms dependent on the dual variables y and ν are 0.
∇_θ V_D(θ) = 2 · ∇_θ E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) ] − 0 − 2 · ∇_θ E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) ] + 0
 = 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) · ∇_θ log π_θ(a|s) ] − 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) · ∇_θ log π_θ(a|s) ]
 = 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · ∇_θ log π_θ(a|s) · { y(s, a) − ν(s, a) } ]   (46)
Note that from equation 46, the two terms in the gradient are almost identical, and the difference comes only from the difference between the two dual variables y(s, a) and ν(s, a). Note that our variance term also requires separately maximizing the dual variables, both of which have the following closed form updates :
∇_ν V_D(θ) = −2 · ∇_ν E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) ] + ∇_ν ν^2 = 0   (47)
Solving this exactly leads to the closed form solution ν(s, a) = E_{d_D(s,a)}[ IS(ω, π_θ) ]. Similarly, we can also solve exactly for the dual variable y, such that :
∇_y V_D(θ) = 2 · ∇_y E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) ] − 2 · ∇_y E_{d_D(s,a)}[ y(s, a)^2 ] = 0   (48)
Solving this exactly also leads to the closed form solution y(s, a) = (1/2) · IS(ω, π_θ) = (1/2) · ( d_π(s,a)/d_µ(s,a) ) · r(s, a). Note that the exact solutions for the two dual variables are similar to each other, where ν(s, a) is the expectation of the returns with stationary distribution corrections, whereas y(s, a) is only the return from a single rollout.
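For concreteness, the following tiny sketch evaluates both closed form dual solutions from a batch of per-step corrected returns; the numeric values are placeholders used only for illustration.

```python
import numpy as np

# Placeholder batch of estimated ratios omega(s, a) and rewards r(s, a).
omega = np.array([0.8, 1.2, 0.5, 1.5])
r     = np.array([1.0, 0.2, -0.3, 0.7])
IS    = omega * r                       # IS(omega, pi_theta) = omega * r

nu = IS.mean()                          # closed form from eq. (47): nu = E[IS]
y  = 0.5 * IS                           # closed form from eq. (48): y(s, a) = IS / 2
print(nu, y)
```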
C APPENDIX : MONOTONIC PERFORMANCE IMPROVEMENT GUARANTEES UNDER VARIANCE REGULARIZATION
We provide theoretical analysis and performance improvement bounds for our proposed variance constrained policy optimization approach. Following from (Kakade & Langford, 2002; Schulman et al., 2015; Achiam et al., 2017), we extend existing performance improvement guarantees based on the stationary state-action distributions instead of only considering the divergence between the current policy and the old policy. We show that existing conservative updates in algorithms (Schulman et al., 2015) can be considered for both state visitation distributions and the action distributions, as similarly pointed out by (Achiam et al., 2017). We can then adapt this for the variance constraints instead of the divergence constraints. According to the performance difference lemma (Kakade & Langford, 2002), we have, for all policies π and π′ :
J(π′) − J(π) = E_{s∼d_π′, a∼π′}[ A^π(s, a) ]   (49)
which implies that maximizing equation 49 will lead to an improved policy π′ with policy improvement guarantees over the previous policy π. We can write the advantage function with variance augmented value functions as :
A^π_λ = Q^π_λ(s, a) − V^π_λ(s) = E_{s′∼P}[ r(s, a) − λ( r(s, a) − J(π) )^2 + γ V^π_λ(s′) − V^π_λ(s) ]
However, equation 49 is often difficult to maximize directly, since it additionally requires samples from π′ and d_π′, and often a surrogate objective is instead proposed by (Kakade & Langford, 2002). Following (Schulman et al., 2015), we can therefore obtain a bound for the performance difference based on the variance regularized advantage function :
J(π′) ≥ J(π) + E_{s∼d_π(s), a∼π′(a|s)}[ A^π_λ(s, a) ]   (50)
where we have the augmented rewards for the advantage function, and by following Fenchel duality for the variance, can avoid policy dependent reward functions. Otherwise, we have the augmented rewards for value functions as r̃(s, a) = r(s, a) − λ( r(s, a) − J(π) )^2. This however suggests that the performance difference does not hold without proper assumptions (Bisi et al., 2019). We can therefore obtain a monotonic improvement guarantee by considering the KL divergence between policies :
L_π(π′) = J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ]   (51)
which ignores the changes in the state distribution d_π′ due to the improved policy π′. (Schulman et al., 2015) optimizes the surrogate objective L_π(π′) while ensuring that the new policy π′ stays close to the current policy π, by imposing a KL constraint ( E_{s∼d_π}[ D_KL(π′(·|s) || π(·|s)) ] ≤ δ ). The performance difference bound, based on the constraint between π and π′ as in TRPO (Schulman et al., 2015), is given by :
Lemma 4. (Performance difference lemma in (Schulman et al., 2015)) With α = D^max_TV = max_s D_TV(π, π′), we have
J(π′) ≥ L_π(π′) − ( 4γε / (1−γ)^2 ) ( D^max_TV(π′||π) )^2   (52)
where ε = max_{s,a} |A^π(s, a)|.
The performance improvement bound in (Schulman et al., 2015) can further be written in terms of the KL divergence by following the relationship between total variation (TV) and KL divergence, which follows from Pinsker's inequality, D_TV(p||q)^2 ≤ D_KL(p||q), giving the following improvement bound :
J(π′) ≥ L_π(π′) − ( 4γε / (1−γ)^2 ) D_KL(π′||π)   (53)
We thus have a performance difference bound in terms of the state distribution shift between d_π′ and d_π. This justifies that L_π(π′) is a sensible lower bound to J(π′) as long as the total variation distance between d_π′ and d_π is small, which ensures that the policies π′ and π stay close to each other. Finally, following from (Achiam et al., 2017), we obtain the following lower bound, which satisfies policy improvement guarantees :
J(π′) ≥ L_π(π′) − ( 2γε^π / (1−γ) ) E_{s∼d_π}[ D_TV(π′(·|s) || π(·|s)) ]   (54)
Equations 53 and 54 assume that there is no state distribution shift between π′ and π. However, if we explicitly account for the change in the state distributions d_π′ and d_π due to π′ and π respectively, then we have the following performance improvement bound :
Lemma 5. For all policies π′ and π, we have the performance improvement bound based on the total variation of the state-action distributions d_π′ and d_π :
J(π′) ≥ L_π(π′) − ε^π D_TV(d_π′ || d_π)   (55)
where ε^π = max_s |E_{a∼π′(·|s)}[ A^π(s, a) ]|,
which can be further written in terms of the surrogate objective L_π(π′) as :
J(π′) ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − ε^π D_TV(d_π′ || d_π) = L_π(π′) − ε^π D_TV(d_π′ || d_π)   (56)
C.1 PROOF OF THEOREM 1 : POLICY IMPROVEMENT BOUND WITH VARIANCE REGULARIZATION
Proof. We provide derivation for theorem 1. Recall that for all policies π′ and π, and corresponding state visitation distributions dπ′ and dπ , we can obtain the performance improvement bound in terms of the variance of state-action distribution corrections
J(π′) − J(π) ≥ E_{s∼d_π, a∼π′}[ A^π(s, a) ] − Var_{s∼d_π, a∼π}[ f(s, a) ]   (57)
where f(s, a) is the dual function class for the divergence between d_π′(s, a) and d_π(s, a). Following from Pinsker's inequality, the performance difference lemma written in terms of the state visitation distributions can be given by :
J(π′) ≥ L_π(π′) − ε^π D_TV(d_π′ || d_π) ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − ε^π D_TV(d_π′ || d_π)
      ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − ε^π √( D_KL(d_π′ || d_π) )   (58)
Following from (Schulman et al., 2015), we can alternately write this as follows, where we further apply the variational form of the TV distance :
J(π′) ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · E_{s∼d_π}[ D_TV(d_π′ || d_π)^2 ]
 = J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · E_{s∼d_π}[ ( max_f { E_{s∼d_π′, a∼π}[ f(s, a) ] − E_{s∼d_π, a∼π}[ f(s, a) ] } )^2 ]
 ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · max_f E_{s∼d_π}[ ( E_{s∼d_π′, a∼π}[ f(s, a) ] − E_{s∼d_π, a∼π}[ f(s, a) ] )^2 ]
 = J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · max_f { ( E_{s∼d_π, a∼π}[ f(s, a) ] − E_{s∼d_π, a∼π}[ E_{s∼d_π, a∼π}[ f(s, a) ] ] )^2 }
 = J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · max_f Var_{s∼d_π, a∼π}[ f(s, a) ]   (59)
Therefore the policy improvement bound depends on maximizing the variational representation f(s, a) of the f-divergence to guarantee improvements from J(π) to J(π′). This leads to the stated result in Theorem 1.
D APPENDIX : LOWER BOUND OBJECTIVE WITH VARIANCE REGULARIZATION
D.1 PROOF OF LEMMA 3
Recalling lemma 3, the proof follows from (Metelli et al., 2018). We extend this to marginalized importance weighting, and include it here for completeness. Note that compared to importance weighting, which leads to an unbiased estimator as in (Metelli et al., 2018), correcting for the state-action occupancy measures leads to a biased estimator, due to the approximation ω̂_{π/D}. However, for our analysis, we only require a lower bound objective, and therefore do not provide a bias-variance analysis as in off-policy evaluation.
Var_{(s,a)∼d_D(s,a)}[ ω̂_{π/D} ] ≤ (1/N) ||r||_∞^2 F_2(d_π || d_D)   (60)
Proof. Assuming that state-action samples are drawn i.i.d. from the dataset D, we can write :
Var_{(s,a)∼d_D(s,a)}[ ω̂_{π/D}(s, a) ] ≤ (1/N) Var_{(s_1,a_1)∼d_D(s,a)}[ ( d_π(s_1, a_1)/d_D(s_1, a_1) ) · r(s_1, a_1) ]
 ≤ (1/N) E_{(s_1,a_1)∼d_D(s,a)}[ ( ( d_π(s_1, a_1)/d_D(s_1, a_1) ) · r(s_1, a_1) )^2 ]
 ≤ (1/N) ||r||_∞^2 E_{(s_1,a_1)∼d_D(s,a)}[ ( d_π(s_1, a_1)/d_D(s_1, a_1) )^2 ] = (1/N) ||r||_∞^2 F_2(d_π || d_D)   (61)
D.2 PROOF OF THEOREM 2:
First let us recall the stated theorem 2. By constraining the off-policy optimization problem with variance constraints, we have the following lower bound to the optimization objective with stationary state-action distribution corrections
J(π) ≥ E_{(s,a)∼d_D(s,a)}[ ( d_π(s, a)/d_D(s, a) ) r(s, a) ] − √( ((1−δ)/δ) · Var_{(s,a)∼d_µ(s,a)}[ ( d_π(s, a)/d_D(s, a) ) r(s, a) ] )   (62)
Proof. The proof for the lower bound objective can be obtained as follows. We first define a relationship between the variance and the α-divergence with α = 2, as also noted in (Metelli et al., 2018). Given batch samples D, and denoting the state-action distribution correction with ω_{π/D}(s, a), we can write from lemma 3 :
Var_{(s,a)∼d_D(s,a)}[ ω̂_{π/D} ] ≤ (1/N) ||r||_∞^2 F_2(d_π || d_D)   (63)
where the per-step estimator with state-action distribution corrections is given by ω_{π/D}(s, a) · r(s, a). Here, the reward function r(s, a) is bounded, and for any N > 0 the variance of the per-step reward estimator with distribution corrections can be upper bounded by the Renyi divergence (α = 2). Finally, following from (Metelli et al., 2018) and using Cantelli's inequality, we have with probability at least 1 − δ, where 0 < δ < 1 :
Pr( ω_{π/D} − J(π) ≥ λ ) ≤ 1 / ( 1 + λ^2 / Var_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] )   (64)
and by choosing δ = 1 / ( 1 + λ^2 / Var_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] ), we get that with probability at least 1 − δ :
J(π) = E_{(s,a)∼d_π(s,a)}[ r(s, a) ] ≥ E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] − √( ((1−δ)/δ) · Var_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] )   (65)
where we can further replace the variance term with α = 2 for the Renyi divergence to conclude the proof of the above theorem. We can further write the lower bound for the α-Renyi divergence, following the relation between the variance and the Renyi divergence for α = 2, as :
J(π) = E_{(s,a)∼d_π(s,a)}[ r(s, a) ] ≥ E_{(s,a)∼d_D(s,a)}[ ( d_π(s, a)/d_D(s, a) ) · r(s, a) ] − ||r||_∞ √( (1−δ) d_2(d_π || d_D) / (δN) )
This hints at the similarity between our proposed variance regularized objective and other related works including AlgaeDICE (Nachum et al., 2019b), which uses an f-divergence D_f(d_π || d_D) between stationary distributions.
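As a minimal illustration of how the bound of equation 65 could be evaluated from a batch, the sketch below plugs in placeholder ratio and reward arrays; the values, and the choice of δ, are assumptions made only for illustration.

```python
import numpy as np

delta = 0.1
omega_hat = np.array([0.9, 1.1, 0.7, 1.4, 1.0])   # placeholder estimates of d_pi / d_D
rewards   = np.array([1.0, 0.5, -0.2, 0.8, 0.3])  # placeholder logged rewards

w = omega_hat * rewards
J_hat = w.mean()                                   # plug-in estimate of E_{d_D}[omega * r]
lower_bound = J_hat - np.sqrt((1.0 - delta) / delta * w.var())
print(J_hat, lower_bound)                          # pessimistic estimate of J(pi)
```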
E APPENDIX : ADDITIONAL EXPERIMENTAL RESULTS
E.1 EXPERIMENTAL ABLATION STUDIES
In this section, we present additional results using state-action experience replay weightings on existing offline algorithms, and analysing the significance of our variance regularizer on likelihood corrected offline algorithms. Denoting ω(s, a) for the importance weighting of state-action occupancy measures based on samples in the experience replay buffer, we can modify existing offline algorithms to account for state-action distribution ratios.
The ablation experimental results using the Hopper control benchmark are summarized in figure 2. The same base BCQ algorithm is used with a modified objective for BCQ (Fujimoto et al., 2019) where the results for applying off-policy importance weights are denoted as “BCQ+I.W.”. We employ the same technique to obtain ω(s, a) for both the baseline and for adding variance regularization as described. The results suggest that adding the proposed per-step variance regularization scheme significantly outperforms just importance weighting the expected rewards for off-policy policy learning.
E.2 EXPERIMENTAL RESULTS IN CORRUPTED NOISE SETTINGS
We additionally consider a setting where the batch data is collected from a noisy environment, i.e., in settings with corrupted rewards, r → r + ε, where ε ∼ N(0, 1). Experimental results are presented in figures 1 and 3. From our results, we note that using OVR on top of BCQ (Fujimoto et al., 2019), we can achieve significantly better performance with variance minimization, especially when the agent is given sub-optimal demonstrations. We denote these as the medium setting (when the dataset was collected by a half-trained SAC policy) and the mixed behaviour logging setting (when the data logging policy is a mixture of a random and a SAC policy). This is also useful for practical scalability, since data collection from an expert policy is often expensive. We add noise to the dataset to examine the significance of OVR under a noisy corrupted dataset setting.
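For clarity, the reward corruption described above amounts to the following few lines, where the reward array is a placeholder standing in for the logged batch rewards:

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = np.array([1.0, 0.5, -0.2, 0.8])                 # placeholder logged rewards
noisy_rewards = rewards + rng.normal(0.0, 1.0, size=rewards.shape)   # r -> r + eps, eps ~ N(0, 1)
```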
E.3 EXPERIMENTAL RESULTS ON SAFETY BENCHMARK TASKS
Safety Benchmarks for Variance as Risk : We additionally consider safety benchmarks for control tasks, to analyse the significance of variance regularizer as a risk constraint in offline policy optimization algorithms. Our results are summarized in table 3.
E.4 DISCUSSIONS ON OFFLINE OFF-POLICY OPTIMIZATION WITH STATE-ACTION DISTRIBUTION RATIOS
In this section, we include several alternatives by which we can compute the stationary state-action distribution ratio, borrowing from recent works (Uehara & Jiang, 2019; Nachum et al., 2019a).
Off-Policy Optimization with Minimax Weight Learning (MWL) : We discuss other possible ways of optimizing the batch off-policy optimization objective while also estimating the state-action density ratio. Following from (Uehara & Jiang, 2019) we further modify the off-policy optimization part of the objective J(θ) in L(θ, λ) as a min-max objective, consisting of weight learning ωπ/D
Table 3: Results on the Safety-Gym environments (Ray et al.). We report the mean and S.D. of episodic returns and costs over five random seeds and 1 million timesteps. The goal of the agent is to maximize the episodic return, while minimizing the cost incurred.

              PointGoal1                    PointGoal2
              Reward        Cost            Reward        Cost
BCQ           43.1 ± 0.3    137.0 ± 3.6     32.7 ± 0.7    468.2 ± 9.1
BCQ+OVR       44.2 ± 0.3    127.1 ± 4.0     33.2 ± 0.7    453.9 ± 7.3

              PointButton1                  PointButton2
              Reward        Cost            Reward        Cost
BCQ           30.9 ± 2.2    330.8 ± 8.3     18.1 ± 1.1    321.6 ± 4.1
BCQ+OVR       30.7 ± 2.3    321.5 ± 6.8     19.6 ± 1.0    305.7 ± 6.1
and optimizing the resulting objective J(θ, ω). We further propose an overall policy optimization objective, where a single objective can be used for estimating the distribution ratio, evaluating the critic and optimizing the resulting objective. We can write the off-policy optimization objective with its equivalent starting state formulation, such that we have :
E_{d_D(s,a)}[ ω_{π_θ/D}(s, a) · r(s, a) ] = (1−γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[ Q^π(s_0, a_0) ]   (66)
Furthermore, following the Bellman equation, we expect to have E[ r(s, a) ] = E[ Q^π(s, a) − γ Q^π(s′, a′) ] :
E_{d_D(s,a)}[ ω_{π_θ/D}(s, a) · { Q^π(s, a) − γ Q^π(s′, a′) } ] = (1−γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[ Q^π(s_0, a_0) ]   (67)
We can therefore write the overall objective as :
J(ω, π_θ, Q) = E_{d_D(s,a)}[ ω_{π_θ/D}(s, a) · { Q^π(s, a) − γ Q^π(s′, a′) } ] − (1−γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[ Q^π(s_0, a_0) ]   (68)
This is similar to the MWL objective in (Uehara & Jiang, 2019), except that we instead consider the bias reduced estimator, such that accurate estimates of Q or ω will lead to reduced bias of the value function estimation. Furthermore, note that in the first part of the objective J(π_θ, ω, Q)^2, we can further use entropy regularization for smoothing the objective, since we can replace Q^π(s′, a′) in the target with a log-sum-exp and consider the conjugate of the entropy regularization term, similar to SBEED (Dai et al., 2018). This would therefore give the first part of the objective as an overall min-max optimization problem :
J(ω, π_θ) = E_{d_µ(s,a)}[ ω_{π_θ/D}(s, a) · { r(s, a) + γ Q^π(s′, a′) + τ log π(a|s) − Q^π(s, a) } ] + (1−γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[ Q^π(s_0, a_0) ]   (69)
such that from our overall constrained optimization objective for maximizing θ, we have turned it into a min-max objective for estimating the density ratios, estimating the value function and maximizing the policies :
ω*_{π/D}, Q*, π* = argmin_{ω,Q} argmax_π J(π_θ, ω, Q)^2   (70)
where the fixed point solution for the density ratio can be solved by minimizing the objective :
ω*_{π/D} = argmin_ω L(ω_{π/D}, Q)^2 = E_{d_µ(s,a)}[ { γ ω(s, a) · Q^π(s′, a′) − ω(s, a) Q^π(s, a) } + (1−γ) E_{β(s,a)}[ Q^π(s_0, a_0) ] ]   (71)
DualDICE : In contrast to MWL (Uehara & Jiang, 2019), DualDICE (Nachum et al., 2019a) introduces dual variables through the change of variables trick, and minimizes the Bellman residual of the dual variables ν(s, a) to estimate the ratio, such that :
ν*(s, a) − B^π ν*(s, a) = ω_{π/D}(s, a)   (72)
the solution to which can be achieved by optimizing the following objective :
min_ν L(ν) = (1/2) E_{d_D}[ ( ν − B^π ν )(s, a)^2 ] − (1−γ) E_{s_0, a_0 ∼ β(s,a)}[ ν(s_0, a_0) ]   (73)
Minimizing Divergence for Density Ratio Estimation : The distribution ratio can be estimated using an objective similar to GANs (Goodfellow et al., 2014; Ho & Ermon, 2016), as also proposed in (Kostrikov et al., 2019) :
max_h G(h) = E_{(s,a)∼d_D}[ log h(s, a) ] + E_{(s,a)∼d_π}[ log(1 − h(s, a)) ]   (74)
where h is the discriminator class, discriminating between samples from d_D and d_π. The optimal discriminator satisfies :
log h*(s, a) − log(1 − h*(s, a)) = log ( d_D(s, a) / d_π(s, a) )   (75)
The optimal solution of the discriminator is therefore equivalent to minimizing the divergence between d_π and d_D, since the KL divergence is given by :
−D_KL(d_π || d_D) = E_{(s,a)∼d_π}[ log ( d_D(s, a) / d_π(s, a) ) ]   (76)
Additionally, using the Donsker-Varadhan representation, we can further write the KL divergence term as :
−D_KL(d_π || d_D) = min_x  log E_{(s,a)∼d_D}[ exp x(s, a) ] − E_{(s,a)∼d_π}[ x(s, a) ]   (77)
such that now, instead of the discriminator class h, we learn the function class x, the optimal solution to which is equivalent to the distribution ratio plus a constant :
x*(s, a) = log ( d_π(s, a) / d_D(s, a) )   (78)
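As an illustration of the discriminator route to the ratio in equations 74-75, the sketch below fits a one-dimensional logistic-regression discriminator on synthetic samples; the data distributions, the feature map, and the training loop are toy assumptions, not the estimator used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
x_D  = rng.normal(0.0, 1.0, size=2000)     # synthetic samples standing in for d_D
x_pi = rng.normal(0.5, 1.0, size=2000)     # synthetic samples standing in for d_pi

w, b = 0.0, 0.0                            # discriminator h(x) = sigmoid(w*x + b)
lr = 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(500):
    h_D, h_pi = sigmoid(w * x_D + b), sigmoid(w * x_pi + b)
    # gradient ascent on G(h) = E_D[log h] + E_pi[log(1 - h)]
    grad_w = ((1 - h_D) * x_D).mean() - (h_pi * x_pi).mean()
    grad_b = (1 - h_D).mean() - h_pi.mean()
    w, b = w + lr * grad_w, b + lr * grad_b

h = sigmoid(w * x_pi + b)
ratio_pi_over_D = (1.0 - h) / h            # from eq. (75): log h - log(1-h) = log d_D/d_pi
print(ratio_pi_over_D[:5])
```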
However, note that both the GANs like objective | 1. What is the focus of the paper regarding offline policy optimization?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of theoretical motivation and algorithmic novelty?
3. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
4. What are the concerns regarding the technical steps and notation used in the paper?
5. Are there any suggestions for improving the empirical results and adding important baselines and ablations? | Review | Review
Summary:
This paper proposes a novel algorithm for offline policy optimization. The main idea is to prevent overestimation bias by regularizing against the variance of the importance weighted value estimate. There are two key modifications: (1) using an importance weight from the stationary distribution and (2) using Fenchel duality to introduce a min-max problem to avoid double sampling when estimating the gradient of the variance regularization term. The theory section motivates the use of variance regularization and the experiments show improvements over BCQ when adding the proposed variance regularization algorithm.
Strengths:
The paper provides theoretical motivation for variance regularization. Theorem 2 demonstrates that using variance regularization can be seen as optimizing a lower bound on the true value of a policy.
The algorithm provides a novel way to implement variance regularization. By introducing the ν variables, the paper proposes a way to get around double sampling issues in estimating the gradient of the regularizer.
Weaknesses:
There are several technical steps that I am not convinced by. I am not sure if these are mistakes or just statements that require more explanation:
In the algorithm, ν is defined by r̃ and r̃ is defined by ν in equation (7). This definition seems somewhat circular and the paper never clearly explains why this should work.
The continual updating of r̃ seems to make the problem nonstationary for the policy improvement subroutine that depends on the current iterate of the distribution ratio ω_ψ. The argument made in a remark on page 4 does not seem to sufficiently resolve this issue since even though ν has a closed form, that closed form depends on the distribution ratio which depends on the current policy.
The variance term that shows up in Theorem 1 does not seem to be what is being estimated by V_D in the algorithm. In light of this, what is the point of Theorem 1 vis a vis the proposed algorithm?
Theorem 2 shows that using the square root of the variance as a regularizer provides a lower bound, but the algorithm uses the variance. What is the explanation for this mismatch?
Notation is somewhat confusing and sloppy. For example, J is defined in several different ways, first in equation (1) and then in equation (3). This makes the rest of the paper confusing since it is sometimes unclear which J is being referred to. Another example is that V_D is defined as a function of π in equation (2) and then a function of ω, π in equation (3). Equation (4) just refers to V when I think it means V_D. Also in equation (4) the LHS has no s, a variables but the first two terms on the RHS are a function of s, a. Where do these s, a come from? Is there an expectation missing? In equation (5) it is not clear which ω is being referred to. Equation (6) overloads J once more. Equation (6) could also use a better explanation as to why this is the dual form of (5) rather than just asserting it. The definitions of ϵ_π, L_π are not included in Lemma 2, but are instead in the following paragraph. In equation (12) d_π is denoted by β_π′ for no apparent reason. In equation (13), f is supposedly a dual class of functions, is there a sup missing? In equation (15), ω̂_{π/D} is never explicitly defined. This sort of sloppiness pervades the paper and makes it often difficult to understand.
The writing is unclear. One major issue is that lemmas are presented with no context or direction. It would be helpful to preface each section with an explanation of where the section is going before presenting technical lemmas. This is a problem for both Lemma 2 and Lemma 3 where it is unclear to the reader what the point of the lemma is until much later in the section. This is also a problem for Lemma 1 which has its own subsection that does not seem to make any clear claim to connect to the thesis of the paper. The lemmas may be better suited in the appendix rather than the main text of the paper or just require more explanation.
Empirical results are not very convincing. The plots presented in the experimental section seem to show a slight but not large or consistent advantage for the proposed method. Perhaps more worrying, the only type of result reported are the final performance of the algorithms. There is no empirical indication as to how the algorithm is working or whether the variance regulation is indeed having the desired effect of reducing overestimation. Moreover, there is no indication that the Fenchel duality tricks are doing anything useful empirically.
Important ablations and baselines are missing. The authors do not include conservative Q learning (Kumar et al., 2020) as a baseline saying that it is too recent to compare. However, this paper came out in June and according to the ICLR reviewer guidelines we are only meant to consider work released in August or later as contemporaneous. So, CQL ought to be included as a baseline, and I think it generally outperforms the proposed methods. Additionally, while there is one ablation included in the appendix (adding importance weighting to BCQ), more ablations are needed. Specifically, since the main algorithmic innovation of the paper is not just using variance regularization, but computing gradients using the Fenchel duality max-min procedure, there should be ablations showing whether this part of the algorithm is indeed necessary and useful.
Recommendation:
I recommend rejection and gave the paper a score of 4. While there may be a good idea in there, I do not think the paper is ready to publish. A more clear and careful exposition of the algorithm as well as more rigorous theory and experiments are needed.
Additional feedback:
Typos:
The first sentence of the intro has a space before the period.
The references on the top of page 2 seem to be incorrect. "A." is the first initial of the first author, not their last name.
Throughout the paper there is usually a space preceding each colon. There should be no spaces before each colon.
After equation two "a as" should be "as a "
In the first sentence of section 3.4 "leads" should be "lead"
The paper sometimes refers to the proposed variance regularization as a "framework" (e.g. in the first sentence of the conclusion). It is not a framework, it is an algorithm. |
ICLR | Title
Offline Policy Optimization with Variance Regularization
Abstract
Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to mismatch between dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVR) can be used to augment any existing offline policy optimization algorithms. We show that the regularizer leads to a lower bound to the offline policy optimization objective, which can help avoid over-estimation errors, and explains the benefits of our approach across a range of continuous control domains when compared to existing algorithms.
1 INTRODUCTION
Offline batch reinforcement learning (RL) algorithms are key towards scaling up RL for real world applications, such as robotics (Levine et al., 2016) and medical problems. This is because offline RL provides the appealing ability for agents to learn from fixed datasets, similar to supervised learning, avoiding continual interaction with the environment, which could be problematic for safety and feasibility reasons. However, significant mismatch between the fixed collected data and the policy that the agent is considering can lead to high variance of value function estimates, a problem encountered by most off-policy RL algorithms (Precup et al., 2000). A complementary problem is that the value function can become overly optimistic in areas of state space that are outside the visited batch, leading the agent into data regions where its behavior is poor (Fujimoto et al., 2019). Recently there has been some progress in offline RL (Kumar et al., 2019; Wu et al., 2019b; Fujimoto et al., 2019), trying to tackle both of these problems.
In this work, we study the problem of offline policy optimization with variance minimization. To avoid overly optimistic value function estimates, we propose to learn value functions under variance constraints, leading to a pessimistic estimation, which can significantly help offline RL algorithms, especially under large distribution mismatch. We propose a framework for variance minimization in offline RL, such that the obtained estimates can be used to regularize the value function and enable more stable learning under different off-policy distributions.
We develop a novel approach for variance regularized offline actor-critic algorithms, which we call Offline Variance Regularizer (OVR). The key idea of OVR is to constrain the policy improvement step via variance regularized value function estimates. Our algorithmic framework avoids the double sampling issue that arises when computing gradients of variance estimates, by instead considering the variance of stationary distribution corrections with per-step rewards, and using the Fenchel transformation (Boyd & Vandenberghe, 2004) to formulate a minimax optimization objective. This allows minimizing variance constraints by instead optimizing dual variables, resulting in simply an augmented reward objective for variance regularized value functions.
We show that even with variance constraints, we can ensure policy improvement guarantees, where the regularized value function leads to a lower bound on the true value function, which mitigates the usual overestimation problems in batch RL. The use of Fenchel duality in computing the variance allows us to avoid double sampling, which has been a major bottleneck in scaling up variance-constrained actor-critic algorithms in prior work (A. & Ghavamzadeh, 2016; A. & Fu, 2018). Practically, our algorithm is easy to implement, since it simply involves augmenting the rewards with the dual variables only, such that the regularized value function can be implemented on top of any existing offline policy optimization algorithms. We evaluate our algorithm on existing offline benchmark tasks based on continuous control domains. Our empirical results demonstrate that the proposed variance regularization approach is particularly useful when the batch dataset is gathered at random, or when it is very different from the data distributions encountered during training.
2 PRELIMINARIES AND BACKGROUND
We consider an infinite horizon MDP (S, A, P, γ) where S is the set of states, A is the set of actions, P is the transition dynamics and γ is the discount factor. The goal of reinforcement learning is to maximize the expected return J(π) = E_{s∼d_β}[ V^π(s) ], where V^π(s) is the value function V^π(s) = E[ ∑_{t=0}^{∞} γ^t r(s_t, a_t) | s_0 = s ], and β is the initial state distribution. Considering parameterized policies π_θ(a|s), the goal is to maximize the returns by following the policy gradient (Sutton et al., 1999), based on the performance metric defined as :
J(π_θ) = E_{s_0∼ρ, a_0∼π(s_0)}[ Q^{π_θ}(s_0, a_0) ] = E_{(s,a)∼d_{π_θ}(s,a)}[ r(s, a) ]   (1)
where Q^π(s, a) is the state-action value function, since V^π(s) = ∑_a π(a|s) Q^π(s, a). The policy optimization objective can be equivalently written in terms of the normalized discounted occupancy measure under the current policy π_θ, where d_π(s, a) is the state-action occupancy measure, such that the normalized state-action visitation distribution under policy π is defined as : d_π(s, a) = (1−γ) ∑_{t=0}^{∞} γ^t P(s_t = s, a_t = a | s_0 ∼ β, a ∼ π(s_0)). The equality in equation 1 holds and can be equivalently written based on the linear programming (LP) formulation in RL (see (Puterman, 1994; Nachum & Dai, 2020) for more details). In this work, we consider the off-policy learning problem under a fixed dataset D which contains (s, a, r, s′) tuples under a known behaviour policy µ(a|s). Under the off-policy setting, importance sampling (Precup et al., 2000) is often used to reweight the trajectory under the behaviour data collecting policy, so as to get unbiased estimates of the expected returns. At each time step, the importance sampling correction π(a_t|s_t)/µ(a_t|s_t) is used to compute the expected return under the entire trajectory as J(π) = (1−γ) E_{(s,a)∼d_µ(s,a)}[ ∑_{t=0}^{T} γ^t r(s_t, a_t) ( ∏_{t=1}^{T} π(a_t|s_t)/µ(a_t|s_t) ) ]. Recent works (Fujimoto et al., 2019) have demonstrated that instead of importance sampling corrections, maximizing value functions directly for deterministic or reparameterized policy gradients (Lillicrap et al., 2016; Fujimoto et al., 2018) allows learning under fixed datasets, by addressing the over-estimation problem, by maximizing objectives of the form max_θ E_{s∼D}[ Q^{π_θ}(s, π_θ(s)) ].
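As a small illustration of the importance-sampling reweighting used above, the sketch below computes a per-decision importance-sampled return for one logged trajectory; the probabilities, rewards and discount are placeholder values assumed only for illustration.

```python
import numpy as np

gamma = 0.99
r       = np.array([1.0, 0.0, 0.5, 0.2])     # logged rewards r(s_t, a_t)
pi_prob = np.array([0.6, 0.7, 0.5, 0.9])     # pi(a_t | s_t) under the target policy
mu_prob = np.array([0.5, 0.5, 0.5, 0.5])     # mu(a_t | s_t) under the behaviour policy

rho = np.cumprod(pi_prob / mu_prob)          # cumulative ratios rho_{0:t}
discounts = gamma ** np.arange(len(r))
corrected_return = np.sum(discounts * r * rho)   # per-decision IS estimate of the return
print(corrected_return)
```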
3 VARIANCE REGULARIZATION VIA DUALITY IN OFFLINE POLICY OPTIMIZATION
In this section, we first present our approach based on the variance of stationary distribution corrections, compared to importance re-weighting of episodic returns, in section 3.1. We then present a derivation of our approach based on Fenchel duality on the variance, to avoid the double sampling issue, leading to a variance regularized offline optimization objective in section 3.2. Finally, we present our algorithm in Algorithm 1, where the proposed regularizer can be used in any existing offline RL algorithm.
3.1 VARIANCE OF REWARDS WITH STATIONARY DISTRIBUTION CORRECTIONS
In this work, we consider the variance of rewards under occupancy measures in offline policy optimization. Let us denote the returns as D^π = ∑_{t=0}^{T} γ^t r(s_t, a_t), such that the value function is V^π = E_π[ D^π ]. The 1-step importance sampling ratio is ρ_t = π(a_t|s_t)/µ(a_t|s_t), and the T-step ratio can be denoted ρ_{1:T} = ∏_{t=1}^{T} ρ_t. Considering per-decision importance sampling (PDIS) (Precup et al., 2000), the returns can be similarly written as D^π = ∑_{t=0}^{T} γ^t r_t ρ_{0:t}. The variance of episodic returns, which we denote by V_P(π), with off-policy importance sampling corrections can be written as :
V_P(π) = E_{s∼β, a∼µ(·|s), s′∼P(·|s,a)}[ ( D^π(s, a) − J(π) )^2 ].
Instead of importance sampling, several recent works have proposed marginalized importance sampling with stationary state-action distribution corrections (Liu et al., 2018; Nachum et al., 2019a; Zhang et al., 2020; Uehara & Jiang, 2019), which can lead to lower variance estimators at the cost of introducing bias. Denoting the stationary distribution ratios as ω(s, a) = d_π(s, a)/d_µ(s, a), the returns can be written as W^π(s, a) = ω(s, a) r(s, a). The variance of marginalized IS is :
V_D(π) = E_{(s,a)∼d_µ(s,a)}[ ( W^π(s, a) − J(π) )^2 ] = E_{(s,a)∼d_µ(s,a)}[ W^π(s, a)^2 ] − E_{(s,a)∼d_µ(s,a)}[ W^π(s, a) ]^2   (2)
Our key contribution is to first consider the variance of marginalized IS, V_D(π), itself as a risk constraint in the offline batch optimization setting. We show that constraining the offline policy optimization objective with the variance of marginalized IS, and using the Fenchel-Legendre transformation on V_D(π), can help avoid the well-known double sampling issue in variance risk constrained RL (for more details on how to compute the gradient of the variance term, see appendix B). We emphasize that the variance here is solely based on returns with occupancy measures, and we do not consider the variance due to the inherent stochasticity of the MDP dynamics.
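As a concrete reading of equation 2 (illustrative only, with placeholder arrays standing in for batch samples and estimated ratios), the plug-in estimate of V_D(π) from a batch is simply:

```python
import numpy as np

omega = np.array([0.9, 1.2, 0.6, 1.3])   # placeholder estimates of d_pi(s,a) / d_mu(s,a)
r     = np.array([1.0, 0.3, -0.5, 0.7])  # placeholder batch rewards

W = omega * r                             # W^pi(s, a) = omega(s, a) * r(s, a)
V_D = (W ** 2).mean() - W.mean() ** 2     # E[W^2] - (E[W])^2, as in eq. (2)
print(V_D)
```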
3.2 VARIANCE REGULARIZED OFFLINE MAX-RETURN OBJECTIVE
We consider the variance regularized off-policy max return objective with stationary distribution corrections ωπ/D (which we denote ω for short for clarity) in the offline fixed dataset D setting:
max_{π_θ}  J(π_θ) := E_{s∼D}[ Q^{π_θ}(s, π_θ(s)) ] − λ V_D(ω, π_θ)   (3)
where λ ≥ 0 allows for the trade-off between offline policy optimization and variance regularization (or equivalently variance risk minimization). The max-return objective under Qπθ (s, a) has been considered in prior works in offline policy optimization (Fujimoto et al., 2019; Kumar et al., 2019). We show that this form of regularizer encourages variance minimization in offline policy optimization, especially when there is a large data distribution mismatch between the fixed dataset D and induced data distribution under policy πθ.
3.3 VARIANCE REGULARIZATION VIA FENCHEL DUALITY
At first, equation 3 seems to be difficult to optimize, especially for minimizing the variance regularization w.r.t θ. This is because finding the gradient of V(ω, π_θ) would lead to the double sampling issue, since it contains the square of the expectation term. The key contribution of OVR is to use the Fenchel duality trick on the second term of the variance expression in equation 2, for regularizing the policy optimization objective with the variance of marginalized importance sampling. Applying Fenchel duality, x^2 = max_y (2xy − y^2), to the second term of the variance expression, we can transform the variance minimization problem into an equivalent maximization problem, by introducing the dual variables ν(s, a). We have the Fenchel conjugate of the variance term as :
V(ω, π_θ) = max_ν { −(1/2) ν(s, a)^2 + ν(s, a) ω(s, a) r(s, a) + E_{(s,a)∼d_D}[ ω(s, a) r(s, a)^2 ] }
          = max_ν E_{(s,a)∼d_D}[ −(1/2) ν(s, a)^2 + ν(s, a) ω(s, a) r(s, a) + ω(s, a) r(s, a)^2 ]   (4)
Regularizing the policy optimization objective with variance under the Fenchel transformation, we therefore have the overall max-min optimization objective, explicitly written as :
max_θ min_ν  J(π_θ, ν) := E_{s∼D}[ Q^{π_θ}(s, π_θ(s)) ] − λ E_{(s,a)∼d_D}[ ( −(1/2) ν^2 + ν·ω·r + ω·r^2 )(s, a) ]   (5)
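As a quick numerical check of the scalar Fenchel identity x^2 = max_y (2xy − y^2) underlying equations 4-5 (toy values, purely illustrative):

```python
import numpy as np

x = 1.7
ys = np.linspace(-10, 10, 100001)
print(x ** 2, np.max(2 * x * ys - ys ** 2))   # both should be (close to) 2.89
```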
3.4 AUGMENTED REWARD OBJECTIVE WITH VARIANCE REGULARIZATION
In this section, we explain the key steps that lead to the policy improvement step being an augmented variance regularized reward objective. The variance minimization step involves estimating the stationary distribution ratio (Nachum et al., 2019a), and then simply computing the closed form solution for the dual variables. Fixing the dual variables ν, to update π_θ, note that this leads to a standard maximum return objective in the dual form, which can be equivalently solved in the primal form, using augmented rewards. This is because we can write the above in the dual form as :
J(π_θ, ν, ω) := E_{(s,a)∼d_D(s,a)}[ ω(s, a) · r(s, a) − λ ( −(1/2) ν^2 + ν·ω·r + ω·r^2 )(s, a) ]
             = E_{(s,a)∼d_D(s,a)}[ ω(s, a) · ( r − λ·ν·r − λ·r^2 )(s, a) + (λ/2) ν(s, a)^2 ]
             = E_{(s,a)∼d_D(s,a)}[ ω(s, a) · r̃(s, a) + (λ/2) ν(s, a)^2 ]   (6)
where we denote the augmented rewards as : r̃(s, a) ≡ [r − λ · ν · r − λ · r2](s, a) (7)
The policy improvement step can either be achieved by directly solving equation 6 or by considering the primal form of the objective with respect to Q^{π_θ}(s, π_θ) as in (Fujimoto et al., 2019; Kumar et al., 2019). However, solving equation 6 directly can be troublesome, since the policy gradient step involves finding the gradient w.r.t ω(s, a) = d_{π_θ}(s, a)/d_D(s, a) too, where the distribution ratio depends on d_{π_θ}(s, a). This means that the gradient w.r.t θ would require finding the gradient w.r.t the normalized discounted occupancy measure, i.e., ∇_θ d_{π_θ}(s). Instead, it is therefore easier to consider the augmented reward objective, using r̃(s, a) as in equation 7 in any existing offline policy optimization algorithm, where we have the variance regularized value function Q̃^{π_θ}(s, a).
Note that as highlighted in (Sobel, 1982), the variance of returns follows a Bellman-like equation. Following this, (Bisi et al., 2019) also pointed to a Bellman-like solution for variance w.r.t occupancy measures. Considering variance of the form in equation 2, and the Bellman-like equation for variance, we can write the variance recursively as a Bellman equation:
VπD(s, a) = ( r(s, a)− J(π) )2 + γEs′∼P,a′∼π′(·|s′) [ VπD(s′, a′) ] (8)
Since in our objective, we augment the policy improvement step with the variance regularization term, we can write the augmented value function as Q^π_λ(s, a) := Q^π(s, a) − λ V^π_D(s, a). This suggests we can modify existing policy optimization algorithms with augmented rewards on the value function.
Remark : Applying the Fenchel transformation to the variance regularized objective, however, at first glance seems to make the augmented rewards dependent on the policy itself, since r̃(s, a) depends on the dual variables ν(s, a) as well. This can make the rewards non-stationary, so that the policy maximization step cannot be solved directly via the maximum return objective. However, as we discuss next, the dual variables for minimizing the variance term have a closed form solution ν(s, a), and thereby do not lead to any non-stationarity in the rewards, due to the alternating minimization and maximization steps.
Variance Minimization Step : Fixing the policy πθ, the dual variables ν can be obtained using closed form solution given by ν(s, a) = ω(s, a) · r̃(s, a). Note that directly optimizing for the target policies using batch data, however, requires a fixed point estimate of the stationary distribution corrections, which can be achieved using existing algorithms (Nachum et al., 2019a; Liu et al., 2018). Solving the optimization objective additionally requires estimating the state-action distribution ratio, ω(s, a) = dπ(s,a)dD(s,a) . Recently, several works have proposed estimating the stationary distribution ratio, mostly for the off-policy evaluation case in infinite horizon setting (Zhang et al., 2020; Uehara & Jiang, 2019). We include a detailed discussion of this in appendix E.4.
Algorithm : Our proposed variance regularization approach with returns under stationary distribution corrections for offline optimization can be built on top of any existing batch off-policy optimization algorithm. We summarize our contributions in Algorithm 1. Implementing our algorithm requires estimating the state-action distribution ratio, followed by the closed form estimate of the dual variable ν. The augmented stationary reward with the dual variables can then be used to compute the regularized value function Q^π_λ(s, a). The policy improvement step involves maximizing the variance regularized value function, e.g. with BCQ (Fujimoto et al., 2019).
4 THEORETICAL ANALYSIS
In this section, we provide theoretical analysis of offline policy optimization algorithms in terms of policy improvement guarantees under fixed dataset D. Following then, we demonstrate that using the variance regularizer leads to a lower bound for our policy optimization objective, which leads to a pessimistic exploitation approach for offline algorithms.
Algorithm 1 Offline Variance Regularizer
Initialize critic Q_φ, policy π_θ, network ω_ψ and regularization weighting λ; learning rate η
for t = 1 to T do
    Estimate distribution ratio ω_ψ(s, a) using any existing DICE algorithm
    Estimate the dual variable ν(s, a) = ω_ψ(s, a) · r̃(s, a)
    Calculate augmented rewards r̃(s, a) using equation 7
    Policy improvement step using any offline policy optimization algorithm with augmented rewards r̃(s, a) : θ_t = θ_{t−1} + η ∇_θ J(θ, φ, ψ, ν)
end for
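The sketch below mirrors the order of operations in Algorithm 1 at a high level. The DICE ratio estimator and the offline policy improvement step are stubbed out as hypothetical placeholders (any existing implementation, e.g. DualDICE or BCQ, could be plugged in), and initializing r̃ with the raw rewards on the first iteration is our own assumption, since ν is computed from the previous iterate of r̃.

```python
import numpy as np

def estimate_distribution_ratio(batch, policy):
    # placeholder for a DICE-style estimator of omega(s, a) = d_pi / d_D
    return np.ones(len(batch["r"]))

def offline_policy_improvement(policy, batch, augmented_rewards, lr):
    # placeholder for one step of any offline policy optimization algorithm (e.g. BCQ)
    return policy

def ovr_step(policy, batch, r_tilde, lam, lr):
    omega = estimate_distribution_ratio(batch, policy)   # step 1: omega_psi(s, a)
    nu = omega * r_tilde                                  # step 2: closed-form dual variable
    r = batch["r"]
    r_tilde = r - lam * nu * r - lam * r ** 2             # step 3: augmented rewards, eq. (7)
    policy = offline_policy_improvement(policy, batch, r_tilde, lr)   # step 4
    return policy, r_tilde

batch = {"r": np.array([1.0, 0.5, -0.2, 0.8])}            # placeholder batch rewards
policy, r_tilde = None, batch["r"].copy()                  # initialize r_tilde with raw rewards
for t in range(10):
    policy, r_tilde = ovr_step(policy, batch, r_tilde, lam=0.1, lr=1e-3)
```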
4.1 VARIANCE OF MARGINALIZED IMPORTANCE SAMPLING AND IMPORTANCE SAMPLING
We first show in lemma 1 that the variance of rewards under stationary distribution corrections can similarly be upper bounded based on the variance of importance sampling corrections. We emphasize that in the off-policy setting under distribution corrections, the variance is due to the estimation of the density ratio compared to the importance sampling corrections. Lemma 1. The following inequality holds between the variance of per-step rewards under stationary distribution corrections, denoted by VD(π) and the variance of episodic returns with importance sampling corrections VP(π)
V_P(π) ≤ V_D(π) / (1−γ)^2   (9)
The proof for this and discussions on the variance of episodic returns compared to per-step rewards under occupancy measures is provided in the appendix B.1.
4.2 POLICY IMPROVEMENT BOUND UNDER VARIANCE REGULARIZATION
In this section, we establish performance improvement guarantees (Kakade & Langford, 2002) for the variance regularized value function for policy optimization. Let us first recall that the performance improvement can be written in terms of the total variation divergence D_TV between state distributions (Touati et al., 2020) (for more discussions on the performance bounds, see appendix C).
Lemma 2. For all policies π′ and π, we have the performance improvement bound based on the total variation of the state-action distributions d_π′ and d_π :
J(π′) ≥ L_π(π′) − ε^π D_TV(d_π′ || d_π)   (10)
where ε^π = max_s |E_{a∼π′(·|s)}[ A^π(s, a) ]|, and L_π(π′) = J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ]. For a detailed proof and discussions, see appendix C. Instead of considering the divergence between state visitation distributions, consider having access to both state-action samples generated from the environment. To avoid importance sampling corrections we can further consider the bound on the objective based on state-action visitation distributions, where we have an upper bound following from (Nguyen et al., 2010) : D_TV(d_π′(s) || d_π(s)) ≤ D_TV(d_π′(s, a) || d_π(s, a)). Following Pinsker's inequality, we have:
J(π′) ≥ J(π) + E_{s∼d_π(s), a∼π′(·|s)}[ A^π(s, a) ] − ε^π E_{(s,a)∼d_π(s,a)}[ √( D_KL(d_π′(s, a) || d_π(s, a)) ) ]   (11)
Furthermore, we can exploit the relation between the KL divergence, the total variation (TV) distance, and the variance through the variational representation of divergence measures. Recall that the total variation distance between distributions p and q is given by D_TV(p, q) = (1/2) Σ_x |p(x) − q(x)|. We can use the variational representation of the divergence measure. Denoting d_{π′}(s, a) = β_{π′}(s, a) and d_π(s, a) = β_π(s, a), we have
D_TV(β_{π′} || β_π) = sup_{f : S×A→R} [ E_{(s,a)∼β_{π′}}[f(s, a)] − E_{(s,a)∼β_π}[φ* ∘ f(s, a)] ]     (12)
where φ* is the convex conjugate of φ and f is the dual function class based on the variational representation of the divergence. Similar relations with the variational representations of f-divergences have also been considered in (Nachum et al., 2019b; Touati et al., 2020). We can finally obtain a bound for the policy improvement following this relation, in terms of the per-step variance:
Theorem 1. For all policies π and π′, and the corresponding state-action visitation distributions d_{π′} and d_π, we can obtain the performance improvement bound in terms of the variance of rewards under state-action occupancy measures:
J(π′) − J(π) ≥ E_{s∼d_π(s), a∼π′(a|s)}[A^π(s, a)] − Var_{(s,a)∼d_π(s,a)}[f(s, a)]     (13)
where f(s, a) is the dual function class from the variational representation of variance.
Proof. For detailed proof, see appendix C.1.
4.3 LOWER BOUND OBJECTIVE WITH VARIANCE REGULARIZATION
In this section, we show that augmenting the policy optimization objective with a variance regularizer leads to a lower bound to the original optimization objective J(π_θ). Following (Metelli et al., 2018), we first note that the variance of marginalized importance weighting with distribution corrections can be written in terms of the α-Renyi divergence. Let p and q be two probability measures, such that the Renyi divergence is F_α = (1/α) log Σ_x q(x) (p(x)/q(x))^α. When α = 1, this leads to the well-known KL divergence, F_1(p||q) = F_KL(p||q). Let us denote the state-action occupancy measures under π and the dataset D as d_π and d_D. The variance of the state-action distribution ratios is Var_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)]. When α = 2 for the Renyi divergence, we have:
Var_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a)] = F_2(d_π || d_D) − 1     (14)
Following from (Metelli et al., 2018), and extending results from importance sampling ρ to marginalized importance sampling ω_{π/D}, we provide the following result that bounds the variance of the approximated density ratio ω̂_{π/D} in terms of the Renyi divergence:
Lemma 3. Assuming that the rewards of the MDP are bounded by a finite constant, ||r||∞ ≤ Rmax. Given random variable samples (s, a) ∼ dD(s, a) from dataset D, for any N > 0, the variance of marginalized importance weighting can be upper bounded as :
Var_{(s,a)∼d_D(s,a)}[ω̂_{π/D}(s, a)] ≤ (1/N) ||r||²_∞ F_2(d_π || d_D)     (15)
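Before moving on, the identity in equation 14 that underlies this bound can be verified numerically on a toy discrete space. A minimal sketch with random, purely illustrative distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_pi = rng.dirichlet(np.ones(10))   # toy occupancy measure under pi
d_D = rng.dirichlet(np.ones(10))    # toy dataset occupancy measure
w = d_pi / d_D                      # exact distribution ratio omega

variance = np.sum(d_D * w ** 2) - np.sum(d_D * w) ** 2   # Var_{(s,a)~d_D}[omega]
renyi_2 = np.sum(d_D * w ** 2)                           # exponentiated 2-Renyi term F_2
assert np.isclose(variance, renyi_2 - 1.0)               # equation 14
```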
See appendix D.1 for more details. Following this, our goal is to derive a lower bound objective for our off-policy optimization problem. Concentration inequalities have previously been studied for both off-policy evaluation (Thomas et al., 2015a) and optimization (Thomas et al., 2015b). In our case, we can adapt the concentration bound derived from Cantelli's inequality and obtain the following result based on the variance of marginalized importance sampling. Under state-action distribution corrections, we have the following lower bound to the off-policy policy optimization objective with stationary state-action distribution corrections:
Theorem 2. Given state-action occupancy measures dπ and dD, and assuming bounded reward functions, for any 0 < δ ≤ 1 and N > 0, we have with probability at least 1− δ that :
J(π) ≥ E_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a) · r(s, a)] − √( ((1 − δ)/δ) · Var_{(s,a)∼d_D(s,a)}[ω_{π/D}(s, a) · r(s, a)] )     (16)
Equation 16 gives a lower bound on the policy optimization objective under risk-sensitive variance constraints. The key point of theorem 2 is that, given off-policy batch data collected with a behaviour policy µ(a|s), we are indeed optimizing a lower bound to the policy optimization objective, regularized with a variance term that minimizes the variance in batch off-policy learning.
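Given a batch of transitions with estimated distribution ratios, the lower bound in equation 16 can be estimated directly from samples. A minimal sketch; the array names are illustrative:

```python
import numpy as np

def ovr_lower_bound(rewards, ratios, delta=0.1):
    """Empirical estimate of the lower bound in equation 16 from batch samples."""
    weighted = ratios * rewards            # w_{pi/D}(s, a) * r(s, a) per sample
    return weighted.mean() - np.sqrt((1.0 - delta) / delta * weighted.var())
```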
5 EXPERIMENTAL RESULTS ON BENCHMARK OFFLINE CONTROL TASKS
Experimental Setup: We demonstrate the significance of the variance regularizer on a range of continuous control domains (Todorov et al., 2012) based on fixed offline datasets from (Fu et al., 2020), which is a standard benchmark for offline algorithms. To demonstrate the significance of our variance regularizer OVR, we mainly use it on top of the BCQ algorithm and compare it with other existing baselines, using the benchmark D4RL (Fu et al., 2020) offline datasets for different tasks and off-policy distributions. Experimental results are given in table 1.
Performance on Optimal and Medium Quality Datasets: We first evaluate the performance of OVR when the dataset consists of optimal or mediocre logging policy data. We collected the dataset using a fully (expert) or partially (medium) trained SAC policy. We build our algorithm OVR on top of BCQ, denoted by BCQ + VAR. Note that the OVR algorithm can be agnostic to the behaviour policy for computing both the distribution ratio (Nachum et al., 2019a) and the variance. We observe that even though performance is only marginally improved with OVR under the expert setting, since the demonstrations are themselves near-optimal, we achieve significant improvements under the medium dataset regime. This is because OVR plays a more important role when there is larger variance due to distribution mismatch between the data logging and target policy distributions. Experimental results are shown in the first two columns of figure 1.
Performance on Random and Mixed Datasets: We then evaluate the performance on random datasets, i.e., the worst-case setup where the data logging policy is a random policy, as shown in the last two columns of figure 1. As expected, we observe no improvements at all, and even existing baselines such as BCQ (Fujimoto et al., 2019) can perform poorly under the random dataset setting. When we collect data using a mixture of random and mediocre policies, denoted by mixed, performance is again improved by OVR on top of BCQ, especially for the Hopper and Walker control domains. We provide additional experimental results and ablation studies in appendix E.1.
6 RELATED WORKS
We now discuss related work in offline RL, for both evaluation and optimization, and its relation to variance and risk sensitive algorithms. We include more discussion of related work in appendix A.1. In off-policy evaluation, per-step importance sampling (Precup et al., 2000; 2001) has previously been used for off-policy evaluation function estimators. However, this leads to high variance estimators, and recent works proposed using marginalized importance sampling for estimating stationary state-action distribution ratios (Liu et al., 2018; Nachum et al., 2019a; Zhang et al., 2019), reducing variance at the cost of additional bias. In this work, we build on the variance of marginalized IS to develop a variance risk sensitive offline policy optimization algorithm. This is in contrast to prior works on variance constrained online actor-critic (A. & Ghavamzadeh, 2016; Chow et al., 2017; Castro et al., 2012) and relates to constrained policy optimization methods (Achiam et al., 2017; Tessler et al., 2019).
For offline policy optimization, several works have recently addressed the overestimation problem in batch RL (Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019b), including the very recently proposed Conservative Q-Learning (CQL) algorithm (Kumar et al., 2020). Our work is done in parallel to CQL, due to which we do not include it as a baseline in our experiments. CQL learns a value function which is guaranteed to lower-bound the true value function. This helps prevent value over-estimation for out-of-distribution (OOD) actions, which is an important issue in offline RL. We
note that our approach is orthogonal to CQL in that CQL introduces a regularizer on the state-action value function Q^π(s, a) based on the Bellman error (the first two terms in equation 2 of CQL), while we introduce a variance regularizer on the stationary state distribution d_π(s). Since the value of a policy can be expressed in two ways, either through Q^π(s, a) or the occupancy measure d_π(s), both CQL and our paper are essentially motivated by the same objective of optimizing a lower bound on J(θ), but through different regularizers. Our work can also be considered similar to AlgaeDICE (Nachum et al., 2019b), since we introduce a variance regularizer based on the distribution corrections, instead of minimizing the f-divergence between stationary distributions as in AlgaeDICE. Both our work and AlgaeDICE consider the dual form of the policy optimization objective in the batch setting, where, similar to the Fenchel duality trick on our variance term, AlgaeDICE instead uses the variational form, followed by the change-of-variables trick, inspired by (Nachum et al., 2019a), to handle their divergence measure.
7 DISCUSSION AND CONCLUSION
We proposed a new framework for offline policy optimization with variance regularization called OVR, to tackle high variance issues due to distribution mismatch in offline policy optimization. Our work provides a practically feasible variance constrained actor-critic algorithm that avoids the double sampling issue in prior variance risk sensitive algorithms (Castro et al., 2012; A. & Ghavamzadeh, 2016). The presented variance regularizer leads to a lower bound on the true offline optimization objective, thus leading to pessimistic value function estimates and avoiding both high variance and overestimation problems in offline RL. Experimentally, we evaluate the significance of OVR on standard benchmark offline datasets with different data logging off-policy distributions, and show that OVR plays a more significant role when there is large variance due to distribution mismatch. While we only provide a variance related risk sensitive approach for offline RL, for future work it would be interesting to explore other risk sensitive approaches (Chow & Ghavamzadeh, 2014; Chow et al., 2017) and examine their significance in batch RL. We hope our proposed variance regularization framework will provide new opportunities for developing practically robust risk sensitive offline algorithms.
A APPENDIX : ADDITIONAL DISCUSSIONS
A.1 EXTENDED RELATED WORK
Other related works: Several other prior works have previously considered the batch RL setting (Lange et al., 2012) for off-policy evaluation, counterfactual risk minimization (Swaminathan & Joachims, 2015a;b), learning value based methods such as DQN (Agarwal et al., 2019), and others (Kumar et al., 2019; Wu et al., 2019b). Recently, batch off-policy optimization has also been introduced to reduce the exploitation error (Fujimoto et al., 2019) and for regularizing with arbitrary behaviour policies (Wu et al., 2019b). However, due to the per-step importance sampling corrections on episodic returns (Precup et al., 2000), off-policy batch RL methods remain challenging. In this work, we instead consider marginalized importance sampling corrections and correct for the stationary state-action distributions (Nachum et al., 2019a; Uehara & Jiang, 2019; Zhang et al., 2020). Additionally, under the framework of Constrained MDPs (Altman & Asingleutility, 1999), risk-sensitive and constrained actor-critic algorithms have been proposed previously (Chow et al., 2017; Chow & Ghavamzadeh, 2014; Achiam et al., 2017). However, these works come with their own demerits, as they mostly require minimizing the risk (i.e., variance) term, where finding the gradient of the variance term often leads to a double sampling issue (Baird, 1995). We avoid this by instead using Fenchel duality (Boyd & Vandenberghe, 2004), inspired by recent works (Nachum & Dai, 2020; Dai et al., 2018), and cast risk constrained actor-critic as a max-min optimization problem. Our work is closely related to (Bisi et al., 2019), which also considers the per-step variance of returns w.r.t. state occupancy measures in the on-policy setting, while we instead consider the batch off-policy optimization setting with per-step rewards w.r.t. stationary distribution corrections.
Constrained optimization has previously been studied in reinforcement learning for batch policy learning (Le et al., 2019) and optimization (Achiam et al., 2017), mostly under the framework of constrained MDPs (Altman & Asingleutility, 1999). In such frameworks, the cumulative return objective is augmented with a set of constraints, for safe exploration (Garcı́a et al., 2015; Perkins & Barto, 2003; Ding et al., 2020), or to reduce risk measures (Chow et al., 2017; A. & Fu, 2018; Castro et al., 2012). Batch learning algorithms (Lange et al., 2012) have been considered previously for counterfactual risk minimization and generalization (Swaminathan & Joachims, 2015a;b) and policy evaluation (Thomas et al., 2015a; Li et al., 2015), although little has been done for constrained offline policy based optimization. This raises the question of how we can learn policies in RL from fixed offline data, similar to supervised or unsupervised learning.
A.2 WHAT MAKES OFFLINE OFF-POLICY OPTIMIZATION DIFFICULT?
Offline RL optimization algorithms often suffer from distribution mismatch issues, since the underlying data distribution in the batch data may be quite different from the induced distribution under target policies. Recent works (Fujimoto et al., 2019; Kumar et al., 2019; Agarwal et al., 2019; Kumar et al., 2020) have tried to address this by avoiding overestimation of Q-values, which leads to the extrapolation error when bootstrapping value function estimates. This causes offline RL agents to generalize poorly for unseen regions of the dataset. Additionally, due to the distribution mismatch, value function estimates can also have large variance, due to which existing online off-policy algorithms (Haarnoja et al., 2018; Lillicrap et al., 2016; Fujimoto et al., 2018) may fail without online interactions with the environment. In this work, we address the latter problem and minimize the variance of value function estimates through variance related risk constraints.
B APPENDIX : PER-STEP VERSUS EPISODIC VARIANCE OF RETURNS
Following from (Castro et al., 2012; A. & Ghavamzadeh, 2016), let us denote the returns with importance sampling corrections in the off-policy learning setting as :
D^π(s, a) = Σ_{t=0}^{T} γ^t r(s_t, a_t) ( Π_{t=1}^{T} π(a_t | s_t) / µ(a_t | s_t) ) | s_0 = s, a_0 = a, τ ∼ µ     (17)
From this definition in equation 17, the action-value function with off-policy trajectory-wise importance correction is Q^π(s, a) = E_{(s,a)∼d_µ(s,a)}[D^π(s, a)], and similarly the value function can be defined as V^π(s) = E_{s∼d_µ(s)}[D^π(s)]. For the trajectory-wise importance corrections, we can define the variance of the returns, similar to (A. & Fu, 2018), as:
V_P(π) = E_{(s,a)∼d_µ(s,a)}[D^π(s, a)²] − E_{(s,a)∼d_µ(s,a)}[D^π(s, a)]²     (18)
Note that, as in (Sobel, 1982), equation 18 also follows a Bellman-like equation, although due to the lack of monotonicity required for dynamic programming (DP), such measures cannot be directly optimized by standard DP algorithms (A. & Fu, 2018).
In contrast, if we consider the variance of returns with stationary distribution corrections (Nachum et al., 2019a; Liu et al., 2018), rather than the product of importance sampling ratios, the variance term involves weighting the rewards with the distribution ratio ω_{π/µ}. Typically, the distribution ratio is approximated using a separate function class (Uehara & Jiang, 2019), and the per-step weighted reward can be written as:
W^π(s, a) = ω_{π/D}(s, a) · r(s, a), with a ∼ π(· | s), (s, a) ∼ d_D(s, a)     (19)
where we denote by D the data distribution in the fixed dataset, collected by either a known or unknown behaviour policy. The variance of returns under occupancy measures is therefore given by:
V_D(π) = E_{(s,a)∼d_D(s,a)}[W^π(s, a)²] − E_{(s,a)∼d_D(s,a)}[W^π(s, a)]²     (20)
where note that the variance expression in equation 20 depends on the square of the per-step rewards with distribution correction ratios. We denote this as the dual form of the variance of returns, in contrast to the primal form of the variance of expected returns (Sobel, 1982).
Note that although the variance term under episodic per-step importance sampling corrections in equation 18 and the variance with stationary distribution corrections in equation 20 both measure the variability of returns, following from (Bisi et al., 2019) and considering per-step corrections, we will show that the variance with distribution corrections indeed upper bounds the variance of importance sampling corrections. This is an important relationship, since constraining the policy improvement step under variance constraints with occupancy measures therefore allows us to obtain a lower bound to the offline optimization objective, similar to (Kumar et al., 2020).
B.1 PROOF OF LEMMA 1 : VARIANCE INEQUALITY
Following from (Bisi et al., 2019), we show that the variance of per-step rewards under occupancy measures, denoted by V_D(π), upper bounds the variance of episodic returns V_P(π):
V_P(π) ≤ V_D(π) / (1 − γ)²     (21)
Proof. The proof of Lemma 1, following (Bisi et al., 2019), is as follows. Denote the returns, as above but for the on-policy case with trajectories under π, as D^π(s, a) = Σ_{t=0}^{∞} γ^t r(s_t, a_t), and denote the return objective as J(π) = E_{s_0∼ρ, a_t∼π(·|s_t), s′∼P}[D^π(s, a)]. The variance of episodic returns can be written as:
V_P(π) = E_{(s,a)∼d_π(s,a)}[ ( D^π(s, a) − J(π)/(1 − γ) )² ]     (22)
       = E_{(s,a)∼d_π(s,a)}[ (D^π(s, a))² ] + J(π)²/(1 − γ)² − (2J(π)/(1 − γ)) E_{(s,a)∼d_π(s,a)}[ D^π(s, a) ]     (23)
       = E_{(s,a)∼d_π(s,a)}[ D^π(s, a)² ] − J(π)²/(1 − γ)²     (24)
Similarly, denoting the returns under occupancy measures as W^π(s, a) = d_π(s, a) r(s, a), and writing the return objective equivalently as J(π) = E_{(s,a)∼d_π(s,a)}[r(s, a)] based on the primal and dual forms of the objective (Uehara & Jiang, 2019; Nachum & Dai, 2020), we can equivalently write the variance as:
V_D(π) = E_{(s,a)∼d_π(s,a)}[ ( r(s, a) − J(π) )² ]     (25)
       = E_{(s,a)∼d_π(s,a)}[ r(s, a)² ] + J(π)² − 2J(π) E_{(s,a)∼d_π(s,a)}[ r(s, a) ]     (26)
       = E_{(s,a)∼d_π(s,a)}[ r(s, a)² ] − J(π)²     (27)
Following from equations 22 and 25, we therefore have the following inequality:
(1 − γ)² E_{s_0∼ρ, a∼π}[ D^π(s, a)² ] ≤ (1 − γ)² E_{s_0∼ρ, a∼π}[ ( Σ_{t=0}^{∞} γ^t ) ( Σ_{t=0}^{∞} γ^t r(s_t, a_t)² ) ]     (28)
                                      = (1 − γ) E_{s_0∼ρ, a∼π}[ Σ_{t=0}^{∞} γ^t r(s_t, a_t)² ]     (29)
                                      = E_{(s,a)∼d_π(s,a)}[ r(s, a)² ]     (30)
where the first line follows from Cauchy-Schwarz inequality. This concludes the proof.
We can further extend lemma 1 to off-policy returns under stationary distribution corrections (i.e., marginalized importance sampling) compared to importance sampling. Recall that we denote the variance under stationary distribution corrections as:
V_D(π) = E_{(s,a)∼d_D(s,a)}[ ( ω_{π/D}(s, a) · r(s, a) − J(π) )² ]     (31)
       = E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a)² · r(s, a)² ] − J(π)²     (32)
where J(π) = E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ]. We denote the episodic returns with importance sampling corrections as D^π = Σ_{t=0}^{T} γ^t r_t ρ_{0:t}. The variance, as denoted earlier, is given by:
V_P(π) = E_{(s,a)∼d_π(s,a)}[ D^π(s, a)² ] − J(π)²/(1 − γ)²     (33)
We therefore have the following inequality:
(1 − γ)² E_{s_0∼ρ, a∼π}[ D^π(s, a)² ] ≤ (1 − γ)² E_{s_0∼ρ, a∼π}[ ( Σ_{t=0}^{T} γ^t ) ( Σ_{t=0}^{T} γ^t r(s_t, a_t)² ) ( Π_{t=0}^{T} π(a_t|s_t)/µ_D(a_t|s_t) )² ]
                                      = (1 − γ) E_{s_0∼ρ, a∼π}[ Σ_{t=0}^{∞} γ^t r(s_t, a_t)² ( Π_{t=0}^{T} π(a_t|s_t)/µ_D(a_t|s_t) )² ]     (34)
                                      = E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a)² · r(s, a)² ]     (35)
which shows that lemma 1 also holds for off-policy returns with stationary distribution corrections.
B.2 DOUBLE SAMPLING FOR COMPUTING GRADIENTS OF VARIANCE
The gradient of the variance term often leads to the double sampling issue, making it impractical to use. This issue has also been pointed out by several other works (A. & Ghavamzadeh, 2016; Castro et al., 2012; Chow et al., 2017), since the variance involves the square of the objective function itself. Recall that we have:
V_D(θ) = E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a)² ] − { E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a) ] }²     (36)
The gradient of the variance term is therefore:
∇_θ V_D(θ) = ∇_θ E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a)² ] − 2 · { E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a) ] } · ∇_θ { E_{(s,a)∼d_D}[ ω_{π/D}(s, a) · r(s, a) ] }     (37)
where equation 37 requires multiple samples to compute the expectations in the second term. To see why this is true, let us denote J(θ) = E_{d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] and write IS(ω, π_θ) = ω_{π/D}(s, a) · r(s, a) as the per-step weighted return in short form. The variance of the returns with the stationary state-action distribution corrections can therefore be written as:
V_D(θ) = E_{d_D(s,a)}[ IS(ω, π_θ)² ] − E_{d_D(s,a)}[ IS(ω, π_θ) ]²     (38)
where the first term is denoted (a) and the second term (b).
We derive the gradient of each of the terms (a) and (b) in equation 38 below. First, we find the gradient of term (a) w.r.t. θ:
∇_θ E_{d_D(s,a)}[ IS(ω, π_θ)² ] = ∇_θ Σ_{s,a} d_D(s, a) IS(ω, π_θ)² = Σ_{s,a} d_D(s, a) ∇_θ IS(ω, π_θ)²
= Σ_{s,a} d_D(s, a) · 2 · IS(ω, π_θ) · IS(ω, π_θ) · ∇_θ log π_θ(a | s)
= 2 · Σ_{s,a} d_D(s, a) IS(ω, π_θ)² ∇_θ log π_θ(a | s)
= 2 · E_{d_D(s,a)}[ IS(ω, π_θ)² · ∇_θ log π_θ(a | s) ]     (39)
Equation 39 interestingly shows that the gradient of this term w.r.t. π_θ has a form similar to the policy gradient, except that the critic estimate in this case is given by the importance corrected returns, since IS(ω, π_θ) = ω_{π/D}(s, a) · r(s, a). We next find the gradient of term (b) from equation 38 w.r.t. θ:
∇_θ E_{d_D(s,a)}[ IS(ω, π_θ) ]² = ∇_θ J(θ)² = 2 · J(θ) · E_{d_D(s,a)}[ ω_{π/D} · { ∇_θ log π_θ(a | s) · Q^π(s, a) } ]     (40)
Overall, the expression for the gradient of the variance term is therefore:
∇_θ V_D(θ) = 2 · E_{d_D(s,a)}[ IS(ω, π_θ)² · ∇_θ log π_θ(a | s) ] − 2 · J(θ) · E_{d_D(s,a)}[ ω_{π/D} · { ∇_θ log π_θ(a | s) · Q^π(s, a) } ]     (41)
The variance gradient in equation 41 is difficult to estimate in practice, since it involves both the gradient of the objective and the objective J(θ) itself. This is known to have the double sampling issue (Baird, 1995) which requires separate independent rollouts. Previously, (Castro et al., 2012) tackled the variance of the gradient term using simultaneous perturbation stochastic approximation (SPSA) (Spall, 1992), where we can keep running estimates of both the return and the variance term, and use a two time scale algorithm for computing the gradient of the variance regularizer with per-step importance sampling corrections.
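To illustrate why equation 41 is problematic, the sketch below estimates the product term with two independent minibatches; reusing a single batch for both factors of J(θ) · ∇_θ J(θ) would give a biased estimate. The callables `weighted_returns` and `grad_log_pi` are hypothetical helpers returning per-sample IS(ω, π_θ) values and score vectors, and the critic factor from equation 41 is replaced by the weighted return purely for illustration.

```python
import numpy as np

def variance_gradient_two_batches(batch_a, batch_b, weighted_returns, grad_log_pi):
    """Double-sampling estimate of the variance gradient (illustrative sketch of eq. 41)."""
    is_a = weighted_returns(batch_a)        # IS(omega, pi_theta) on batch A, shape (N,)
    score_a = grad_log_pi(batch_a)          # grad log pi_theta(a|s) on batch A, shape (N, d)
    is_b = weighted_returns(batch_b)        # independent batch B, used only for J(theta)

    term1 = np.mean((is_a ** 2)[:, None] * score_a, axis=0)         # E[IS^2 * grad log pi]
    term2 = is_b.mean() * np.mean(is_a[:, None] * score_a, axis=0)  # J(theta) * grad J(theta)
    return 2.0 * (term1 - term2)
```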
B.3 ALTERNATIVE DERIVATION : VARIANCE REGULARIZATION VIA FENCHEL DUALITY
In the derivation of our algorithm, we applied the Fenchel duality trick to the second term of the variance expression 25. An alternative way to derive the proposed algorithm would be to see what happens if we apply the Fenchel duality trick to both terms of the variance expression. This might be useful since equation 41 requires evaluating both the gradient terms and the actual objective J(θ), due to the analytical expression of the form ∇θJ(θ) · J(θ), hence suffering from a double sampling issue. In general, the Fenchel duality is given by :
x² = max_y (2xy − y²)     (42)
and applying Fenchel duality to both terms, since they both involve squared terms, we get:
E_{d_D(s,a)}[ IS(ω, π_θ)² ] ≡ E_{d_D(s,a)}[ max_y { 2 · IS(ω, π_θ) · y(s, a) − y(s, a)² } ]
                             = 2 · max_y { E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) ] − E_{d_D(s,a)}[ y(s, a)² ] }     (43)
Similarly, applying Fenchel duality to the second term (b), we have:
E_{d_D(s,a)}[ IS(ω, π_θ) ]² = max_ν { 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) ] − ν² }     (44)
Overall, we therefore have the variance term after applying Fenchel duality as follows, leading to an overall objective of the form max_y max_ν V_D(θ), which we can use as our variance regularizer:
V_D(θ) = 2 · max_y { E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) ] − E_{d_D(s,a)}[ y(s, a)² ] } − max_ν { 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) ] − ν² }     (45)
Using the variance of stationary distribution correction returns as a regularizer, we can find the gradient of the variance term w.r.t θ as follows, where the gradient terms dependent on the dual variables y and ν are 0.
∇_θ V_D(θ) = 2 · ∇_θ E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) ] − 0 − 2 · ∇_θ E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) ] + 0
            = 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) · ∇_θ log π_θ(a | s) ] − 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) · ∇_θ log π_θ(a | s) ]
            = 2 · E_{d_D(s,a)}[ IS(ω, π_θ) · ∇_θ log π_θ(a | s) · { y(s, a) − ν(s, a) } ]     (46)
Note from equation 46 that the two terms in the gradient are almost identical, the only difference coming from the difference between the two dual variables y(s, a) and ν(s, a). Note that our variance term also requires separately maximizing the dual variables, both of which have the following closed-form updates:
∇_ν V_D(θ) = −2 · ∇_ν E_{d_D(s,a)}[ IS(ω, π_θ) · ν(s, a) ] + ∇_ν ν² = 0     (47)
Solving this exactly leads to the closed-form solution ν(s, a) = E_{d_D(s,a)}[ IS(ω, π_θ) ]. Similarly, we can also solve exactly for the dual variable y, such that:
∇_y V_D(θ) = 2 · ∇_y E_{d_D(s,a)}[ IS(ω, π_θ) · y(s, a) ] − 2 · ∇_y E_{d_D(s,a)}[ y(s, a)² ] = 0     (48)
Solving this exactly also leads to the closed-form solution y(s, a) = (1/2) · IS(ω, π_θ) = (1/2) · (d_π(s, a)/d_µ(s, a)) · r(s, a). Note that the exact solutions for the two dual variables are similar to each other: ν(s, a) is the expectation of the returns with stationary distribution corrections, whereas y(s, a) is only the return from a single rollout.
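The closed-form dual solutions above are cheap to compute from a batch. A minimal sketch, with illustrative array names:

```python
import numpy as np

def closed_form_duals(rewards, ratios):
    """Closed-form dual variables from equations 47-48 (sketch)."""
    is_returns = ratios * rewards          # IS(omega, pi_theta) per sample
    nu = is_returns.mean()                 # nu(s, a) = E[IS(omega, pi_theta)]
    y = 0.5 * is_returns                   # y(s, a) = IS(omega, pi_theta) / 2
    return y, nu
```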
C APPENDIX : MONOTONIC PERFORMANCE IMPROVEMENT GUARANTEES
UNDER VARIANCE REGULARIZATION
We provide theoretical analysis and performance improvement bounds for our proposed variance constrained policy optimization approach. Following from (Kakade & Langford, 2002; Schulman et al., 2015; Achiam et al., 2017), we extend existing performance improvement guarantees to be based on the stationary state-action distributions instead of only considering the divergence between the current policy and the old policy. We show that the existing conservative updates in such algorithms (Schulman et al., 2015) can be considered for both state visitation distributions and action distributions, as similarly pointed out by (Achiam et al., 2017). We can then adapt this to the variance constraints instead of the divergence constraints. According to the performance difference lemma (Kakade & Langford, 2002), we have, for all policies π and π′:
J(π′)− J(π) = Es∼dπ′ ,a∼π′ [A π(s, a)] (49)
which implies that maximizing equation 49 leads to an improved policy π′ with policy improvement guarantees over the previous policy π. We can write the advantage function with variance augmented value functions as:
A^π_λ(s, a) = Q^π_λ(s, a) − V^π_λ(s) = E_{s′∼P}[ r(s, a) − λ( r(s, a) − J(π) )² + γ V^π_λ(s′) − V^π_λ(s) ]
However, equation 49 is often difficult to maximize directly, since it additionally requires samples from π′ and d_{π′}, and often a surrogate objective is instead proposed by (Kakade & Langford, 2002). Following (Schulman et al., 2015), we can therefore obtain a bound for the performance difference based on the variance regularized advantage function:
J(π′) ≥ J(π) + E_{s∼d_π(s), a∼π′(a|s)}[ A^π_λ(s, a) ]     (50)
where we have the augmented rewards for the advantage function, and by following Fenchel duality for the variance we can avoid policy dependent reward functions. Otherwise, we have the augmented rewards for value functions as r̃(s, a) = r(s, a) − λ( r(s, a) − J(π) )². This, however, suggests that the performance difference does not hold without proper assumptions (Bisi et al., 2019). We can therefore obtain a monotonic improvement guarantee by considering the KL divergence between
policies:
L_π(π′) = J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ]     (51)
which ignores the changes in the state distribution d_{π′} due to the improved policy π′. (Schulman et al., 2015) optimize the surrogate objective L_π(π′) while ensuring that the new policy π′ stays close to the current policy π, by imposing a KL constraint ( E_{s∼d_π}[ D_KL(π′(· | s) || π(· | s)) ] ≤ δ ). The performance difference bound, based on the constraint between π and π′ as in TRPO (Schulman et al., 2015), is given by:
Lemma 4. The performance difference lemma in (Schulman et al., 2015), with α = D^max_TV = max_s D_TV(π, π′):
J(π′) ≥ L_π(π′) − (4 ε γ / (1 − γ)²) ( D^max_TV(π′ || π) )²     (52)
where ε = max_{s,a} |A^π(s, a)|.
The performance improvement bound in (Schulman et al., 2015) can further be written in terms of the KL divergence by following the relationship between total variation (TV) and KL, which follows from Pinsker's inequality, D_TV(p || q)² ≤ D_KL(p || q), to get the following improvement bound:
J(π′) ≥ L_π(π′) − (4 ε γ / (1 − γ)²) D_KL(π′ || π)     (53)
We thus have a performance difference bound in terms of the state distribution shift between d_{π′} and d_π. This justifies L_π(π′) as a sensible lower bound to J(π′) as long as the total variation distance between d_{π′} and d_π is small, which ensures that the policies π′ and π stay close to each other. Finally, following from (Achiam et al., 2017), we obtain the following lower bound, which satisfies policy improvement guarantees:
J(π′) ≥ L_π(π′) − (2 γ ε^π / (1 − γ)) E_{s∼d_π}[ D_TV(π′(· | s) || π(· | s)) ]     (54)
Equations 53 and 54 assume that there is no state distribution shift between π′ and π. However, if we explicitly account for the state distribution changes d_{π′} and d_π due to π′ and π respectively, then we have the following performance improvement bound:
Lemma 5. For all policies π′ and π, we have the performance improvement bound based on the total variation of the state-action distributions d_{π′} and d_π:
J(π′) ≥ L_π(π′) − ε^π D_TV(d_{π′} || d_π)     (55)
where ε^π = max_s |E_{a∼π′(·|s)}[A^π(s, a)]|,
which can be further written in terms of the surrogate objective L_π(π′) as:
J(π′) ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − ε^π D_TV(d_{π′} || d_π) = L_π(π′) − ε^π D_TV(d_{π′} || d_π)     (56)
C.1 PROOF OF THEOREM 1 : POLICY IMPROVEMENT BOUND WITH VARIANCE REGULARIZATION
Proof. We provide derivation for theorem 1. Recall that for all policies π′ and π, and corresponding state visitation distributions dπ′ and dπ , we can obtain the performance improvement bound in terms of the variance of state-action distribution corrections
J(π′) − J(π) ≥ E_{s∼d_π, a∼π′}[ A^π(s, a) ] − Var_{s∼d_π, a∼π}[ f(s, a) ]     (57)
where f(s, a) is the dual function class for the divergence between d_{π′}(s, a) and d_π(s, a). Following from Pinsker's inequality, the performance difference lemma written in terms of the state visitation distributions can be given by:
J(π′) ≥ L_π(π′) − ε^π D_TV(d_{π′} || d_π) ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − ε^π D_TV(d_{π′} || d_π)
       ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − ε^π √( D_KL(d_{π′} || d_π) )     (58)
Following from (Schulman et al., 2015), we can alternatively write this as follows, where we further apply the variational form of TV:
J(π′) ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · E_{s∼d_π}[ D_TV(d_{π′} || d_π)² ]
      = J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · E_{s∼d_π}[ ( max_f { E_{s∼d_{π′}, a∼π}[ f(s, a) ] − E_{s∼d_π, a∼π}[ f(s, a) ] } )² ]
      ≥ J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · max_f E_{s∼d_π}[ ( E_{s∼d_{π′}, a∼π}[ f(s, a) ] − E_{s∼d_π, a∼π}[ f(s, a) ] )² ]
      = J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · max_f { E_{s∼d_π, a∼π}[ ( f(s, a) − E_{s∼d_π, a∼π}[ f(s, a) ] )² ] }
      = J(π) + E_{s∼d_π, a∼π′}[ A^π(s, a) ] − C · max_f Var_{s∼d_π, a∼π}[ f(s, a) ]     (59)
Therefore the policy improvement bound depends on maximizing the variational representation f(s, a) of the f-divergence to guarantee improvement from J(π) to J(π′). This leads to the stated result in theorem 1.
D APPENDIX : LOWER BOUND OBJECTIVE WITH VARIANCE REGULARIZATION
D.1 PROOF OF LEMMA 3
Recall lemma 3; its proof follows from (Metelli et al., 2018). We extend the result to marginalized importance weighting and include it here for completeness. Note that, compared to importance weighting, which leads to an unbiased estimator as in (Metelli et al., 2018), correcting for the state-action occupancy measures leads to a biased estimator due to the approximation ω̂_{π/D}. However, for our analysis we only require a lower bound objective, and therefore do not provide a bias-variance analysis as in off-policy evaluation.
Var_{(s,a)∼d_D(s,a)}[ ω̂_{π/D} ] ≤ (1/N) ||r||²_∞ F_2(d_π || d_D)     (60)
Proof. Assuming that state-action samples are drawn i.i.d. from the dataset D, we can write:
Var_{(s,a)∼d_D(s,a)}[ ω̂_{π/D}(s, a) ] ≤ (1/N) Var_{(s_1,a_1)∼d_D(s,a)}[ ( d_π(s_1, a_1)/d_D(s_1, a_1) ) · r(s_1, a_1) ]
≤ (1/N) E_{(s_1,a_1)∼d_D(s,a)}[ ( ( d_π(s_1, a_1)/d_D(s_1, a_1) ) · r(s_1, a_1) )² ]
≤ (1/N) ||r||²_∞ E_{(s_1,a_1)∼d_D(s,a)}[ ( d_π(s_1, a_1)/d_D(s_1, a_1) )² ] = (1/N) ||r||²_∞ F_2(d_π || d_D)     (61)
D.2 PROOF OF THEOREM 2:
First let us recall the stated theorem 2. By constraining the off-policy optimization problem with variance constraints, we have the following lower bound to the optimization objective with stationary state-action distribution corrections
J(π) ≥ E_{(s,a)∼d_D(s,a)}[ ( d_π(s, a)/d_D(s, a) ) r(s, a) ] − √( ((1 − δ)/δ) Var_{(s,a)∼d_D(s,a)}[ ( d_π(s, a)/d_D(s, a) ) r(s, a) ] )     (62)
Proof. The proof for the lower bound objective can be obtained as follows. We first define a relationship between the variance and the α-divergence with α = 2, as also similarly noted in (Metelli et al., 2018). Given we have batch samples D, and denoting the state-action distribution correction with ωπ/D(s, a), we can write from lemma 3 :
Var_{(s,a)∼d_D(s,a)}[ ω̂_{π/D} ] ≤ (1/N) ||r||²_∞ F_2(d_π || d_D)     (63)
where the per-step estimator with state-action distribution corrections is given by ωπ/D(s, a) · r(s, a). Here, the reward function r(s, a) is a bounded function, and for any N > 0 the variance of the
per-step reward estimator with distribution corrections can be upper bounded by the Renyi-divergence (α = 2). Finally, following from (Metelli et al., 2018) and using Cantelli’s inequality, we have with probability at least 1− δ where 0 < δ < 1 :
Pr( ω_{π/D} − J(π) ≥ λ ) ≤ 1 / ( 1 + λ² / Var_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] )     (64)
and by setting δ = 1 / ( 1 + λ² / Var_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] ), we get that with probability at least 1 − δ:
J(π) = E_{(s,a)∼d_π(s,a)}[ r(s, a) ] ≥ E_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] − √( ((1 − δ)/δ) Var_{(s,a)∼d_D(s,a)}[ ω_{π/D}(s, a) · r(s, a) ] )     (65)
where we can further replace the variance term using the relation to the Renyi divergence with α = 2 to conclude the proof of the above theorem. We can further write the lower bound in terms of the α-Renyi divergence, following the relation between variance and Renyi divergence for α = 2, as:
J(π) = E_{(s,a)∼d_π(s,a)}[ r(s, a) ] ≥ E_{(s,a)∼d_D(s,a)}[ ( d_π(s, a)/d_D(s, a) ) · r(s, a) ] − ||r||_∞ √( (1 − δ) d_2(d_π || d_D) / (δN) )
This hints at the similarity between our proposed variance regularized objective and that of other related works, including AlgaeDICE (Nachum et al., 2019b), which uses an f-divergence D_f(d_π || d_D) between stationary distributions.
E APPENDIX : ADDITIONAL EXPERIMENTAL RESULTS
E.1 EXPERIMENTAL ABLATION STUDIES
In this section, we present additional results using state-action experience replay weightings on top of existing offline algorithms, and analyse the significance of our variance regularizer on likelihood corrected offline algorithms. Denoting by ω(s, a) the importance weighting of state-action occupancy measures based on samples in the experience replay buffer, we can modify existing offline algorithms to account for state-action distribution ratios.
The ablation experimental results using the Hopper control benchmark are summarized in figure 2. The same base BCQ algorithm (Fujimoto et al., 2019) is used with a modified objective; the results for applying off-policy importance weights are denoted as "BCQ+I.W.". We employ the same technique to obtain ω(s, a) both for this baseline and for adding variance regularization as described. The results suggest that adding the proposed per-step variance regularization scheme significantly outperforms merely importance weighting the expected rewards for off-policy policy learning.
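The difference between the two ablation variants can be stated compactly. Below is a minimal sketch contrasting the importance-weighted objective ("BCQ+I.W.") with the variance-regularized variant; the array names and the exact weighting of the penalty are illustrative rather than the paper's exact implementation:

```python
import numpy as np

def weighted_vs_regularized(rewards, ratios, lam=0.1):
    """Importance-weighted return vs. its variance-regularized counterpart (sketch)."""
    weighted = ratios * rewards                    # omega(s, a) * r(s, a) per sample
    baseline = weighted.mean()                     # "BCQ + I.W.": weighted return only
    regularized = baseline - lam * weighted.var()  # OVR: additionally penalize per-step variance
    return baseline, regularized
```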
E.2 EXPERIMENTAL RESULTS IN CORRUPTED NOISE SETTINGS
We additionally consider a setting where the batch data is collected from a noisy environment, i.e., a setting with corrupted rewards, r → r + ε, where ε ∼ N(0, 1). Experimental results are presented in figures 1 and 3. From our results, we note that using OVR on top of BCQ (Fujimoto et al., 2019), we can achieve significantly better performance with variance minimization, especially when the agent is given sub-optimal demonstrations. We denote this as medium (when the dataset was collected by a half trained SAC policy) or a mixed behaviour logging setting (when the data logging policy is a mixture of random and SAC policies). This is also useful for practical scalability, since data collection from an expert policy is often expensive. We add noise to the dataset to examine the significance of OVR under a noisy, corrupted dataset setting.
E.3 EXPERIMENTAL RESULTS ON SAFETY BENCHMARK TASKS
Safety Benchmarks for Variance as Risk: We additionally consider safety benchmarks for control tasks, to analyse the significance of the variance regularizer as a risk constraint in offline policy optimization algorithms. Our results are summarized in table 3.
E.4 DISCUSSIONS ON OFFLINE OFF-POLICY OPTIMIZATION WITH STATE-ACTION DISTRIBUTION RATIOS
In this section, we include several alternatives by which we can compute the stationary state-action distribution ratio, borrowing from recent works (Uehara & Jiang, 2019; Nachum et al., 2019a).
Off-Policy Optimization with Minimax Weight Learning (MWL) : We discuss other possible ways of optimizing the batch off-policy optimization objective while also estimating the state-action density ratio. Following from (Uehara & Jiang, 2019) we further modify the off-policy optimization part of the objective J(θ) in L(θ, λ) as a min-max objective, consisting of weight learning ωπ/D
Table 3: Results on the Safety-Gym environments (Ray et al.). We report the mean and S.D. of episodic returns and costs over five random seeds and 1 million timesteps. The goal of the agent is to maximize the episodic return, while minimizing the cost incurred.

              PointGoal1                  PointGoal2
              Reward        Cost          Reward        Cost
BCQ           43.1 ± 0.3    137.0 ± 3.6   32.7 ± 0.7    468.2 ± 9.1
BCQ+OVR       44.2 ± 0.3    127.1 ± 4.0   33.2 ± 0.7    453.9 ± 7.3

              PointButton1                PointButton2
              Reward        Cost          Reward        Cost
BCQ           30.9 ± 2.2    330.8 ± 8.3   18.1 ± 1.1    321.6 ± 4.1
BCQ+OVR       30.7 ± 2.3    321.5 ± 6.8   19.6 ± 1.0    305.7 ± 6.1
and optimizing the resulting objective J(θ, ω). We further propose an overall policy optimization objective, where a single objective can be used for estimating the distribution ratio, evaluating the critic and optimizing the resulting objective. We can write the off-policy optimization objective with its equivalent starting state formulation, such that we have :
E_{d_D(s,a)}[ ω_{π_θ/D}(s, a) · r(s, a) ] = (1 − γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[ Q^π(s_0, a_0) ]     (66)
Furthermore, following the Bellman equation, we expect to have E[r(s, a)] = E[Q^π(s, a) − γ Q^π(s′, a′)], such that:
E_{d_D(s,a)}[ ω_{π_θ/D}(s, a) · { Q^π(s, a) − γ Q^π(s′, a′) } ] = (1 − γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[ Q^π(s_0, a_0) ]     (67)
We can therefore write the overall objective as:
J(ω, π_θ, Q) = E_{d_D(s,a)}[ ω_{π_θ/D}(s, a) · { Q^π(s, a) − γ Q^π(s′, a′) } ] − (1 − γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[ Q^π(s_0, a_0) ]     (68)
This is similar to the MWL objective in (Uehara & Jiang, 2019), except that we instead consider the bias reduced estimator, such that accurate estimates of Q or ω will lead to reduced bias of the value function estimation. Furthermore, note that in the first part of the objective J(π_θ, ω, Q)², we can further use entropy regularization for smoothing the objective: instead of Q^π(s′, a′) in the target, we can replace it with a log-sum-exp and consider the conjugate of the entropy regularization term, similar to SBEED (Dai et al., 2018). This would therefore give the first part of the objective as an overall min-max optimization problem:
J(ω, π_θ) = E_{d_µ(s,a)}[ ω_{π_θ/D}(s, a) · { r(s, a) + γ Q^π(s′, a′) + τ log π(a | s) − Q^π(s, a) } ] + (1 − γ) E_{s_0∼β_0(s), a_0∼π(·|s_0)}[ Q^π(s_0, a_0) ]     (69)
such that from our overall constrained optimization objective for maximizing θ, we have turned it into a min-max objective for estimating the density ratios, estimating the value function, and maximizing the policies:
ω*_{π/D}, Q*, π* = argmin_{ω, Q} argmax_π J(π_θ, ω, Q)²     (70)
where the fixed point solution for the density ratio can be found by minimizing the objective:
ω*_{π/D} = argmin_ω L(ω_{π/D}, Q)² = E_{d_µ(s,a)}[ { γ ω(s, a) · Q^π(s′, a′) − ω(s, a) Q^π(s, a) } + (1 − γ) E_{β(s,a)}[ Q^π(s_0, a_0) ] ]     (71)
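A minimal PyTorch-style sketch of the bias-reduced objective in equation 68 is given below; `omega` and `Q` stand for arbitrary callables (e.g., small networks), and the batch tensors are assumed to be sampled from the dataset and the initial-state distribution respectively. This is a sketch under those assumptions, not the paper's exact estimator.

```python
import torch

def mwl_objective(omega, Q, s, a, s_next, a_next, s0, a0, gamma=0.99):
    """Sketch of the combined objective in equation 68 (MWL-style, bias-reduced form)."""
    td = Q(s, a) - gamma * Q(s_next, a_next)           # Q^pi(s, a) - gamma * Q^pi(s', a')
    weighted = (omega(s, a) * td).mean()               # expectation under d_D with ratio weights
    initial = (1.0 - gamma) * Q(s0, a0).mean()         # starting-state formulation term
    return weighted - initial
```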
DualDICE: In contrast to MWL (Uehara & Jiang, 2019), DualDICE (Nachum et al., 2019a) introduces dual variables through the change-of-variables trick, and minimizes the Bellman residual of the dual variables ν(s, a) to estimate the ratio, such that:
ν*(s, a) − B^π ν*(s, a) = ω_{π/D}(s, a)     (72)
the solution to which can be achieved by optimizing the following objective:
min_ν L(ν) = (1/2) E_{d_D}[ ( ν − B^π ν )(s, a)² ] − (1 − γ) E_{s_0, a_0∼β(s,a)}[ ν(s_0, a_0) ]     (73)
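The objective in equation 73 translates almost directly into code. A minimal sketch in which `nu` is any differentiable callable over state-action batches and the Bellman operator is approximated with single-sample next state-actions (an assumption, since the exact estimator is not specified here):

```python
import torch

def dualdice_loss(nu, s, a, s_next, a_next, s0, a0, gamma=0.99):
    """Sketch of the DualDICE objective in equation 73."""
    bellman_residual = nu(s, a) - gamma * nu(s_next, a_next)   # (nu - B^pi nu)(s, a), sampled
    return 0.5 * (bellman_residual ** 2).mean() - (1.0 - gamma) * nu(s0, a0).mean()
```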
Minimizing Divergence for Density Ratio Estimation: The distribution ratio can be estimated using an objective similar to GANs (Goodfellow et al., 2014; Ho & Ermon, 2016), as also proposed in (Kostrikov et al., 2019):
max_h G(h) = E_{(s,a)∼d_D}[ log h(s, a) ] + E_{(s,a)∼d_π}[ log(1 − h(s, a)) ]     (74)
where h is the discriminator class, discriminating between samples from dD and dπ. The optimal discriminator satisfies :
log h*(s, a) − log(1 − h*(s, a)) = log ( d_D(s, a) / d_π(s, a) )     (75)
The optimal solution of the discriminator is therefore equivalent to minimizing the divergence between dπ and dD, since the KL divergence is given by :
−D_KL(d_π || d_D) = E_{(s,a)∼d_π}[ log ( d_D(s, a) / d_π(s, a) ) ]     (76)
Additionally, using the Donsker-Varadhan representation, we can further write the KL divergence term as :
−D_KL(d_π || d_D) = min_x log E_{(s,a)∼d_D}[ exp x(s, a) ] − E_{(s,a)∼d_π}[ x(s, a) ]     (77)
such that now, instead of the discriminator class h, we learn the function class x, the optimal solution to which is equivalent to the distribution ratio plus a constant
x*(s, a) = log ( d_π(s, a) / d_D(s, a) )     (78)
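The Donsker-Varadhan objective in equation 77 can be minimized over a parametric x to recover this log ratio. A minimal PyTorch-style sketch; `x` is any callable mapping state-action batches to scalars, and the two batches are assumed to be drawn from d_π and d_D respectively:

```python
import torch

def dv_kl_objective(x, batch_pi, batch_D):
    """Donsker-Varadhan objective of equation 77; minimizing it fits x(s,a) ~ log d_pi/d_D."""
    n = batch_D.shape[0]
    log_mean_exp = torch.logsumexp(x(batch_D), dim=0) - torch.log(torch.tensor(float(n)))
    return log_mean_exp - x(batch_pi).mean()
```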
However, note that both the GANs like objective | 1. What are the issues with the derivations in the paper, specifically regarding Equations 5 and 6?
2. Why is the definition of variance in the paper unclear, and how does it differ from the traditional definition of variance in mean-variance optimization?
3. How does the proposed method attempt to achieve a trade-off between E[Q] and V_D(\pi), and why do the empirical results not convincingly demonstrate this trade-off?
4. What key deductions in the paper lack proof or may not hold in off-policy settings, such as Equation 8?
5. What are some missing references in the paper regarding the use of Fenchel duality to solve the double-sampling issue in mean-variance optimization using variance as regularization?
6. How does Algorithm 1 in the paper compare to the offline MVPI in Zhang et al. (2020)? | Review | Review
Summary: This work is based on a recent work of [Bisi et al., 2019], where a per-step reward formulation is presented with some outstanding unresolved problems, e.g., the double-sampling issue and the policy-dependent reward issue. This paper proposes to use the Fenchel duality to solve the double sampling problem and extend it to the behavior-agnostic off-policy setting by leveraging the density ratio estimation technique.
Major concerns:
1 The derivations in several major equations are WRONG. The objective in Eq 5 is NOT the same as the objective in Eq 6. \E_{s, a ~ d_D}[\omega(s, a)r(s, a)] in Eq 6 is NOT equal to \E_{s~D}[Q^\pi(s, \pi(s))] in Eq 5.
The motivation and empirical demonstration of the variance regularization are unclear. First, the definition of the variance doesn’t make sense to me. The variance in mean-variance optimization is a long-established term, which refers to the var of the return (either one-step or cumulative). So it is not clear why V_P makes sense without further motivation or reference. Moreover, the variance term defined in Eq. 2 is very weird. It is neither the variance of the return nor the so-termed “variance of the marginalized IS” (since it involves the reward r). By definition, the variance V_D is the variance of d_\pi(s, a)/d_\mu(s, a)r(s, a). This expression involves both \pi, \mu, and r, and its randomness comes from the randomness of (s, a). It is unclear why minimizing this variance is useful.
Second, the empirical results are not convincing. As pointed out by Eq. 3, the overall objective is to achieve a trade-off E[Q] and V_D(\pi) through \lambda. The empirical results, however, do not show this trade-off. Then it becomes unclear where the empirical improvement comes from. I would like to see how changing \lambda influences V_D(\pi).
3 Several key deductions, which hold in on-policy cases, may NOT be true in this paper's off-policy setting. For example, Eq. 8 lacks a proof. In Bisi's setting, this holds only for on-policy cases; I don't think it still holds for off-policy settings. The authors need to prove it. The MDP setting is also unclear: the authors consider an infinite horizon MDP, so what does T mean? The proof of Lemma 1 also seems problematic. First, without a clear definition of T, there is no way to check the proof of Lemma 1. Is T a random variable? Second, Eq 33 is wrong: Eq 33 is the same as Eq 24, but the definition of D^\pi is different, so how can they be the same? Moreover, it is hard to follow the inequality in Eq 34; it looks wrong to me. The reviewer strongly suggests that the authors write it out step by step to make it clear. It would also be great to show how the products of IS in Eq 34 reduce to the density ratio in Eq 35.
Theorem 2, which is the paper's major theoretical contribution, is obvious and trivial. By definition, the first term on the RHS of Eq 16 is exactly J(\pi). So what theorem 2 says is that J(\pi) \geq J(\pi) - \sqrt{c * variance of sth}. This is fairly obvious and does not bring any insight.
The entire Appendix B.2 is wrong, where the 3rd equality (aka, line 2 of Eq. 39) does NOT necessarily hold. The term d_\pi(\theta)’s gradient is not computed at all, and therefore, any results afterward are not correct.
There are some missing references as well. Using Fenchel Duality to solve the double-sampling issue in mean-variance optimization using variance as regularization has been solved by previous literature, e.g., Xie et al., (2018) and Zhang et al., (2020). The author should acknowledge this. Also, I encourage the authors to compare with them. Especially I think the authors may want to compare with Zhang et al., (2020). Algorithm 1 is very similar to the offline MVPI in Zhang et al., (2020). There is only a slight difference in computing the augmented reward.
Xie, T., Liu, B., Xu, Y., Ghavamzadeh, M., Chow, Y., Lyu, D., & Yoon, D. (2018). A block coordinate ascent algorithm for mean-variance optimization. In Advances in Neural Information Processing Systems (pp. 1065-1075). Zhang, S., Liu, B., & Whiteson, S. (2020). Mean-Variance Policy Iteration for Risk-Averse Reinforcement Learning. arXiv preprint arXiv:2004.10888. |
ICLR | Title
Self-Supervised Video Representation Learning with Constrained Spatiotemporal Jigsaw
Abstract
This paper proposes a novel pretext task for self-supervised video representation learning by exploiting spatiotemporal continuity in videos. It is motivated by the fact that videos are spatiotemporal by nature and a representation learned to detect spatiotemporal continuity/discontinuity is thus beneficial for downstream video content analysis tasks. A natural choice of such a pretext task is to construct spatiotemporal (3D) jigsaw puzzles and learn to solve them. However, this task turns out to be intractable. We thus propose Constrained Spatiotemporal Jigsaw (CSJ) whereby the 3D jigsaws are formed in a constrained manner to ensure that large continuous spatiotemporal cuboids exist in a shuffled clip to provide sufficient cues for the model to reason about the continuity. With the constrained jigsaw puzzles, instead of solving them directly, which could still be extremely hard, we carefully design four surrogate tasks that are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Extensive experiments show that our CSJ achieves state-of-the-art on two downstream tasks across various benchmarks.
1 INTRODUCTION
Self-supervised learning (SSL) has achieved tremendous successes recently for static images (He et al., 2020; Chen et al., 2020) and shown to be able to outperform supervised learning on a wide range of downstream image understanding tasks. However, such successes have not yet been reproduced for videos. Since different SSL models differ mostly on the pretext tasks employed on the unlabeled training data, designing pretext tasks more suitable for videos is the current focus for self-supervised video representation learning (Han et al., 2020; Wang et al., 2020).
Videos are spatiotemporal data and spatiotemporal analysis is the key to many video content understanding tasks. A good video representation learned from the self-supervised pretext task should therefore capture discriminative information jointly along both spatial and temporal dimensions. It is thus somewhat counter-intuitive to note that most existing SSL pretext tasks for videos do not explicitly require joint spatiotemporal video understanding. For example, some spatial pretext tasks have been borrowed from images without any modification (Jing et al., 2018), ignoring the temporal dimension. On the other hand, many recent video-specific pretext tasks typically involve speed or temporal order prediction (Lee et al., 2017; Wei et al., 2018; Benaim et al., 2020; Wang et al., 2020), i.e., operating predominately along the temporal axis.
A natural choice for a spatiotemporal pretext task is to solve 3D jigsaw puzzles, whose 2D counterpart has been successfully used for images (Noroozi & Favaro, 2016). Indeed, solving 3D puzzles requires the learned model to understand spatiotemporal continuity, a key step towards video content understanding. However, directly solving a 3D puzzle turns out to be intractable: a puzzle of 3×3×3 pieces (the same size as a Rubik’s cube) can have 27! possible permutations. Video volume even in a short clip is much larger than that. Nevertheless, the latest neural sorting models (Paumard et al., 2020; Du et al., 2020) can only handle permutations a few orders of magnitude less, so offer no solution. This is hardly surprising because such a task is daunting even for humans: Most people would struggle with a standard Rubik’s cube, let alone a much larger one.
In this paper, we propose a novel Constrained Spatiotemporal Jigsaw (CSJ) pretext task for selfsupervised video representation learning. The key idea is to form 3D jigsaw puzzles in a constrained manner so that it becomes solvable. This is achieved by factorizing the permutations (shuffling)
into the three spatiotemporal dimensions and then applying them sequentially. This ensures that for a given video clip, large continuous spatiotemporal cuboids exist after the constrained shuffling to provide sufficient cues for the model to reason about spatiotemporal continuity (see Fig. 1(b)(c)). Such large continuous cuboids are also vital for human understanding of video as revealed in neuroscience and visual studies (Stringer et al., 2006; Chen et al., 2019). Even with the constrained puzzles, solving them directly could still be extremely hard. Consequently, instead of directly solving the puzzles (i.e., recovering the permutation matrix so that each piece can be put back), four surrogate tasks are carefully designed. They are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Concretely, given a video clip shuffled with our constrained permutations, we make sure that the top-2 largest continuous cuboids (LCCs) dominate the clip volume. The level of continuity in the shuffle clip as a whole is thus determined mainly by the volumes of these LCCs, and whether they are at the right order (see Fig. 1(d)(e)) both spatially and temporally. Our surrogate tasks are thus designed to locate these LCCs and predict their order so that the model learned with these tasks can be sensitive to spatiotemporal continuity both locally and globally.
Our main contributions are three-fold: (1) We introduce a new pretext task for self-supervised video representation learning called Constrained Spatiotemporal Jigsaw (CSJ). To our best knowledge, this is the first work on self-supervised video representation learning that leverages spatiotemporal jigsaw understanding. (2) We propose a novel constrained shuffling method to construct easy 3D jigsaws containing large LCCs. Four surrogate tasks are then formulated in place of the original jigsaw solving tasks. They are much more solvable yet remain effective in learning spatiotemporal discriminative representations. (3) Extensive experiments show that our approach achieves state-of-the-art on two downstream tasks across various benchmarks.
2 RELATED WORK
Self-supervised Learning with Pretext Tasks Self-supervised learning (SSL) typically employs a pretext task to generate pseudo-labels for unlabeled data via some form of data transformation. According to the transformations used by the pretext task, existing SSL methods for video representation learning can be divided into three categories: (1) Spatial-Only Transformations: Derived from the original image domain (Gidaris et al., 2018), Jing et al. (2018) leveraged the spatial-only transformations for self-supervised video representation learning. (2) Temporal-Only Transformations: Misra et al. (2016); Fernando et al. (2017); Lee et al. (2017); Wei et al. (2018) obtained shuffled video frames with the temporal-only transformations and then distinguished whether the shuffled frames are in chronological order. Xu et al. (2019) chose to shuffle video clips instead of frames. Benaim et al. (2020); Yao et al. (2020); Jenni et al. (2020) exploited the speed transformation by determining whether one video clip is accelerated. (3) Spatiotemporal Transformations: There are only a few recent approaches (Ahsan et al., 2019; Kim et al., 2019) that leveraged both spatial and temporal transformations by permuting 3D spatiotemporal cuboids. However, due to the aforementioned intractability of solving the spatiotemporal jigsaw puzzles, they only leveraged either temporal or spatial permutations as training signals, i.e., they exploited the two domains independently. Therefore, no true spatiotemporal permutations have been considered in Ahsan et al. (2019); Kim et al. (2019). In contrast, given that both spatial appearances and temporal relations are important cues for video representation learning, the focus of this work is on investigating how to exploit the spatial and temporal continuity jointly for self-supervised video representation learning. To that end, our Constrained Spatiotemporal Jigsaw (CSJ) presents the first spatiotemporal continuity based pretext task for video SSL, thanks to a novel constrained 3D jigsaw and four surrogate tasks to reason about the continuity in the 3D jigsaw puzzles without solving them directly.
Self-supervised Learning with Contrastive Learning Contrastive learning is another selfsupervised learning approach that has become increasingly popular in the image domain (Misra & Maaten, 2020; He et al., 2020; Chen et al., 2020). Recently, it has been incorporated into video SSL as well. Contrastive learning and transformation based pretext tasks are orthogonal to each other and often combined in that different transformed versions of a data sample form the positive set used in contrastive learning. In El-Nouby et al. (2019); Knights et al. (2020); Qian et al. (2020); Wang et al. (2020); Yang et al. (2020), the positive/negative samples were generated based on temporal transformations only. In contrast, some recent works (Han et al., 2019; 2020; Zhuang et al., 2020) leveraged features from the future frame embeddings or with the memory bank (Wu et al., 2018). They modeled spatiotemporal representations using only contrastive learning without transformations. Contrastive learning is also exploited in one of our surrogate pretext tasks. Different from existing works, we explore the spatiotemporal transformations in the form of CSJ and employ contrastive learning to distinguish different levels of spatiotemporal continuity in shuffled jigsaws. This enables us to learn more discriminative spatiotemporal representations.
3 CONSTRAINED SPATIOTEMPORAL JIGSAW
3.1 PROBLEM DEFINITION
The main goal of self-supervised video representation learning is to learn a video feature representation function f(·) without using any human annotations. A general approach to achieving this goal is to generate a supervisory signal y from an unlabeled video clip x and construct a pretext task P to predict y from f(x). The process of solving the pretext task P encourages f(·) to learn discriminative spatiotemporal representations.
The pretext task P is constructed typically by applying to a video clip a transformation function t(·;θ) parameterized by θ and then automatically deriving y from θ, e.g., y can be the type of the transformation. Based on this premise, P is defined as the prediction of y using the feature map of the transformed video clip f(x̃), i.e., P : f(x̃) → y, where x̃ = t(x;θ). For example, in Lee et al. (2017), t(·;θ) denotes a temporal transformation that permutes the four frames of video clip x in a temporal order θ, x̃ = t(x;θ) is the shuffled clip, and the pseudo-label y is defined as the permutation order θ (e.g., 1324, 4312, etc.). The pretext task P is then a classification problem of 24 categories because there are 4! = 24 possible orders.
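As a concrete instance of this formulation, the temporal-order example from (Lee et al., 2017) described above can be sketched in a few lines; `clip_frames` is assumed to be an array of four frames, and the permutation index serves as the pseudo-label y:

```python
import itertools
import random

def temporal_order_pretext(clip_frames):
    """Toy version of the order-prediction pretext task: shuffle 4 frames, predict the order."""
    perms = list(itertools.permutations(range(4)))   # 4! = 24 possible orders
    y = random.randrange(len(perms))                 # pseudo-label derived from theta
    shuffled = clip_frames[list(perms[y])]           # x_tilde = t(x; theta)
    return shuffled, y
```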
3.2 CONSTRAINED PERMUTATIONS
Solving spatiotemporal video jigsaw puzzles seems to be an ideal pretext task for learning discriminative representation as it requires an understanding of spatiotemporal continuity. After shuffling the pixels in a video clip using a 3D permutation matrix, the pretext task is to recover the permutation matrix. However, as explained earlier, this task is intractable given even moderate video clip sizes. Our solution is to introduce constraints on the permutations. As a result, a new pretext task PCSJ based on Constrained Spatiotemporal Jigsaw (see Fig. 2(a)) is formulated, which is much easier to solve than a random/unconstrained jigsaw.
Specifically, our goal is to introduce constraints to the permutations so that the resultant shuffled video clip is guaranteed to have large continuous cuboids (see Fig. 2(a)). Similar to humans (Stringer et al., 2006), having large continuous cuboids is key for a model to understand a 3D jigsaw and therefore to have any chance to solve it. Formally, the size of a shuffled video clip x̃ is denoted as {T, H, W}, measuring its extent along the temporal, height, and width dimensions, respectively. A cuboid is defined as a crop of x̃: c = x̃_{t1:t2, h1:h2, w1:w2}, where t1, t2 ∈ {1, 2, . . . , T}, h1, h2 ∈ {1, 2, . . . , H}, w1, w2 ∈ {1, 2, . . . , W}. If all the jigsaw pieces (the smallest video clip unit, e.g., a pixel or a 3D pixel block) in c keep the same relative order as they were in x (before being shuffled), we call the cuboid c a continuous cuboid c^cont. The cuboid's volume equals (t2 − t1) × (h2 − h1) × (w2 − w1), and the largest continuous cuboid (LCC) c^cont_max is the c^cont with the largest volume. We introduce two permutation strategies to ensure that the volumes of LCCs are large in relation to the whole video clip volume after our shuffling transformation t(·; θ_CSJ). First, instead of shuffling x in the three spatiotemporal dimensions simultaneously, t(·; θ_CSJ) factorizes the permutations into the three spatiotemporal dimensions and then applies them sequentially (e.g., in the order T, W, H) and only once to generate shuffled clips. Note that the volume of the generated x̃ stays the same with different permutation orders (e.g., TWH and HTW). Second, we shuffle a group of jigsaw pieces together instead of each piece individually along each dimension. Taking spatial shuffling as an example, if there are 8 pieces per frame (along each of the two spatial dimensions), θ_CSJ could be represented as the permutation from {12345678} to {84567123}. The longest and the second-longest index ranges are: [2, 5] for coordinates {4567}, and [6, 8] for coordinates {123}. With these two permutation strategies, not only do we have large LCCs, but they are also guaranteed to have clearly separable boundaries (see Fig. 2(b)) with surrounding pieces due to the factorized and grouped permutation design. This means that they are easily detectable.
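A sketch of the factorized, grouped shuffling described above is given below; the grouping scheme, piece counts, and helper names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def grouped_permutation(n_pieces, n_groups, rng):
    """Split piece indices 0..n_pieces-1 into contiguous groups and shuffle the groups as wholes."""
    groups = np.array_split(np.arange(n_pieces), n_groups)
    order = rng.permutation(n_groups)
    return np.concatenate([groups[g] for g in order])

def longest_continuous_run(perm):
    """Length of the longest run of consecutive original indices, i.e. the LCC extent in one dimension."""
    best, cur = 1, 1
    for i in range(1, len(perm)):
        cur = cur + 1 if perm[i] == perm[i - 1] + 1 else 1
        best = max(best, cur)
    return best

rng = np.random.default_rng(0)
perm_t = grouped_permutation(16, 3, rng)   # temporal pieces
perm_h = grouped_permutation(28, 3, rng)   # height pieces
perm_w = grouped_permutation(28, 3, rng)   # width pieces
# The three permutations are applied sequentially, e.g. clip[perm_t][:, perm_h][:, :, perm_w].
```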
3.3 SURROGATE TASKS
Having permutation constraints preserves more spatiotemporal continuity in the shuffled clip and reduces the amount of possible permutations. But exploiting these constraints to make a neural sorting model tractable is still far from trivial. Instead of solving the jigsaw directly, our PCSJ is thus formulated as four surrogate tasks: Largest Continuous Cuboid Detection (LCCD), Clip Shuffling Pattern Classification (CSPC), Contrastive Learning over Shuffled Clips (CLSC), and Clip Continuity Measure Regression (CCMR). As illustrated in Fig. 2(b), given an unlabeled clip x, we first construct a mini-batch of 8 clips {x̃1, x̃2, ..., x̃8} by shuffling x with different but related constrained permutations (to be detailed later). These shuffled clips and the raw clip x are then fed into a 3D CNN model f(·) for spatiotemporal representation learning with a non-local operation (Wang et al., 2018):
fNL(x̃i) = NL(f(x̃i), f(x)), (1)
where NL(·, ·) denotes the non-local operator, and f(x̃i) and f(x) denote the feature maps of x̃i and x from the last convolutional layer of f(·), respectively. The resultant feature map fNL(x̃i) is further passed through a spatial pooling layer followed by a separate fully-connected layer for
each surrogate task. Note that the raw video feature map f(x) is used as guidance through the nonlocal based attention mechanism to help fulfill the tasks. This is similar to humans needing to see the completed jigsaw picture to help solve the puzzle.
Before we detail the four tasks, we first explain how the eight permutations from the same raw clip are generated. First, the factorized and grouped permutations are applied to x to create one shuffled clip. By examining the largest and the second-largest continuous runs of piece indices along each dimension ({T, H, W}), we can easily identify the top-2 largest continuous cuboids (LCCs). Next, by varying the relative order of the top-2 LCCs, either in the correct (original) order or the reverse order in each dimension, 2×2×2 = 8 permutations are obtained. By controlling the group size in the permutation, we can make sure that the top-2 LCCs account for a large proportion, say 80%, of the total clip volume. Our four tasks are thus centered around these two LCCs as they largely determine the overall spatiotemporal continuity of the shuffled clip.
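An illustrative sketch (with assumed helper names) of how the eight related shuffled clips could be enumerated: per dimension, the top-2 LCC groups keep either their original or reversed relative order, giving 2 x 2 x 2 = 8 variants whose index doubles as the CSPC class label.

```python
import itertools

def swap_segments(seq, r1, r2):
    """Swap two non-overlapping segments r1=(s1, e1) and r2=(s2, e2) of a list, s1 <= e1 < s2 <= e2."""
    (s1, e1), (s2, e2) = r1, r2
    return seq[:s1] + seq[s2:e2 + 1] + seq[e1 + 1:s2] + seq[s1:e1 + 1] + seq[e2 + 1:]

def eight_variants(perms, lcc_runs):
    """perms: {dim: piece permutation (list)}; lcc_runs: {dim: (range of LCC-1, range of LCC-2)}."""
    variants = []
    for bits in itertools.product([0, 1], repeat=3):                    # one reverse-bit per dimension
        v = {d: (swap_segments(perms[d], *lcc_runs[d]) if b else list(perms[d]))
             for d, b in zip(("T", "H", "W"), bits)}
        variants.append((bits, v))                                      # bits encode the class (0-7)
    return variants

# e.g. swap_segments([8, 4, 5, 6, 7, 1, 2, 3], (1, 4), (5, 7)) -> [8, 1, 2, 3, 4, 5, 6, 7]
```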
The first task, LCCD, is to locate the top-2 LCCs {c^cont_max(j) : j = 1, 2} and is formulated as a regression problem. Given a ground-truth LCC c^cont_max(j), a Gaussian kernel is applied to its center to depict the possibility of each pixel in x̃ belonging to the LCC. This leads to a soft mask M^j_LCCD with the same size as x̃: M^j_LCCD is 0 everywhere outside the region of c^cont_max(j), and equals exp(−||a − a_c||² / (2σ_g²)) inside the region, where a and a_c denote any pixel and the center point, respectively. σ_g is a hyper-parameter which is set to 1 empirically. In the training stage, FPN (Lin et al., 2017) is used for multi-level feature fusion. LCCD is optimized using the MSE loss at each point:
L_LCCD = Σ_{j∈{1,2}} Σ_{a∈x̃} MSE(M^j_LCCD(a), M^j_LCCD(a)′),   (2)
where MSE(·, ·) denotes the MSE loss function, and M^j_LCCD(a)′ is the prediction for each pixel a.
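A minimal sketch of the LCCD regression target and loss in Eq. (2) follows; the (T, H, W) tensor layout, argument names, and the per-mask averaging are assumptions made for illustration.

```python
import numpy as np

def gaussian_mask(shape, t_rng, h_rng, w_rng, center, sigma_g=1.0):
    """Soft mask: zero outside the LCC region, Gaussian around the LCC center inside it."""
    mask = np.zeros(shape, dtype=np.float32)
    t, h, w = np.meshgrid(np.arange(t_rng[0], t_rng[1]),
                          np.arange(h_rng[0], h_rng[1]),
                          np.arange(w_rng[0], w_rng[1]), indexing="ij")
    dist2 = (t - center[0]) ** 2 + (h - center[1]) ** 2 + (w - center[2]) ** 2
    mask[t_rng[0]:t_rng[1], h_rng[0]:h_rng[1], w_rng[0]:w_rng[1]] = np.exp(-dist2 / (2.0 * sigma_g ** 2))
    return mask

def lccd_loss(pred_masks, gt_masks):
    """Sum over the two LCCs (j = 1, 2) of the mean squared error over all voxels."""
    return sum(np.mean((p - g) ** 2) for p, g in zip(pred_masks, gt_masks))
```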
CSPC is designed to recognize the shuffling pattern of a shuffled clip. As mentioned earlier, the eight shuffled clips in each mini-batch are created from the same raw clip and differ only in the relative order of the top-2 LCCs along each of the three dimensions. There are thus eight permutations depending on the order (correct or reverse) in each dimension. Based on this understanding, CSPC is formulated as a multi-class classification task that recognizes each shuffled clip as one of these eight classes, which is optimized using the Cross-Entropy (CE) loss:
L_CSPC = Σ_{i∈{0,1,...,7}} CE(l_CSPC[i], l′_CSPC[i]),   (3)
where CE(·, ·) denotes the CE loss function and l′_CSPC[i] is the predicted class label of the i-th sample (shuffled clip) in each mini-batch.
The two tasks above emphasize local spatiotemporal continuity understanding. In contrast, CLSC leverages the contrastive loss to encourage global continuity understanding. In particular, since the top-2 LCCs dominate the volume of a clip, it is safe to assume that if their relative order is correct in all three dimensions, the shuffled clip largely preserves continuity compared to the original clip, while all other 7 permutations feature large discontinuity in at least one dimension. We thus form a contrastive learning task with the original video x and the most continuous shuffled video x̃i as a positive pair, and x and the rest x̃j (j ≠ i) as negative pairs. CLSC is optimized using the Noise Contrastive Estimation (NCE) (Tian et al., 2020) loss:
L_CLSC = −log [ exp(sim(f(x), f(x̃_i))/τ) / ( exp(sim(f(x), f(x̃_i))/τ) + Σ_j exp(sim(f(x), f(x̃_j))/τ) ) ],   (4)
where sim(·, ·) is defined by the dot product f(x)⊤f(x̃_i), and τ is the temperature hyper-parameter. Note that the non-local operator is not used in CLSC.
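A sketch of the contrastive objective in Eq. (4) with plain PyTorch tensor ops; feature shapes and names (one positive and several negative shuffled clips per raw clip) are assumptions.

```python
import torch

def clsc_loss(f_x, f_pos, f_negs, tau=0.07):
    """f_x, f_pos: (D,) features of the raw clip and the most continuous shuffled clip;
    f_negs: (N, D) features of the remaining shuffled clips."""
    pos = torch.exp(torch.dot(f_x, f_pos) / tau)
    negs = torch.exp(f_negs @ f_x / tau).sum()
    return -torch.log(pos / (pos + negs))
```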
CCMR is similar to CLSC in that it also enforces global continuity understanding, but differs in that it is a regression task aimed at predicting a global continuity measure. We consider two such measures. Since the total size of the top-2 LCCs {c^cont_max(j) : j = 1, 2} is a good indicator of how continuous a shuffled video clip is, the first measure l_ld directly measures the relative total size of the top-2 LCCs: l_ld = (v(c^cont_max(1)) + v(c^cont_max(2))) / v(x̃), where v(·) represents the volume of a clip/cuboid.
The second measure l^{t/h/w}_hd examines the shuffling degree of x̃ in each dimension, computed as the normalized hamming distance hamming(x̃) / (N_c(N_c − 1)/2), where hamming(·) denotes the hamming distance in each dimension between the original piece sequence and the permuted one, and N_c represents the number of pieces in each dimension, so that N_c(N_c − 1)/2 indicates the maximum possible hamming distance in that dimension. CCMR is optimized using the Mean Squared Error (MSE) loss:
L_CCMR = MSE([l_ld, l^t_hd, l^h_hd, l^w_hd], [l′_ld, l^t′_hd, l^h′_hd, l^w′_hd]),   (5)
where l′_ld, l^t′_hd, l^h′_hd, l^w′_hd are the predictions of the model.
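Illustrative computation of the two continuity measures regressed by CCMR; the permutation encoding and normalization follow the description above, with assumed variable names.

```python
import numpy as np

def volume_measure(v_lcc1, v_lcc2, v_clip):
    """l_ld: relative total volume of the top-2 LCCs."""
    return (v_lcc1 + v_lcc2) / v_clip

def hamming_measure(perm):
    """Normalized hamming distance of one dimension's piece permutation against the identity."""
    perm = np.asarray(perm)
    n_c = len(perm)
    return np.sum(perm != np.arange(n_c)) / (n_c * (n_c - 1) / 2.0)
```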
3.4 OVERALL LEARNING OBJECTIVE
Our entire CSJ framework is optimized end-to-end with the learning objective defined as:
L = σ1 L_LCCD + σ2 L_CSPC + σ3 L_CLSC + σ4 L_CCMR,   (6)
where σ1, σ2, σ3, σ4 denote the weights for the four losses. We deploy the adaptive weighting mechanism (Kendall et al., 2018) to weight these tasks, and thus there are no free hyper-parameters to tune. We also adopt curriculum learning (Bengio et al., 2009; Korbar et al., 2018) to train our network by shuffling clips from easy to hard. More details are presented in Appendices A.1 and A.2.
4 EXPERIMENTS
4.1 DATASETS AND SETTINGS
We select three benchmark datasets for performance evaluation: UCF101 (Soomro et al., 2012), HMDB51 (Kuehne et al., 2011), and Kinetics-400 (K400) (Kay et al., 2017), containing 13K/7K/306K video clips from 101/51/400 action classes, respectively. In the self-supervised pretraining stage, we utilize the first training split of UCF101/HMDB51 and the training split of K400 without using their labels. As in Han et al. (2020), we adopt R2D3D as the backbone network, which is modified from R3D (Hara et al., 2018) with fewer parameters. By fine-tuning the pre-trained model, we can evaluate the SSL performance on a downstream task (i.e., action classification). Following Han et al. (2019); He et al. (2020), two evaluation protocols are used: comparisons against state-of-the-arts follow the more popular fully fine-tuning evaluation protocol, but ablation analysis takes both the linear evaluation and fully fine-tuning protocols. For the experiments on supervised learning, we report top-1 accuracy on the first test split of UCF101/HMDB51 as the standard (Han et al., 2020). More details of the datasets are provided in Appendix B.
4.2 IMPLEMENTATION DETAILS
Raw videos in these datasets are decoded at a frame rate of 24-30 fps. From each raw video, we start from a randomly selected frame index and sample a consecutive 16-frame video clip with a temporal stride of 4. For data augmentation, we first resize the video frames to 128×171 pixels, from which we extract random crops of size 112×112 pixels. We also apply random horizontal flipping and random color jittering to the video frames during training. We exploit only the raw RGB video frames as input, and do not leverage optical flow or other auxiliary signals for self-supervised pretraining. We adopt the Adam optimizer with a weight decay of 10−3 and a batch size of 8 per GPU (with a total of 32 GPUs). We deploy cosine annealing learning rate with an initial value of 10−4 and 100 epochs. The jigsaw puzzle piece sizes of {T,H,W} dimensions are set as 1, 4, 4, respectively. A 16×112×112 video clip thus contains 16×28×28 pieces. We set the temperature hyper-parameter τ to 0.07. A dropout of 0.5 is applied to the final layer of each task. More implementation details of the fine-tuning and test evaluation stages can be found in Appendix B.
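A sketch of the clip sampling and cropping pipeline described above (16 frames, temporal stride 4, 112x112 random crops with horizontal flips); the frame container and the assumption that the video has at least 64 decoded frames are illustrative.

```python
import random
import numpy as np

def sample_training_clip(frames, clip_len=16, stride=4, crop=112):
    """frames: np.ndarray of shape (N, 128, 171, 3), already resized; returns one training clip."""
    start = random.randrange(max(1, len(frames) - clip_len * stride))
    clip = frames[start:start + clip_len * stride:stride]            # (16, 128, 171, 3)
    y0 = random.randrange(clip.shape[1] - crop + 1)
    x0 = random.randrange(clip.shape[2] - crop + 1)
    clip = clip[:, y0:y0 + crop, x0:x0 + crop]                       # (16, 112, 112, 3)
    if random.random() < 0.5:                                        # random horizontal flip
        clip = clip[:, :, ::-1]
    return np.ascontiguousarray(clip)
```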
4.3 MAIN RESULTS
Comparison in Action Recognition A standard way to evaluate a self-supervised video representation learning model is to use it to initialize an action recognition model on a small dataset. Specifically, after self-supervised pre-training on UCF101/HMDB51/K400, we exploit the learned backbone for fully fine-tuning on UCF101 and HMDB51, following Han et al. (2020); Wang et al. (2020).
We consider one baseline: fully-supervised learning with pre-training on K400. Note that this baseline is commonly regarded as the upper bound of self-supervised representation learning (Alwassel et al., 2019). From Table 1, we have the following observations: (1) Our CSJ achieves state-of-the-art performance on both UCF101 and HMDB51. Particularly, with the backbone R2D3D-18 that is weaker than R(2+1)D-18, our CSJ performs comparably w.r.t. Pace on UCF101 but achieves a 10% improvement over Pace on HMDB51. (2) By exploiting spatiotemporal transformations for self-supervised representation learning, our CSJ beats both methods with only temporal transformations (†) and methods with both spatial and temporal transformations (‡), as well as those learning spatiotemporal representations (∗) via contrastive learning alone (without spatiotemporal transformations). (3) Our CSJ also outperforms CBT (Sun et al., 2019), which used roughly ten times more data (K600 (Carreira et al., 2018) + Howto100M (Miech et al., 2019)) and multiple modalities (RGB+Audio). (4) Our CSJ is the closest to the fully-supervised one (upper bound), validating its effectiveness in self-supervised video representation learning.
Comparison in Video Retrieval We evaluate our CSJ method on the video retrieval task. Following Xu et al. (2019), we extract each video clip's embedding with the pre-trained model and use each clip in the test set to query the k nearest clips in the training set. The comparative results in Table 2 show that our method outperforms all other self-supervised methods and achieves a new state-of-the-art in video retrieval on UCF101. Particularly, our method beats the latest competitor
PRP (Yao et al., 2020) on four out of five metrics. This indicates that our proposed CSJ is also effective for video representation learning in video retrieval.
4.4 FURTHER EVALUATIONS
Ablation Study We conduct ablative experiments to validate the effectiveness of the four CSJ surrogate tasks and two additional learning strategies. From Table 3, we can observe that: (1) Self-supervised learning with each of the four tasks shows better generalization than fine-tuning the network from scratch (random initialization). (2) By training over all four tasks jointly, we can achieve large performance gains (see ‘+LCCD’ vs. ‘CCMR’). (3) Each additional learning strategy (i.e., adaptive weighting or curriculum learning) leads to a small performance boost of 0.3-0.5%. (4) Our full model achieves a remarkable classification accuracy of 70.4%, demonstrating the effectiveness of our proposed CSJ with only the RGB video stream (without additional optical flow, audio, or text modalities). More ablative analysis can be found in Appendix D.
Visualization of Attention Maps Fig. 3 visualizes the attention map of the last feature maps from two models fine-tuned on UCF101 with or without adopting our self-supervised pre-training. Since each frame’s attention map involves four adjacent frames, it actually contains spatiotemporal semantic features. We can see that our self-supervised pre-training with CSJ indeed helps to better capture meaningful spatiotemporal information and thus recognize the action categories more correctly.
Visualization of LCCD Predictions We also demonstrate the visualization of the LCCD predictions from the pre-trained models in Fig. 4. We can observe that solving the LCCD task indeed enables the model to learn the locations of LCCs and understand spatiotemporal continuity, which is a key step towards video content understanding.
5 CONCLUSION
We have introduced a novel self-supervised video representation learning method named Constrained Spatiotemporal Jigsaw (CSJ). By introducing constrained permutations, our proposed CSJ is the first to leverage spatiotemporal jigsaw in self-supervised video representation learning. We also propose four surrogate tasks based on our constrained spatiotemporal jigsaws. They are designed to encourage a video representation model to understand the spatiotemporal continuity, a key building block towards video content analysis. Extensive experiments were carried out to validate the effectiveness of each of the four CSJ tasks and also show that our approach achieves the state-of-the-art on two downstream tasks across various benchmarks.
A ADDITIONAL LEARNING STRATEGIES
A.1 ADAPTIVE WEIGHT
Formally, our CSJ has two continuous outputs y1, y4 from LCCD and CCMR, and two discrete outputs y2, y3 from CSPC and CLSC, modeled with Gaussian likelihoods and softmax likelihoods, respectively. The joint loss for these four tasks L(W, σ1, σ2, σ3, σ4) is:
L(W, σ1, σ2, σ3, σ4)
= −log [ N(y1; f^W(x), σ1²) · N(y4; f^W(x), σ4²) · softmax(y2 = c; f^W(x), σ2) · softmax(y3 = c; f^W(x), σ3) ]
= 1/(2σ1²) ||y1 − f^W(x)||² + log σ1 + 1/(2σ4²) ||y4 − f^W(x)||² + log σ4 − log p(y2 | f^W(x), σ2) − log p(y3 | f^W(x), σ3)
≈ 1/(2σ1²) L1(W) + 1/σ2² L2(W) + 1/σ3² L3(W) + 1/(2σ4²) L4(W) + log σ1 + log σ2 + log σ3 + log σ4,   (7)
where σ is the weight factor that can be automatically learned from the network, and the log likelihood for the output y is defined as:
log p(y = c | f^W(x), σ) = (1/σ²) f^W_c(x) − log Σ_{c′} exp((1/σ²) f^W_{c′}(x)).   (8)
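A minimal PyTorch-style sketch of the uncertainty-based weighting in Eqs. (7)-(8); treating log(sigma) as the learned parameter and the way the regression/classification scale factors are folded in are assumptions about the parameterization.

```python
import torch
import torch.nn as nn

class AdaptiveWeighting(nn.Module):
    def __init__(self, regression_task=(True, False, False, True)):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(len(regression_task)))  # log sigma_i, learned
        self.regression_task = regression_task

    def forward(self, losses):
        """losses: iterable of the 4 scalar task losses [L_LCCD, L_CSPC, L_CLSC, L_CCMR]."""
        total = 0.0
        for loss, log_s, is_reg in zip(losses, self.log_sigma, self.regression_task):
            scale = 0.5 if is_reg else 1.0                # 1/(2*sigma^2) vs 1/sigma^2
            total = total + scale * torch.exp(-2.0 * log_s) * loss + log_s
        return total
```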
A.2 CURRICULUM LEARNING
We adopt curriculum learning (Korbar et al., 2018) to train our network by shuffling clips from easy to hard. Let d be the shuffle degree of a shuffled clip x̃, representing the number of continuous cuboids in each dimension. We gradually increase d from 3 to 5 during the training phase to produce more permuted clips. Note that when the video content is ambiguous in one dimension, e.g., a static video clip inflated from an image, there is no temporal variance to learn the transformation. Kim et al. (2019); Noroozi & Favaro (2016) also mentioned this problem as similar-looking ambiguity. To solve this problem, we calculate the variance on each dimension and set a threshold. If the variance is lower than the threshold, we decrease d from 3 to 1 so that the pieces are not shuffled in the corresponding dimension.
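A sketch of the per-dimension variance check used to avoid shuffling ambiguous dimensions; the curriculum schedule, the exact variance computation, and the threshold value are assumed here for illustration.

```python
import numpy as np

def shuffle_degrees(clip, epoch, var_threshold=1e-3):
    """clip: np.ndarray of shape (T, H, W, C); returns the shuffle degree d per (T, H, W) axis."""
    base_d = min(3 + epoch // 30, 5)                         # curriculum: from easy (d=3) to hard (d=5)
    degrees = []
    for axis in range(3):
        reduce_axes = tuple(i for i in range(4) if i != axis)
        profile = clip.mean(axis=reduce_axes)                # 1-D intensity profile along this axis
        degrees.append(base_d if profile.var() > var_threshold else 1)
    return degrees
```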
B DATASETS AND IMPLEMENTATION
B.1 DETAILS OF DATASETS
UCF101 (Soomro et al., 2012) is a widely-used dataset in the action recognition task, which contains 13,320 videos with 101 action classes. The dataset is divided into three training/testing splits. In this paper, following prior works (Wang et al., 2020; Han et al., 2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.
HMDB51 (Kuehne et al., 2011) is a relatively small action recognition dataset, consisting of 6,766 videos with 51 categories. It is also divided into three training/testing splits. Following Wang et al. (2020); Han et al. (2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.
Kinetics-400 (K400) (Kay et al., 2017) is a very large action recognition dataset consisting of 400 human action classes and around 306k videos. In this work, we use the training split of K400 as the pre-training dataset.
B.2 IMPLEMENTATION DETAILS
In the fine-tuning stage, weights of convolutional layers are initialized with self-supervised pretraining, but weights of fully-connected layers are randomly initialized. The whole network is then trained with the cross-entropy loss. The pre-processing and training strategies are the same as in the
self-supervised pre-training stage, except that the total epochs are 300 and the initial learning rate is 10−3. We use a batch size of 64 per GPU and a total of 8 GPUs for fine-tuning.
We follow the standard evaluation protocol (Han et al., 2020) during inference and use ten-crop to take the same sequence length as training from the video. The predicted label of each video is calculated by averaging the softmax probabilities of all clips in the video.
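A sketch of the test-time aggregation described above: softmax probabilities of all sampled clips/crops from one video are averaged before taking the top-1 class (names are assumptions).

```python
import torch
import torch.nn.functional as F

def video_prediction(model, clips):
    """clips: tensor of shape (num_clips, C, T, H, W) from ten-crop sampling of a single video."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(clips), dim=1)   # (num_clips, num_classes)
    return probs.mean(dim=0).argmax().item()     # average over clips, then take the top-1 label
```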
C NETWORK ARCHITECTURE
We deploy the same network backbone R2D3D as Han et al. (2019; 2020), which is a 3D-ResNet (R3D) similar to Hara et al. (2018). The only difference between R2D3D and R3D lies in that: R2D3D keeps the first two residual blocks as 2D convolutional blocks while R3D uses 3D blocks. Therefore, the modified R2D3D has fewer parameters (only the last two blocks are 3D convolutions). We present the CNN structure of R2D3D in Table 4.
D ADDITIONAL ABLATION STUDIES
D.1 LCCD
Instead of predicting center points using the detection method, we also design a segmentation method, largest continuous cuboid segmentation (LCCS), to predict the location of the top-2 LCCs {c^cont_max(j) : j = 1, 2}. The difference between LCCD and LCCS is that LCCS is formulated as a segmentation task to discriminate whether a pixel is in the region of c^cont_max(j). Concretely, LCCS predicts a binary mask M^j_LCCS where only points in the region of c^cont_max(j) are set to 1, otherwise 0. As a result, LCCS is optimized using the Cross-Entropy (CE) loss at each point:
L_LCCS = Σ_{j∈{1,2}} Σ_{a∈x̃} CE(M^j_LCCS(a), M^j_LCCS(a)′),   (9)
where CE(·, ·) denotes the CE loss function, and M^j_LCCS(a)′ is the predicted class of pixel a.
We report the performance of four different designs of LCCD in Table 5: (1) LCCS: LCCS is used instead of LCCD. (2) LCCD+MLCCS: The Gaussian mask MLCCD is substituted by the binary mask MLCCS, but the LCCD task is optimized using the MSE loss. (3) LCCD + L1: The LCCD task is
optimized by the L1 loss. (4) LCCD + MSE: The LCCD task is optimized by the MSE loss. From Table 5, it can be seen that the segmentation task also helps self-supervised representation learning but doesn’t perform as well as LCCD. Also, under the three different settings of LCCD, the MSE loss with the Gaussian map performs the best.
D.2 CLSC
Table 6 above shows the accuracies obtained with different temperatures τ used in contrastive learning. We can observe that: (1) When τ is in the range 1 ∼ 0.07, the accuracy increases with smaller τ. (2) When τ is large (e.g., 1), the accuracy drops considerably. In this work, τ is set to 0.07.
D.3 CSPC
In addition to our CSPC with 8 pattern categories (see Sec. 3.3), we consider another two designs: (1) 2 Categories: the shuffled clip is discriminated by whether it has the same relative order of the top-2 LCCs as the raw clip. It is almost the same as CLSC but is optimized by the CE loss. (2) 4 Categories: the shuffled clip is discriminated by how it differs from the raw clip: non-difference, spatial-only difference, temporal-only difference, spatiotemporal difference. From Table 7, we can see that CSPC with 8 categories outperforms the other two designs. These results support our motivation for leveraging spatiotemporal transformations.
D.4 CCMR
We report the performance of three different designs of CCMR: (1) ld: the volume-based measure l_ld (see Sec. 3.3) is used as supervision, which only contains volume information. (2) hd: the hamming distances l^t_hd, l^h_hd, l^w_hd are used, which contain only the relative order information. (3) ld + hd: both ld and hd are used as supervision. From Table 8, we can see that: First, both ld and hd help the model to learn continuous characteristics during pre-training, and hd outperforms ld by a small margin. Second, our CCMR learns the best representation by combining ld and hd.
D.5 RESULTS OF DIRECTLY SOLVING CSJ
We also demonstrate the results of solving the CSJ task directly in Table 9. We randomly shuffle video clips into 4 × 4 × 4 jigsaw puzzles. To recognize the correct permutation, the model solves a (4! × 4! × 4!)-way classification task in the pre-training stage. We compare the CSJ task with the joint LCCD+CCMR task under the same setting for a fair comparison. Linear evaluation is adopted to show the effectiveness of different tasks. We can observe from the table that solving LCCD+CCMR jointly is more effective than solving CSJ directly.
E TEMPORAL ACTION SEGMENTATION
To show the effectiveness of our CSJ for solving new downstream tasks, we apply the pretrained model obtained by our CSJ to temporal action segmentation, which is more challenging than the
conventional action recognition and retrieval tasks. Specifically, we choose to compare our CSJ model with the latest competitor MemDPC (Han et al., 2020) on the Breakfast dataset (Kuehne et al., 2014). For a fair comparison, our CSJ model and the MemDPC model adopt the same R2D3D-34 backbone. Due to time constraints, from the original Breakfast dataset we only use a small subset of 200 long videos as the training set for fine-tuning, and select a few long videos for testing. For temporal action segmentation, we follow the overall framework of MS-TCN (Abu Farha & Gall, 2019), but change its backbone to R2D3D-34 pretrained by our CSJ or MemDPC.
We present the qualitative results on two test videos in Fig. 5. We can clearly observe that our CSJ outperforms MemDPC on both test videos. Particularly, the predictions of our CSJ are much closer to the ground truth, but MemDPC tends to produce unwanted segments for temporal action segmentation: it wrongly recognizes the segment (color in yellow) in the middle part of the first video as ‘Pour Milk’, and the segment (color in black) in the last part of the second video as ‘Stir Coffee’. In conclusion, as compared to the latest SSVRL method MemDPC, our CSJ can learn more robust features for temporal action segmentation due to its ‘true’ spatiotemporal jigsaw understanding. | 1. What is the main contribution of the paper regarding the 3D jigsaw puzzle problem?
2. What are the strengths of the proposed approach, particularly in terms of its ability to incorporate temporal and spatial information?
3. What are the weaknesses of the paper, especially regarding the experimental design and analysis?
4. How does the reviewer assess the clarity, quality, novelty, and significance of the paper's content?
5. Are there any specific suggestions or recommendations for improving the paper, such as conducting additional experiments or providing more detailed explanations? | Review | Review
Summary and contributions: The authors propose an approach for solving a constrained version of the 3D (space+time) jigsaw puzzle over a video, using four easier surrogate tasks. The four surrogate tasks are diverse, having different formulations: regression (LCCD, CCMR), classification (CSPC), and noise-contrastive (CLSC) problems. All of them are learned together, using a joint learning objective.
The authors show the value of the newly learned representations in the self-supervised scenario on two downstream tasks (video action recognition and video retrieval), achieving state-of-the-art results compared with recent methods.
Strengths:
The paper comes with a solution for expanding a classical self-supervised 2D problem formulation to 3D, proposing a tractable approach by breaking it into four tasks and imposing constraints through them.
A good amount of details on the method and on how the permutations and the surrogate tasks were chosen.
The fact that even though the permutations are heavily constrained, they proved to be useful.
The newly learned representations, that embed the temporal and spatial continuity aspects, achieve state-of-the-art results on two video tasks.
Weaknesses:
The temporal aspect is not sufficiently highlighted in the experiments:
An ablation study to better distinguish between the spatial and spatiotemporal representations, with both quantitative and qualitative experiments would strengthen the submission.
It would be interesting to see how much the quantity of temporal information is reflected in the performance (an ablation on the number of frames used)
Quality: The idea is simple but complex enough to generate valuable representations, having the temporal aspect integrated. The paper is technically sound.
Clarity: The paper is clearly written and easy to read and follow.
Novelty: The overall idea is not novel, but the way it is implemented and the proposed constraints are novel.
Significance of this work: This work is relevant for the field, it incrementally advances the current integration of the temporal and spatial aspects in video.
Typos: The 3rd vertical line in Table 3 should be shifted by 1 column.
ICLR | Title
Self-Supervised Video Representation Learning with Constrained Spatiotemporal Jigsaw
Abstract
This paper proposes a novel pretext task for self-supervised video representation learning by exploiting spatiotemporal continuity in videos. It is motivated by the fact that videos are spatiotemporal by nature and a representation learned to detect spatiotemporal continuity/discontinuity is thus beneficial for downstream video content analysis tasks. A natural choice of such a pretext task is to construct spatiotemporal (3D) jigsaw puzzles and learn to solve them. However, this task turns out to be intractable. We thus propose Constrained Spatiotemporal Jigsaw (CSJ) whereby the 3D jigsaws are formed in a constrained manner to ensure that large continuous spatiotemporal cuboids exist in a shuffled clip to provide sufficient cues for the model to reason about the continuity. With the constrained jigsaw puzzles, instead of solving them directly, which could still be extremely hard, we carefully design four surrogate tasks that are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Extensive experiments show that our CSJ achieves state-of-the-art on two downstream tasks across various benchmarks.
1 INTRODUCTION
Self-supervised learning (SSL) has achieved tremendous successes recently for static images (He et al., 2020; Chen et al., 2020) and shown to be able to outperform supervised learning on a wide range of downstream image understanding tasks. However, such successes have not yet been reproduced for videos. Since different SSL models differ mostly on the pretext tasks employed on the unlabeled training data, designing pretext tasks more suitable for videos is the current focus for self-supervised video representation learning (Han et al., 2020; Wang et al., 2020).
Videos are spatiotemporal data and spatiotemporal analysis is the key to many video content understanding tasks. A good video representation learned from the self-supervised pretext task should therefore capture discriminative information jointly along both spatial and temporal dimensions. It is thus somewhat counter-intuitive to note that most existing SSL pretext tasks for videos do not explicitly require joint spatiotemporal video understanding. For example, some spatial pretext tasks have been borrowed from images without any modification (Jing et al., 2018), ignoring the temporal dimension. On the other hand, many recent video-specific pretext tasks typically involve speed or temporal order prediction (Lee et al., 2017; Wei et al., 2018; Benaim et al., 2020; Wang et al., 2020), i.e., operating predominately along the temporal axis.
A natural choice for a spatiotemporal pretext task is to solve 3D jigsaw puzzles, whose 2D counterpart has been successfully used for images (Noroozi & Favaro, 2016). Indeed, solving 3D puzzles requires the learned model to understand spatiotemporal continuity, a key step towards video content understanding. However, directly solving a 3D puzzle turns out to be intractable: a puzzle of 3×3×3 pieces (the same size as a Rubik’s cube) can have 27! possible permutations. Video volume even in a short clip is much larger than that. Nevertheless, the latest neural sorting models (Paumard et al., 2020; Du et al., 2020) can only handle permutations a few orders of magnitude less, so offer no solution. This is hardly surprising because such a task is daunting even for humans: Most people would struggle with a standard Rubik’s cube, let alone a much larger one.
In this paper, we propose a novel Constrained Spatiotemporal Jigsaw (CSJ) pretext task for selfsupervised video representation learning. The key idea is to form 3D jigsaw puzzles in a constrained manner so that it becomes solvable. This is achieved by factorizing the permutations (shuffling)
into the three spatiotemporal dimensions and then applying them sequentially. This ensures that for a given video clip, large continuous spatiotemporal cuboids exist after the constrained shuffling to provide sufficient cues for the model to reason about spatiotemporal continuity (see Fig. 1(b)(c)). Such large continuous cuboids are also vital for human understanding of video, as revealed in neuroscience and visual studies (Stringer et al., 2006; Chen et al., 2019). Even with the constrained puzzles, solving them directly could still be extremely hard. Consequently, instead of directly solving the puzzles (i.e., recovering the permutation matrix so that each piece can be put back), four surrogate tasks are carefully designed. They are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Concretely, given a video clip shuffled with our constrained permutations, we make sure that the top-2 largest continuous cuboids (LCCs) dominate the clip volume. The level of continuity in the shuffled clip as a whole is thus determined mainly by the volumes of these LCCs and by whether they are in the right order (see Fig. 1(d)(e)), both spatially and temporally. Our surrogate tasks are thus designed to locate these LCCs and predict their order so that the model learned with these tasks can be sensitive to spatiotemporal continuity both locally and globally.
Our main contributions are three-fold: (1) We introduce a new pretext task for self-supervised video representation learning called Constrained Spatiotemporal Jigsaw (CSJ). To our best knowledge, this is the first work on self-supervised video representation learning that leverages spatiotemporal jigsaw understanding. (2) We propose a novel constrained shuffling method to construct easy 3D jigsaws containing large LCCs. Four surrogate tasks are then formulated in place of the original jigsaw-solving task. They are much more solvable yet remain effective in learning spatiotemporally discriminative representations. (3) Extensive experiments show that our approach achieves state-of-the-art results on two downstream tasks across various benchmarks.
2 RELATED WORK
Self-supervised Learning with Pretext Tasks Self-supervised learning (SSL) typically employs a pretext task to generate pseudo-labels for unlabeled data via some form of data transformation. According to the transformations used by the pretext task, existing SSL methods for video representation learning can be divided into three categories: (1) Spatial-Only Transformations: Derived from the original image domain (Gidaris et al., 2018), Jing et al. (2018) leveraged the spatial-only transformations for self-supervised video representation learning. (2) Temporal-Only Transformations: Misra et al. (2016); Fernando et al. (2017); Lee et al. (2017); Wei et al. (2018) obtained shuffled video frames with the temporal-only transformations and then distinguished whether the shuffled frames are in chronological order. Xu et al. (2019) chose to shuffle video clips instead of frames. Benaim et al. (2020); Yao et al. (2020); Jenni et al. (2020) exploited the speed transformation by determining whether one video clip is accelerated. (3) Spatiotemporal Transformations: There are only a few recent approaches (Ahsan et al., 2019; Kim et al., 2019) that leveraged both spatial and temporal transformations by permuting 3D spatiotemporal cuboids. However, due to the aforementioned
intractability of solving the spatiotemporal jigsaw puzzles, they only leveraged either temporal or spatial permutations as training signals, i.e., they exploited the two domains independently. Therefore, no true spatiotemporal permutations have been considered in Ahsan et al. (2019); Kim et al. (2019). In contrast, given that both spatial appearances and temporal relations are important cues for video representation learning, the focus of this work is on investigating how to exploit the spatial and temporal continuity jointly for self-supervised video representation learning. To that end, our Constrained Spatiotemporal Jigsaw (CSJ) presents the first spatiotemporal continuity-based pretext task for video SSL, thanks to a novel constrained 3D jigsaw and four surrogate tasks that reason about the continuity in the 3D jigsaw puzzles without solving them directly.
Self-supervised Learning with Contrastive Learning Contrastive learning is another self-supervised learning approach that has become increasingly popular in the image domain (Misra & Maaten, 2020; He et al., 2020; Chen et al., 2020). Recently, it has been incorporated into video SSL as well. Contrastive learning and transformation-based pretext tasks are orthogonal to each other and often combined, in that different transformed versions of a data sample form the positive set used in contrastive learning. In El-Nouby et al. (2019); Knights et al. (2020); Qian et al. (2020); Wang et al. (2020); Yang et al. (2020), the positive/negative samples were generated based on temporal transformations only. In contrast, some recent works (Han et al., 2019; 2020; Zhuang et al., 2020) leveraged features from the future frame embeddings or with the memory bank (Wu et al., 2018). They modeled spatiotemporal representations using only contrastive learning without transformations. Contrastive learning is also exploited in one of our surrogate pretext tasks. Different from existing works, we explore the spatiotemporal transformations in the form of CSJ and employ contrastive learning to distinguish different levels of spatiotemporal continuity in shuffled jigsaws. This enables us to learn more discriminative spatiotemporal representations.
3 CONSTRAINED SPATIOTEMPORAL JIGSAW
3.1 PROBLEM DEFINITION
The main goal of self-supervised video representation learning is to learn a video feature representation function f(·) without using any human annotations. A general approach to achieving this goal is to generate a supervisory signal y from an unlabeled video clip x and construct a pretext task P to predict y from f(x). The process of solving the pretext task P encourages f(·) to learn discriminative spatiotemporal representations.
The pretext task P is constructed typically by applying to a video clip a transformation function t(·;θ) parameterized by θ and then automatically deriving y from θ, e.g., y can be the type of the transformation. Based on this premise, P is defined as the prediction of y using the feature map of the transformed video clip f(x̃), i.e., P : f(x̃) → y, where x̃ = t(x;θ). For example, in Lee et al. (2017), t(·;θ) denotes a temporal transformation that permutes the four frames of video clip x in a temporal order θ, x̃ = t(x;θ) is the shuffled clip, and the pseudo-label y is defined as the permutation order θ (e.g., 1324, 4312, etc.). The pretext task P is then a classification problem of 24 categories because there are 4! = 24 possible orders.
3.2 CONSTRAINED PERMUTATIONS
Solving spatiotemporal video jigsaw puzzles seems to be an ideal pretext task for learning discriminative representation as it requires an understanding of spatiotemporal continuity. After shuffling the pixels in a video clip using a 3D permutation matrix, the pretext task is to recover the permutation matrix. However, as explained earlier, this task is intractable given even moderate video clip sizes. Our solution is to introduce constraints on the permutations. As a result, a new pretext task PCSJ based on Constrained Spatiotemporal Jigsaw (see Fig. 2(a)) is formulated, which is much easier to solve than a random/unconstrained jigsaw.
Specifically, our goal is to introduce constraints to the permutations so that the resultant shuffled video clip is guaranteed to have large continuous cuboids (see Fig. 2(a)). Similar to humans (Stringer et al., 2006), having large continuous cuboids is key for a model to understand a 3D jigsaw and therefore to have any chance to solve it. Formally, the size of a shuffled video clip x̃ is denoted as {T, H, W}, measuring its extent along the temporal, height, and width dimensions, respectively. A cuboid is defined as a crop of x̃: c = x̃_{t1:t2, h1:h2, w1:w2}, where t1, t2 ∈ {1, 2, . . . , T}, h1, h2 ∈ {1, 2, . . . , H}, w1, w2 ∈ {1, 2, . . . , W}. If all the jigsaw pieces (the smallest video clip unit, e.g., a pixel or a 3D pixel block) in c keep the same relative order as they were in x (before being shuffled), we call the cuboid c a continuous cuboid c^cont. The cuboid's volume equals (t2 − t1) × (h2 − h1) × (w2 − w1), and the largest continuous cuboid (LCC) c^cont_max is the c^cont with the largest volume. We introduce two permutation strategies to ensure that the volumes of LCCs are large in relation to the whole video clip volume after our shuffling transformation t(·; θ_CSJ). First, instead of shuffling x in the three spatiotemporal dimensions simultaneously, t(·; θ_CSJ) factorizes the permutations into the three spatiotemporal dimensions and then applies them sequentially (e.g., in the order T, W, H) and only once to generate shuffled clips. Note that the volume of the generated x̃ stays the same with different permutation orders (e.g., TWH and HTW). Second, we shuffle a group of jigsaw pieces together instead of each piece individually along each dimension. Taking spatial shuffling as an example, if there are 8 pieces per frame (along each of the two spatial dimensions), θ_CSJ could be represented as the permutation from {12345678} to {84567123}. The longest and the second-longest index ranges are: [2, 5] for coordinates {4567}, and [6, 8] for coordinates {123}. With these two permutation strategies, not only do we have large LCCs, but they are also guaranteed to have clearly separable boundaries (see Fig. 2(b)) with surrounding pieces due to the factorized and grouped permutation design. This means that they are easily detectable.
3.3 SURROGATE TASKS
Having permutation constraints preserves more spatiotemporal continuity in the shuffled clip and reduces the amount of possible permutations. But exploiting these constraints to make a neural sorting model tractable is still far from trivial. Instead of solving the jigsaw directly, our PCSJ is thus formulated as four surrogate tasks: Largest Continuous Cuboid Detection (LCCD), Clip Shuffling Pattern Classification (CSPC), Contrastive Learning over Shuffled Clips (CLSC), and Clip Continuity Measure Regression (CCMR). As illustrated in Fig. 2(b), given an unlabeled clip x, we first construct a mini-batch of 8 clips {x̃1, x̃2, ..., x̃8} by shuffling x with different but related constrained permutations (to be detailed later). These shuffled clips and the raw clip x are then fed into a 3D CNN model f(·) for spatiotemporal representation learning with a non-local operation (Wang et al., 2018):
fNL(x̃i) = NL(f(x̃i), f(x)), (1)
where NL(·, ·) denotes the non-local operator, and f(x̃i) and f(x) denote the feature maps of x̃i and x from the last convolutional layer of f(·), respectively. The resultant feature map fNL(x̃i) is further passed through a spatial pooling layer followed by a separate fully-connected layer for
each surrogate task. Note that the raw video feature map f(x) is used as guidance through the nonlocal based attention mechanism to help fulfill the tasks. This is similar to humans needing to see the completed jigsaw picture to help solve the puzzle.
Before we detail the four tasks, we first explain how the eight permutations from the same raw clip are generated. First, the factorized and grouped permutations are applied to x to create one shuffled clip. By examining the largest and the second-largest continuous runs of piece indices along each dimension ({T, H, W}), we can easily identify the top-2 largest continuous cuboids (LCCs). Next, by varying the relative order of the top-2 LCCs, either in the correct (original) order or the reverse order in each dimension, 2×2×2 = 8 permutations are obtained. By controlling the group size in the permutation, we can make sure that the top-2 LCCs account for a large proportion, say 80%, of the total clip volume. Our four tasks are thus centered around these two LCCs as they largely determine the overall spatiotemporal continuity of the shuffled clip.
The first task, LCCD, is to locate the top-2 LCCs {c^cont_max(j) : j = 1, 2} and is formulated as a regression problem. Given a ground-truth LCC c^cont_max(j), a Gaussian kernel is applied to its center to depict the possibility of each pixel in x̃ belonging to the LCC. This leads to a soft mask M^j_LCCD with the same size as x̃: M^j_LCCD is 0 everywhere outside the region of c^cont_max(j), and equals exp(−||a − a_c||² / (2σ_g²)) inside the region, where a and a_c denote any pixel and the center point, respectively. σ_g is a hyper-parameter which is set to 1 empirically. In the training stage, FPN (Lin et al., 2017) is used for multi-level feature fusion. LCCD is optimized using the MSE loss at each point:
L_LCCD = Σ_{j∈{1,2}} Σ_{a∈x̃} MSE(M^j_LCCD(a), M^j_LCCD(a)′),   (2)
where MSE(·, ·) denotes the MSE loss function, and M^j_LCCD(a)′ is the prediction for each pixel a.
CSPC is designed to recognize the shuffling pattern of a shuffled clip. As mentioned earlier, the eight shuffled clips in each mini-batch are created from the same raw clip and differ only in the relative order of the top-2 LCCs along each of the three dimensions. There are thus eight permutations depending on the order (correct or reverse) in each dimension. Based on this understanding, CSPC is formulated as a multi-class classification task that recognizes each shuffled clip as one of these eight classes, which is optimized using the Cross-Entropy (CE) loss:
L_CSPC = Σ_{i∈{0,1,...,7}} CE(l_CSPC[i], l′_CSPC[i]),   (3)
where CE(·, ·) denotes the CE loss function and l′_CSPC[i] is the predicted class label of the i-th sample (shuffled clip) in each mini-batch.
The two tasks above emphasize local spatiotemporal continuity understanding. In contrast, CLSC leverages the contrastive loss to encourage global continuity understanding. In particular, since the top-2 LCCs dominate the volume of a clip, it is safe to assume that if their relative order is correct in all three dimensions, the shuffled clip largely preserves continuity compared to the original clip, while all other 7 permutations feature large discontinuity in at least one dimension. We thus form a contrastive learning task with the original video x and the most continuous shuffled video x̃i as a positive pair, and x and the rest x̃j (j ≠ i) as negative pairs. CLSC is optimized using the Noise Contrastive Estimation (NCE) (Tian et al., 2020) loss:
L_CLSC = −log [ exp(sim(f(x), f(x̃_i))/τ) / ( exp(sim(f(x), f(x̃_i))/τ) + Σ_j exp(sim(f(x), f(x̃_j))/τ) ) ],   (4)
where sim(·, ·) is defined by the dot product f(x)⊤f(x̃_i), and τ is the temperature hyper-parameter. Note that the non-local operator is not used in CLSC.
CCMR is similar to CLSC in that it also enforces global continuity understanding, but differs in that it is a regression task aimed at predicting a global continuity measure. We consider two such measures. Since the total size of the top-2 LCCs {c^cont_max(j) : j = 1, 2} is a good indicator of how continuous a shuffled video clip is, the first measure l_ld directly measures the relative total size of the top-2 LCCs: l_ld = (v(c^cont_max(1)) + v(c^cont_max(2))) / v(x̃), where v(·) represents the volume of a clip/cuboid.
The second measure l^{t/h/w}_hd examines the shuffling degree of x̃ in each dimension, computed as the normalized hamming distance hamming(x̃) / (N_c(N_c − 1)/2), where hamming(·) denotes the hamming distance in each dimension between the original piece sequence and the permuted one, and N_c represents the number of pieces in each dimension, so that N_c(N_c − 1)/2 indicates the maximum possible hamming distance in that dimension. CCMR is optimized using the Mean Squared Error (MSE) loss:
L_CCMR = MSE([l_ld, l^t_hd, l^h_hd, l^w_hd], [l′_ld, l^t′_hd, l^h′_hd, l^w′_hd]),   (5)
where l′_ld, l^t′_hd, l^h′_hd, l^w′_hd are the predictions of the model.
3.4 OVERALL LEARNING OBJECTIVE
Our entire CSJ framework is optimized end-to-end with the learning objective defined as:
L = σ1 L_LCCD + σ2 L_CSPC + σ3 L_CLSC + σ4 L_CCMR,   (6)
where σ1, σ2, σ3, σ4 denote the weights for the four losses. We deploy the adaptive weighting mechanism (Kendall et al., 2018) to weight these tasks, and thus there are no free hyper-parameters to tune. We also adopt curriculum learning (Bengio et al., 2009; Korbar et al., 2018) to train our network by shuffling clips from easy to hard. More details are presented in Appendices A.1 and A.2.
4 EXPERIMENTS
4.1 DATASETS AND SETTINGS
We select three benchmark datasets for performance evaluation: UCF101 (Soomro et al., 2012), HMDB51 (Kuehne et al., 2011), and Kinetics-400 (K400) (Kay et al., 2017), containing 13K/7K/306K video clips from 101/51/400 action classes, respectively. In the self-supervised pretraining stage, we utilize the first training split of UCF101/HMDB51 and the training split of K400 without using their labels. As in Han et al. (2020), we adopt R2D3D as the backbone network, which is modified from R3D (Hara et al., 2018) with fewer parameters. By fine-tuning the pre-trained model, we can evaluate the SSL performance on a downstream task (i.e., action classification). Following Han et al. (2019); He et al. (2020), two evaluation protocols are used: comparisons against state-of-the-arts follow the more popular fully fine-tuning evaluation protocol, but ablation analysis takes both the linear evaluation and fully fine-tuning protocols. For the experiments on supervised learning, we report top-1 accuracy on the first test split of UCF101/HMDB51 as the standard (Han et al., 2020). More details of the datasets are provided in Appendix B.
4.2 IMPLEMENTATION DETAILS
Raw videos in these datasets are decoded at a frame rate of 24-30 fps. From each raw video, we start from a randomly selected frame index and sample a consecutive 16-frame video clip with a temporal stride of 4. For data augmentation, we first resize the video frames to 128×171 pixels, from which we extract random crops of size 112×112 pixels. We also apply random horizontal flipping and random color jittering to the video frames during training. We exploit only the raw RGB video frames as input, and do not leverage optical flow or other auxiliary signals for self-supervised pretraining. We adopt the Adam optimizer with a weight decay of 10−3 and a batch size of 8 per GPU (with a total of 32 GPUs). We deploy cosine annealing learning rate with an initial value of 10−4 and 100 epochs. The jigsaw puzzle piece sizes of {T,H,W} dimensions are set as 1, 4, 4, respectively. A 16×112×112 video clip thus contains 16×28×28 pieces. We set the temperature hyper-parameter τ to 0.07. A dropout of 0.5 is applied to the final layer of each task. More implementation details of the fine-tuning and test evaluation stages can be found in Appendix B.
4.3 MAIN RESULTS
Comparison in Action Recognition A standard way to evaluate a self-supervised video representation learning model is to use it to initialize an action recognition model on a small dataset. Specifically, after self-supervised pre-training on UCF101/HMDB51/K400, we exploit the learned backbone for fully fine-tuning on UCF101 and HMDB51, following Han et al. (2020); Wang et al. (2020).
We consider one baseline: fully-supervised learning with pre-training on K400. Note that this baseline is commonly regarded as the upper bound of self-supervised representation learning (Alwassel et al., 2019). From Table 1, we have the following observations: (1) Our CSJ achieves state-of-the-art performance on both UCF101 and HMDB51. Particularly, with the backbone R2D3D-18 that is weaker than R(2+1)D-18, our CSJ performs comparably w.r.t. Pace on UCF101 but achieves a 10% improvement over Pace on HMDB51. (2) By exploiting spatiotemporal transformations for self-supervised representation learning, our CSJ beats both methods with only temporal transformations (†) and methods with both spatial and temporal transformations (‡), as well as those learning spatiotemporal representations (∗) via contrastive learning alone (without spatiotemporal transformations). (3) Our CSJ also outperforms CBT (Sun et al., 2019), which used roughly ten times more data (K600 (Carreira et al., 2018) + Howto100M (Miech et al., 2019)) and multiple modalities (RGB+Audio). (4) Our CSJ is the closest to the fully-supervised one (upper bound), validating its effectiveness in self-supervised video representation learning.
Comparison in Video Retrieval We evaluate our CSJ method on the video retrieval task. Following Xu et al. (2019), we extract each video clip's embedding with the pre-trained model and use each clip in the test set to query the k nearest clips in the training set. The comparative results in Table 2 show that our method outperforms all other self-supervised methods and achieves a new state-of-the-art in video retrieval on UCF101. Particularly, our method beats the latest competitor
PRP (Yao et al., 2020) on four out of five metrics. This indicates that our proposed CSJ is also effective for video representation learning in video retrieval.
4.4 FURTHER EVALUATIONS
Ablation Study We conduct ablative experiments to validate the effectiveness of the four CSJ surrogate tasks and two additional learning strategies. From Table 3, we can observe that: (1) Self-supervised learning with each of the four tasks shows better generalization than fine-tuning the network from scratch (random initialization). (2) By training over all four tasks jointly, we can achieve large performance gains (see ‘+LCCD’ vs. ‘CCMR’). (3) Each additional learning strategy (i.e., adaptive weighting or curriculum learning) leads to a small performance boost of 0.3-0.5%. (4) Our full model achieves a remarkable classification accuracy of 70.4%, demonstrating the effectiveness of our proposed CSJ with only the RGB video stream (without additional optical flow, audio, or text modalities). More ablative analysis can be found in Appendix D.
Visualization of Attention Maps Fig. 3 visualizes the attention map of the last feature maps from two models fine-tuned on UCF101 with or without adopting our self-supervised pre-training. Since each frame’s attention map involves four adjacent frames, it actually contains spatiotemporal semantic features. We can see that our self-supervised pre-training with CSJ indeed helps to better capture meaningful spatiotemporal information and thus recognize the action categories more correctly.
Visualization of LCCD Predictions We also demonstrate the visualization of the LCCD predictions from the pre-trained models in Fig. 4. We can observe that solving the LCCD task indeed enables the model to learn the locations of LCCs and understand spatiotemporal continuity, which is a key step towards video content understanding.
5 CONCLUSION
We have introduced a novel self-supervised video representation learning method named Constrained Spatiotemporal Jigsaw (CSJ). By introducing constrained permutations, our proposed CSJ is the first to leverage spatiotemporal jigsaw in self-supervised video representation learning. We also propose four surrogate tasks based on our constrained spatiotemporal jigsaws. They are designed to encourage a video representation model to understand the spatiotemporal continuity, a key building block towards video content analysis. Extensive experiments were carried out to validate the effectiveness of each of the four CSJ tasks and also show that our approach achieves the state-of-the-art on two downstream tasks across various benchmarks.
A ADDITIONAL LEARNING STRATEGIES
A.1 ADAPTIVE WEIGHT
Formally, our CSJ has two continuous outputs y1, y4 from LCCD and CCMR, and two discrete outputs y2, y3 from CSPC and CLSC, modeled with Gaussian likelihoods and softmax likelihoods, respectively. The joint loss for these four tasks L(W, σ1, σ2, σ3, σ4) is:
L(W, σ1, σ2, σ3, σ4)
= −log [ N(y1; f^W(x), σ1²) · N(y4; f^W(x), σ4²) · softmax(y2 = c; f^W(x), σ2) · softmax(y3 = c; f^W(x), σ3) ]
= 1/(2σ1²) ||y1 − f^W(x)||² + log σ1 + 1/(2σ4²) ||y4 − f^W(x)||² + log σ4 − log p(y2 | f^W(x), σ2) − log p(y3 | f^W(x), σ3)
≈ 1/(2σ1²) L1(W) + 1/σ2² L2(W) + 1/σ3² L3(W) + 1/(2σ4²) L4(W) + log σ1 + log σ2 + log σ3 + log σ4,   (7)
where σ is the weight factor that can be automatically learned from the network, and the log likelihood for the output y is defined as:
log p(y = c | f^W(x), σ) = (1/σ²) f^W_c(x) − log Σ_{c′} exp((1/σ²) f^W_{c′}(x)).   (8)
A.2 CURRICULUM LEARNING
We adopt curriculum learning (Korbar et al., 2018) to train our network by shuffling clips from easy to hard. Let d be the shuffle degree of a shuffled clip x̃, representing the number of continuous cuboids in each dimension. We gradually increase d from 3 to 5 during the training phase to produce more permuted clips. Note that when the video content is ambiguous in one dimension, e.g., a static video clip inflated from an image, there is no temporal variance to learn the transformation. Kim et al. (2019); Noroozi & Favaro (2016) also mentioned this problem as similar-looking ambiguity. To solve this problem, we calculate the variance on each dimension and set a threshold. If the variance is lower than the threshold, we decrease d from 3 to 1 so that the pieces are not shuffled in the corresponding dimension.
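A rough sketch of this easy-to-hard schedule is shown below. The linear epoch schedule and the variance threshold value are illustrative assumptions rather than the exact settings used in the paper, and the per-dimension variance is computed here from slice means for simplicity.

```python
import numpy as np

def shuffle_degree_for_epoch(epoch, total_epochs=100, d_min=3, d_max=5):
    """Curriculum: grow the shuffle degree d (number of continuous cuboids per
    dimension) from d_min to d_max over the course of training."""
    frac = epoch / max(total_epochs - 1, 1)
    return int(round(d_min + frac * (d_max - d_min)))

def per_dimension_degree(clip, d, var_threshold=1e-3):
    """Drop the shuffle degree to 1 along any dimension with near-zero
    variance (e.g. the temporal axis of a static clip), to avoid the
    similar-looking ambiguity. clip: float array of shape (T, H, W)."""
    degrees = []
    for axis in range(3):
        other = tuple(a for a in range(3) if a != axis)
        slice_means = clip.mean(axis=other)   # one mean per slice along `axis`
        degrees.append(1 if slice_means.var() < var_threshold else d)
    return degrees  # [d_T, d_H, d_W]

static_clip = np.ones((16, 28, 28))           # no temporal or spatial variation
print(shuffle_degree_for_epoch(50), per_dimension_degree(static_clip, d=4))
```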
B DATASETS AND IMPLEMENTATION
B.1 DETAILS OF DATASETS
UCF101 (Soomro et al., 2012) is a widely-used dataset in the action recognition task, which contains 13,320 videos with 101 action classes. The dataset is divided into three training/testing splits. In this paper, following prior works (Wang et al., 2020; Han et al., 2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.
HMDB51 (Kuehne et al., 2011) is a relatively small action recognition dataset, consisting of 6,766 videos with 51 categories. It is also divided into three training/testing splits. Following Wang et al. (2020); Han et al. (2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.
Kinetics-400 (K400) (Kay et al., 2017) is a very large action recognition dataset consisting of 400 human action classes and around 306k videos. In this work, we use the training split of K400 as the pre-training dataset.
B.2 IMPLEMENTATION DETAILS
In the fine-tuning stage, weights of convolutional layers are initialized with self-supervised pretraining, but weights of fully-connected layers are randomly initialized. The whole network is then trained with the cross-entropy loss. The pre-processing and training strategies are the same as in the
self-supervised pre-training stage, except that the total epochs are 300 and the initial learning rate is 10−3. We use a batch size of 64 per GPU and a total of 8 GPUs for fine-tuning.
We follow the standard evaluation protocol (Han et al., 2020) during inference and use ten-crop sampling to take clips of the same sequence length as in training from each video. The predicted label of each video is calculated by averaging the softmax probabilities of all clips in the video.
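The clip-level aggregation at test time can be sketched as follows; `model` is a placeholder for the fine-tuned network, and the dummy example below only illustrates the shapes involved, not the actual ten-crop sampler.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_video(model, clips):
    """Average softmax probabilities over all sampled clips/crops of one video.
    clips: tensor of shape (num_clips, C, T, H, W)."""
    model.eval()
    logits = model(clips)                          # (num_clips, num_classes)
    probs = F.softmax(logits, dim=1).mean(dim=0)   # video-level distribution
    return probs.argmax().item()

# Dummy stand-in network and ten clips of one video, to illustrate shapes only.
model = torch.nn.Sequential(torch.nn.AdaptiveAvgPool3d(1), torch.nn.Flatten(),
                            torch.nn.Linear(3, 101))
clips = torch.randn(10, 3, 16, 112, 112)
print(predict_video(model, clips))
```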
C NETWORK ARCHITECTURE
We deploy the same network backbone R2D3D as Han et al. (2019; 2020), which is a 3D-ResNet (R3D) similar to Hara et al. (2018). The only difference between R2D3D and R3D lies in that: R2D3D keeps the first two residual blocks as 2D convolutional blocks while R3D uses 3D blocks. Therefore, the modified R2D3D has fewer parameters (only the last two blocks are 3D convolutions). We present the CNN structure of R2D3D in Table 4.
D ADDITIONAL ABLATION STUDIES
D.1 LCCD
Instead of predicting center points with the detection formulation, we also design a segmentation method – largest continuous cuboid segmentation (LCCS) – to predict the location of the top-2 LCCs $\{c^{\mathrm{cont}}_{\max}(j): j=1,2\}$. The difference between LCCD and LCCS is that LCCS is formulated as a segmentation task that discriminates whether a pixel lies in the region of $c^{\mathrm{cont}}_{\max}(j)$. Concretely, LCCS predicts a binary mask $M^{j}_{\mathrm{LCCS}}$ where only points in the region of $c^{\mathrm{cont}}_{\max}(j)$ are set to 1, and 0 otherwise. As a result, LCCS is optimized using the Cross Entropy (CE) loss at each point:
$$\mathcal{L}_{\mathrm{LCCS}} = \sum_{j\in\{1,2\}} \sum_{a\in\tilde{x}} \mathrm{CE}\big(M^{j}_{\mathrm{LCCS}}(a),\, M^{j}_{\mathrm{LCCS}}(a)'\big), \tag{9}$$
where $\mathrm{CE}(\cdot, \cdot)$ denotes the CE loss function, and $M^{j}_{\mathrm{LCCS}}(a)'$ is the predicted class of pixel $a$.
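A possible implementation of the LCCS target and loss is sketched below, assuming the top-2 LCC boxes are known from the shuffling step. The per-pixel two-class CE is written here as a binary cross-entropy over logits, which is one common way to realise it; shapes and names are illustrative.

```python
import torch
import torch.nn.functional as F

def lccs_target(clip_shape, lcc_boxes):
    """Binary target masks for the top-2 LCCs. clip_shape = (T, H, W);
    lcc_boxes = [((t1, t2), (h1, h2), (w1, w2)), ...] for j = 1, 2."""
    masks = torch.zeros((len(lcc_boxes),) + clip_shape)
    for j, ((t1, t2), (h1, h2), (w1, w2)) in enumerate(lcc_boxes):
        masks[j, t1:t2, h1:h2, w1:w2] = 1.0
    return masks

def lccs_loss(pred_logits, target_masks):
    """Per-pixel binary cross-entropy between predicted masks and targets.
    pred_logits, target_masks: tensors of shape (2, T, H, W)."""
    return F.binary_cross_entropy_with_logits(pred_logits, target_masks)

targets = lccs_target((16, 28, 28),
                      [((0, 8), (0, 20), (0, 20)), ((8, 16), (4, 24), (4, 24))])
print(lccs_loss(torch.randn(2, 16, 28, 28), targets))
```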
We report the performance of four different designs of LCCD in Table 5: (1) LCCS: LCCS is used instead of LCCD. (2) LCCD + $M_{\mathrm{LCCS}}$: The Gaussian mask $M_{\mathrm{LCCD}}$ is substituted by the binary mask $M_{\mathrm{LCCS}}$, but the LCCD task is optimized using the MSE loss. (3) LCCD + L1: The LCCD task is
optimized by the L1 loss. (4) LCCD + MSE: The LCCD task is optimized by the MSE loss. From Table 5, it can be seen that the segmentation task also helps self-supervised representation learning but doesn’t perform as well as LCCD. Also, under the three different settings of LCCD, the MSE loss with the Gaussian map performs the best.
D.2 CLSC
Table 6 above shows the accuracies obtained with different temperatures τ used in contrastive learning. We can observe that: (1) When τ is in the range 1 ∼ 0.07, the accuracy increases as τ decreases. (2) When τ is large (e.g., 1), the accuracy drops considerably. In this work, τ is therefore set to 0.07.
D.3 CSPC
In addition to our CSPC with 8 pattern categories (see Sec. 3.3), we consider another two designs: (1) 2 Categories: the shuffled clip is discriminated by whether it has the same relative order of the top-2 LCCs as the raw clip. It is almost the same as CLSC but is optimized by the CE loss. (2) 4 Categories: the shuffled clip is discriminated by how it differs from the raw clip: non-difference, spatial-only difference, temporal-only difference, spatiotemporal difference. From Table 7, we can see that CSPC with 8 categories outperforms the other two designs. These results support our motivation for leveraging spatiotemporal transformations.
D.4 CCMR
We report the performance of three different designs of CCMR: (1) ld: only the measure $l_{ld}$ is used as supervision, which contains only volume information. (2) hd: the normalized hamming distances $l^{t}_{hd}, l^{h}_{hd}, l^{w}_{hd}$ are used, which contain only the relative order information. (3) ld + hd: both ld and hd are used as supervision. From Table 8, we can see that: First, both ld and hd help the model to learn continuity characteristics during pre-training, and hd outperforms ld by a small margin. Second, our CCMR learns the best representation by combining ld and hd.
D.5 RESULTS OF DIRECTLY SOLVING CSJ
We also demonstrate the results of solving the CSJ task directly in Table 9. We randomly shuffle video clips into 4 × 4 × 4 jigsaw puzzles. To recognize the correct permutation, the model solves a (4! × 4! × 4!)-way classification task in the pre-training stage. We compare the CSJ task with the joint LCCD+CCMR task under the same setting for a fair comparison. Linear evaluation is adopted to show the effectiveness of different tasks. We can observe from the table that solving LCCD+CCMR jointly is more effective than solving CSJ directly.
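For reference, the label space of this direct CSJ task can be sketched as follows: each dimension contributes one of 4! orderings, so a shuffled clip maps to one of 4!·4!·4! = 13,824 classes. The indexing scheme below is only an illustrative choice.

```python
from itertools import permutations
from math import factorial

PERMS_4 = list(permutations(range(4)))   # 4! = 24 possible orderings per dimension

def csj_class_index(perm_t, perm_h, perm_w):
    """Map a (temporal, height, width) permutation triple to a single class id
    in the (4! * 4! * 4!)-way direct-CSJ classification task."""
    it, ih, iw = (PERMS_4.index(tuple(p)) for p in (perm_t, perm_h, perm_w))
    return (it * 24 + ih) * 24 + iw

print(factorial(4) ** 3)                                  # 13824 classes in total
print(csj_class_index((1, 0, 3, 2), (0, 1, 2, 3), (3, 2, 1, 0)))
```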
E TEMPORAL ACTION SEGMENTATION
To show the effectiveness of our CSJ for solving new downstream tasks, we apply the pretrained model obtained by our CSJ to temporal action segmentation, which is more challenging than the
conventional action recognition and retrieval tasks. Specifically, we choose to compare our CSJ model with the latest competitor MemDPC (Han et al., 2020) on the Breakfast dataset (Kuehne et al., 2014). For a fair comparison, our CSJ model and the MemDPC model adopt the same R2D3D-34 backbone. Due to the time constraint, we only use a small subset of 200 long videos from the original Breakfast dataset as the training set for fine-tuning, and select a few long videos for testing. For temporal action segmentation, we follow the overall framework of MS-TCN (Abu Farha & Gall, 2019), but change its backbone to R2D3D-34 pretrained by our CSJ or MemDPC.
We present the qualitative results on two test videos in Fig. 5. We can clearly observe that our CSJ outperforms MemDPC on both test videos. Particularly, the predictions of our CSJ are much closer to the ground truth, but MemDPC tends to produce unwanted segments for temporal action segmentation: it wrongly recognizes the segment (color in yellow) in the middle part of the first video as ‘Pour Milk’, and the segment (color in black) in the last part of the second video as ‘Stir Coffee’. In conclusion, as compared to the latest SSVRL method MemDPC, our CSJ can learn more robust features for temporal action segmentation due to its ‘true’ spatiotemporal jigsaw understanding. | 1. What is the main contribution of the paper regarding spatiotemporal jigsaw puzzles?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the novelty and effectiveness of the surrogate tasks and training scheme?
4. Are there any concerns regarding the over-claiming of certain statements in the work?
5. How does the reviewer evaluate the performance of the method on downstream tasks, such as action recognition and video retrieval?
6. What are the suggestions for improving the paper, such as providing more complete tables and mitigating trivial learning?
7. How does the reviewer view the significance and impact of the paper in the context of self-supervised learning and video representation learning? | Review | Review
Summary This paper proposes a new way of formulating and solving "spatiotemporal jigsaw puzzles", as a self-supervised pretext task for learning useful video representations. Positive results on two downstream tasks, action recognition and video retrieval, are shown. The main contribution of the paper is a novel way of constraining the space of possible spatiotemporal permutations, in order to increase the tractability of the problem, and proposal of surrogate tasks that require learning and understanding of spatiotemporal continuity and correlations in order to solve them.
Strengths:
The pretext task of solving spatiotemporal jigsaw puzzles in order to learn meaningful video representations is well motivated, as has also been demonstrated in the literature.
Constraining the number of permutation to make the problem tractable indeed seems necessary, and the proposed method of doing so, along with the proposed surrogate tasks, are indeed effective, as indicated by the positive results on downstream tasks.
The ablation analysis performed shows a positive contribution of each of the described surrogate tasks and training scheme.
Weaknesses:
I found some of the major statements in this work to be over-claimed:
" To our best knowledge, this is the first work on self-supervised video representation learning that leverages spatiotemporal jigsaw understanding". As you mention in Sec. 2, Ahsan et al. (2019); Kim et al. (2019) both attempted to solve spatiotemporal jigsaw puzzles as a self-supervised pretexts task for learning spatiotemporal representation. This paper claims that the particular constraints imposed on the permutations used in those papers in order to increase the tractability of the problem, are not "true" spatiotemporal permutations. I believe this work at most relaxes some of those constraints, albeit in creative ways, but is not solving a fundamentally different problem.
Table 1, which demonstrates "state-of-the-art performance" on action recognition, is incomplete. Some stronger, not-included results of methods you did include in the table, and which, to the best of my knowledge, use only the RGB modality:
Pace | S3D-G | 87.1 | 52.6
SpeedNet | S3D-G | 81.1 | 48.8
for UCF101 (left) and HMDB51 (right).
For Table 2, which shows "new state-of-the-art in video retrieval", additional, stronger, "Pace" results exist :
Pace | C3D | 31.9 | 49.7 | 59.2 | 68.9 | 80.2
Pace | R(2+1)D | 25.6 | 42.7 | 51.3 | 61.3 | 74.0
for (L-R) top 1, 5, 10, 20, 50.
For the visualization (Sec. 4.4), it would be nice to see what the network attends to in order to solve the pretext task, before fine-tuning on UCF101 to solve action recognition.
Out of curiosity -- often in self-supervised learning the network tends to learn "artificial cues" (such as boundary or compression artifacts) which help it solve the pretext task, without really learning anything meaningful. Significant work is usually required to mitigate such trivial learning. Did you have a similar problem? I can't seem to find any documentation of such a phenomenon in your work.
In general, I find new SSL work especially interesting if it (A) enables solving new tasks that were unfeasible before, and/or (B) pushes the boundary of SSL results on interesting/important tasks. Since (A) is, according to my understanding outlined above, not accurately demonstrated, and (B) is not shown, I vote for rejecting this paper in its current form. I would gladly reconsider given stronger results or the demonstration of newly enabled tasks based on the proposed method.
Post-rebuttal
I'd like to thank the authors for addressing my comments. I've read through the other reviews and responses, as well as the revised paper. The presented method for learning "true spatiotemporal permutations" is novel, and does indeed seem to learn effective representations.
What I'm not entirely sure about is how much this method manages to push the boundary of SSL. Comparing methods with different backbones is indeed tricky, and my intention was definitely not to discourage SSL works from academia. But the burden of proof should be on the new method to perform as close to an apples-to-apples comparison (in terms of backbone) to existing methods as possible. In the end, there are many many potential pretext tasks for SSL of video representations, and I do feel that in order to be publishable at a top-tier venue, they should either enable new tasks, or show clear superiority over existing methods.
Regarding temporal action segmentation as a newly enabled task -- I honestly missed this section, since it's in the appendix. This should be moved to the main paper.
If I could, I would be borderline on this paper. But since I can't, I'll give the authors the benefit of the doubt, and raise my rating to 6 (marginally above). |
ICLR | Title
Self-Supervised Video Representation Learning with Constrained Spatiotemporal Jigsaw
Abstract
This paper proposes a novel pretext task for self-supervised video representation learning by exploiting spatiotemporal continuity in videos. It is motivated by the fact that videos are spatiotemporal by nature and a representation learned to detect spatiotemporal continuity/discontinuity is thus beneficial for downstream video content analysis tasks. A natural choice of such a pretext task is to construct spatiotemporal (3D) jigsaw puzzles and learn to solve them. However, this task turns out to be intractable. We thus propose Constrained Spatiotemporal Jigsaw (CSJ) whereby the 3D jigsaws are formed in a constrained manner to ensure that large continuous spatiotemporal cuboids exist in a shuffled clip to provide sufficient cues for the model to reason about the continuity. With the constrained jigsaw puzzles, instead of solving them directly, which could still be extremely hard, we carefully design four surrogate tasks that are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Extensive experiments show that our CSJ achieves state-of-the-art on two downstream tasks across various benchmarks.
1 INTRODUCTION
Self-supervised learning (SSL) has achieved tremendous successes recently for static images (He et al., 2020; Chen et al., 2020) and shown to be able to outperform supervised learning on a wide range of downstream image understanding tasks. However, such successes have not yet been reproduced for videos. Since different SSL models differ mostly on the pretext tasks employed on the unlabeled training data, designing pretext tasks more suitable for videos is the current focus for self-supervised video representation learning (Han et al., 2020; Wang et al., 2020).
Videos are spatiotemporal data and spatiotemporal analysis is the key to many video content understanding tasks. A good video representation learned from the self-supervised pretext task should therefore capture discriminative information jointly along both spatial and temporal dimensions. It is thus somewhat counter-intuitive to note that most existing SSL pretext tasks for videos do not explicitly require joint spatiotemporal video understanding. For example, some spatial pretext tasks have been borrowed from images without any modification (Jing et al., 2018), ignoring the temporal dimension. On the other hand, many recent video-specific pretext tasks typically involve speed or temporal order prediction (Lee et al., 2017; Wei et al., 2018; Benaim et al., 2020; Wang et al., 2020), i.e., operating predominately along the temporal axis.
A natural choice for a spatiotemporal pretext task is to solve 3D jigsaw puzzles, whose 2D counterpart has been successfully used for images (Noroozi & Favaro, 2016). Indeed, solving 3D puzzles requires the learned model to understand spatiotemporal continuity, a key step towards video content understanding. However, directly solving a 3D puzzle turns out to be intractable: a puzzle of 3×3×3 pieces (the same size as a Rubik’s cube) can have 27! possible permutations. Video volume even in a short clip is much larger than that. Nevertheless, the latest neural sorting models (Paumard et al., 2020; Du et al., 2020) can only handle permutations a few orders of magnitude less, so offer no solution. This is hardly surprising because such a task is daunting even for humans: Most people would struggle with a standard Rubik’s cube, let alone a much larger one.
In this paper, we propose a novel Constrained Spatiotemporal Jigsaw (CSJ) pretext task for selfsupervised video representation learning. The key idea is to form 3D jigsaw puzzles in a constrained manner so that it becomes solvable. This is achieved by factorizing the permutations (shuffling)
into the three spatiotemporal dimensions and then applying them sequentially. This ensures that for a given video clip, large continuous spatiotemporal cuboids exist after the constrained shuffling to provide sufficient cues for the model to reason about spatiotemporal continuity (see Fig. 1(b)(c)). Such large continuous cuboids are also vital for human understanding of video as revealed in neuroscience and visual studies (Stringer et al., 2006; Chen et al., 2019). Even with the constrained puzzles, solving them directly could still be extremely hard. Consequently, instead of directly solving the puzzles (i.e., recovering the permutation matrix so that each piece can be put back), four surrogate tasks are carefully designed. They are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Concretely, given a video clip shuffled with our constrained permutations, we make sure that the top-2 largest continuous cuboids (LCCs) dominate the clip volume. The level of continuity in the shuffle clip as a whole is thus determined mainly by the volumes of these LCCs, and whether they are at the right order (see Fig. 1(d)(e)) both spatially and temporally. Our surrogate tasks are thus designed to locate these LCCs and predict their order so that the model learned with these tasks can be sensitive to spatiotemporal continuity both locally and globally.
Our main contributions are three-fold: (1) We introduce a new pretext task for self-supervised video representation learning called Constrained Spatiotemporal Jigsaw (CSJ). To our best knowledge, this is the first work on self-supervised video representation learning that leverages spatiotemporal jigsaw understanding. (2) We propose a novel constrained shuffling method to construct easy 3D jigsaws containing large LCCs. Four surrogate tasks are then formulated in place of the original jigsaw solving tasks. They are much more solvable yet remain effective in learning spatiotemporal discriminative representations. (3) Extensive experiments show that our approach achieves state-ofthe-art on two downstream tasks across various benchmarks.
2 RELATED WORK
Self-supervised Learning with Pretext Tasks Self-supervised learning (SSL) typically employs a pretext task to generate pseudo-labels for unlabeled data via some forms of data transformation. According to the transformations used by the pretext task, existing SSL methods for video presentation learning can be divided into three categories: (1) Spatial-Only Transformations: Derived from the original image domain (Gidaris et al., 2018), Jing et al. (2018) leveraged the spatial-only transformations for self-supervised video presentation learning. (2) Temporal-Only Transformations: Misra et al. (2016); Fernando et al. (2017); Lee et al. (2017); Wei et al. (2018) obtained shuffled video frames with the temporal-only transformations and then distinguished whether the shuffled frames are in chronological order. Xu et al. (2019) chose to shuffle video clips instead of frames. Benaim et al. (2020); Yao et al. (2020); Jenni et al. (2020) exploited the speed transformation via determining whether one video clip is accelerated. (3) Spatiotemporal Transformations: There are only a few recent approaches (Ahsan et al., 2019; Kim et al., 2019) that leveraged both spatial and temporal transformations by permuting 3D spatiotemporal cuboids. However, due to the aforementioned
intractability of solving the spatiotemporal jigsaw puzzles, they only leveraged either temporal or spatial permutations as training signals, i.e., they exploited the two domains independently. Therefore, no true spatiotemporal permutations have been considered in Ahsan et al. (2019); Kim et al. (2019). In contrast, given that both spatial appearances and temporal relations are important cues for video representation learning, the focus of this work is on investigating how to exploit the spatial and temporal continuity jointly for self-supervised video presentation learning. To that end, our Constrained Spatiotemporal Jigsaw (CSJ) presents the first spatiotemporal continuity based pretext task for video SSL, thanks to a novel constrained 3D jigsaw and four surrogate tasks to reason about the continuity in the 3D jigsaw puzzles without solving them directly.
Self-supervised Learning with Contrastive Learning Contrastive learning is another selfsupervised learning approach that has become increasingly popular in the image domain (Misra & Maaten, 2020; He et al., 2020; Chen et al., 2020). Recently, it has been incorporated into video SSL as well. Contrastive learning and transformation based pretext tasks are orthogonal to each other and often combined in that different transformed versions of a data sample form the positive set used in contrastive learning. In El-Nouby et al. (2019); Knights et al. (2020); Qian et al. (2020); Wang et al. (2020); Yang et al. (2020), the positive/negative samples were generated based on temporal transformations only. In contrast, some recent works (Han et al., 2019; 2020; Zhuang et al., 2020) leveraged features from the future frame embeddings or with the memory bank (Wu et al., 2018). They modeled spatiotemporal representations using only contrastive learning without transformations. Contrastive learning is also exploited in one of our surrogate pretext tasks. Different from existing works, we explore the spatiotemporal transformations in the form of CSJ and employ contrastive learning to distinguish different levels of spatiotemporal continuity in shuffled jigsaws. This enables us to learn more discriminative spatiotemporal representations.
3 CONSTRAINED SPATIOTEMPORAL JIGSAW
3.1 PROBLEM DEFINITION
The main goal of self-supervised video representation learning is to learn a video feature representation function f(·) without using any human annotations. A general approach to achieving this goal is to generate a supervisory signal y from an unlabeled video clip x and construct a pretext task P to predict y from f(x). The process of solving the pretext task P encourages f(·) to learn discriminative spatiotemporal representations.
The pretext task P is constructed typically by applying to a video clip a transformation function t(·;θ) parameterized by θ and then automatically deriving y from θ, e.g., y can be the type of the transformation. Based on this premise, P is defined as the prediction of y using the feature map of the transformed video clip f(x̃), i.e., P : f(x̃) → y, where x̃ = t(x;θ). For example, in Lee et al. (2017), t(·;θ) denotes a temporal transformation that permutes the four frames of video clip x in a temporal order θ, x̃ = t(x;θ) is the shuffled clip, and the pseudo-label y is defined as the permutation order θ (e.g., 1324, 4312, etc.). The pretext task P is then a classification problem of 24 categories because there are 4! = 24 possible orders.
3.2 CONSTRAINED PERMUTATIONS
Solving spatiotemporal video jigsaw puzzles seems to be an ideal pretext task for learning discriminative representation as it requires an understanding of spatiotemporal continuity. After shuffling the pixels in a video clip using a 3D permutation matrix, the pretext task is to recover the permutation matrix. However, as explained earlier, this task is intractable given even moderate video clip sizes. Our solution is to introduce constraints on the permutations. As a result, a new pretext task PCSJ based on Constrained Spatiotemporal Jigsaw (see Fig. 2(a)) is formulated, which is much easier to solve than a random/unconstrained jigsaw.
Specifically, our goal is to introduce constraints to the permutations so that the resultant shuffled video clip is guaranteed to have large continuous cuboids (see Fig. 2(a)). Similar to humans (Stringer et al., 2006), having large continuous cuboids is key for a model to understand a 3D jigsaw and therefore to have any chance to solve it. Formally, the volume of a shuffled video clip x̃ are denoted as {T,H,W}, measuring its sizes along the temporal, height, and width dimensions, respectively. A cuboid is defined as a crop of x̃: c = x̃t1:t2,h1:h2,w1:w2 , where t1, t2 ∈ {1, 2, . . . , T}, h1, h2 ∈
{1, 2, . . . ,H}, w1, w2 ∈ {1, 2, . . . ,W}. If all the jigsaw pieces (smallest video clip unit, e.g. a pixel or a 3D pixel block) in c keep the same relative order as they were in x (before being shuffled), we call the cuboid c as a continuous cuboid ccont. The cuboid’s volume equals (t2 − t1)× (h2 − h1)× (w2 − w1), and the largest continuous cuboid (LCC) ccontmax is the ccont with the largest volume. We introduce two permutation strategies to ensure that the volumes of LCCs are large in relation to the whole video clip volume after our shuffling transformation t(·;θCSJ). First, instead of shuffling x in three spatiotemporal dimensions simultaneously, t(·;θCSJ) factorizes the permutations into the three spatiotemporal dimensions and then utilizes them sequentially to generate shuffled clips, e.g., in the order of T,W,H and only once. Note that the volume of the generated x̃ stays the same with different permutation orders (e.g., TWH and HTW ). Second, we shuffle a group of jigsaw pieces together instead of each piece individually along each dimension. Taking spatial shuffling as an example, if there are 8 pieces per frame (along each of the two spatial dimensions), θCSJ could be represented as the permutation from {12345678} to {84567123}. The longest and the secondlongest index ranges are: [2, 5] for coordinates {4567}, and [6, 8] for coordinates {123}. With these two permutation strategies, not only do we have large LCCs, but also they are guaranteed to have clearly separable boundaries (see Fig. 2(b)) with surrounding pieces due to the factorized and grouped permutation design. This means that they are easily detectable.
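The following is a simplified sketch of these two strategies, assuming a single-channel clip and a fixed number of contiguous groups per dimension; the group-splitting heuristic and all names are illustrative, not the paper's exact implementation.

```python
import numpy as np

def grouped_permutation(n_pieces, n_groups, rng):
    """Permute piece indices 0..n_pieces-1 by splitting them into contiguous
    groups and shuffling the groups, so that long continuous runs survive."""
    cuts = np.sort(rng.choice(np.arange(1, n_pieces), size=n_groups - 1, replace=False))
    groups = np.split(np.arange(n_pieces), cuts)
    order = rng.permutation(len(groups))
    return np.concatenate([groups[g] for g in order])

def constrained_shuffle(clip, piece_size=(1, 4, 4), n_groups=3, seed=0):
    """Factorised constrained jigsaw: shuffle grouped pieces along T, then H,
    then W (sequentially, once each), instead of permuting 3D pieces jointly.
    clip: array of shape (T, H, W), channel dimension omitted for brevity."""
    rng = np.random.default_rng(seed)
    out = clip.copy()
    for axis, ps in enumerate(piece_size):
        n_pieces = clip.shape[axis] // ps
        perm = grouped_permutation(n_pieces, n_groups, rng)
        # Expand the piece-level permutation to pixel indices along this axis.
        idx = np.concatenate([np.arange(p * ps, (p + 1) * ps) for p in perm])
        out = np.take(out, idx, axis=axis)
    return out

clip = np.arange(16 * 112 * 112).reshape(16, 112, 112)
print(constrained_shuffle(clip).shape)   # (16, 112, 112), contents shuffled
```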
3.3 SURROGATE TASKS
Having permutation constraints preserves more spatiotemporal continuity in the shuffled clip and reduces the amount of possible permutations. But exploiting these constraints to make a neural sorting model tractable is still far from trivial. Instead of solving the jigsaw directly, our PCSJ is thus formulated as four surrogate tasks: Largest Continuous Cuboid Detection (LCCD), Clip Shuffling Pattern Classification (CSPC), Contrastive Learning over Shuffled Clips (CLSC), and Clip Continuity Measure Regression (CCMR). As illustrated in Fig. 2(b), given an unlabeled clip x, we first construct a mini-batch of 8 clips {x̃1, x̃2, ..., x̃8} by shuffling x with different but related constrained permutations (to be detailed later). These shuffled clips and the raw clip x are then fed into a 3D CNN model f(·) for spatiotemporal representation learning with a non-local operation (Wang et al., 2018):
$$f_{\mathrm{NL}}(\tilde{x}_i) = \mathrm{NL}\big(f(\tilde{x}_i), f(x)\big), \tag{1}$$
where $\mathrm{NL}(\cdot, \cdot)$ denotes the non-local operator, and $f(\tilde{x}_i)$ and $f(x)$ denote the feature maps of $\tilde{x}_i$ and $x$ from the last convolutional layer of $f(\cdot)$, respectively. The resultant feature map $f_{\mathrm{NL}}(\tilde{x}_i)$ is further passed through a spatial pooling layer followed by a separate fully-connected layer for
each surrogate task. Note that the raw video feature map f(x) is used as guidance through the nonlocal based attention mechanism to help fulfill the tasks. This is similar to humans needing to see the completed jigsaw picture to help solve the puzzle.
Before we detail the four tasks, we first explain how the eight permutations from the same raw clip are generated. First, the factorized and grouped permutations are applied to x to create one shuffled clip. By examining the largest and the second-largest continuous puzzle piece numbers of each dimension ({T,H,W}), we can easily identify the top-2 largest continuous cuboids (LCCs). Next, by varying the relative order of the top-2 LCCs either in the correct (original) order or the reverse order in each dimension, 2×2×2=8 permutations are obtained. By controlling the group size in permutation, we can make sure that the top-2 LCCs account for a large proportion, saying 80% of the total clip volume. Our four tasks are thus centered around these two LCCs as they largely determine the overall spatiotemporal continuity of the shuffled clip.
The first task LCCD is to locate the top-2 LCCs $\{c^{\mathrm{cont}}_{\max}(j): j=1,2\}$ and is formulated as a regression problem. Given a ground-truth LCC $c^{\mathrm{cont}}_{\max}(j)$, a Gaussian kernel is applied to its center to depict the possibility of each pixel in $\tilde{x}$ belonging to the LCC. This leads to a soft mask $M^{j}_{\mathrm{LCCD}}$ with the same size as $\tilde{x}$: $M^{j}_{\mathrm{LCCD}}$ is 0 everywhere outside the region of $c^{\mathrm{cont}}_{\max}(j)$, and equals $\exp\!\big(-\frac{\lVert a - a_c\rVert^2}{2\sigma_g^2}\big)$ inside the region,
where $a$ and $a_c$ denote any pixel and the center point, respectively, and $\sigma_g$ is a hyper-parameter which is set to 1 empirically. In the training stage, FPN (Lin et al., 2017) is used for multi-level feature fusion. LCCD is optimized using the MSE loss at each point:
$$\mathcal{L}_{\mathrm{LCCD}} = \sum_{j\in\{1,2\}} \sum_{a\in\tilde{x}} \mathrm{MSE}\big(M^{j}_{\mathrm{LCCD}}(a),\, M^{j}_{\mathrm{LCCD}}(a)'\big), \tag{2}$$
where $\mathrm{MSE}(\cdot, \cdot)$ denotes the MSE loss function, and $M^{j}_{\mathrm{LCCD}}(a)'$ is the prediction at pixel $a$.
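A possible construction of the soft target mask $M^{j}_{\mathrm{LCCD}}$ is sketched below, assuming the LCC box is known from the shuffling step; shapes and names are illustrative.

```python
import numpy as np

def lccd_gaussian_target(clip_shape, lcc_box, sigma_g=1.0):
    """Soft regression target for one LCC: zero outside the LCC box, and
    exp(-||a - a_c||^2 / (2 sigma_g^2)) inside, where a_c is the box centre.
    clip_shape = (T, H, W); lcc_box = ((t1, t2), (h1, h2), (w1, w2))."""
    (t1, t2), (h1, h2), (w1, w2) = lcc_box
    target = np.zeros(clip_shape, dtype=np.float32)
    centre = np.array([(t1 + t2) / 2.0, (h1 + h2) / 2.0, (w1 + w2) / 2.0])
    tt, hh, ww = np.meshgrid(np.arange(t1, t2), np.arange(h1, h2),
                             np.arange(w1, w2), indexing="ij")
    dist2 = (tt - centre[0]) ** 2 + (hh - centre[1]) ** 2 + (ww - centre[2]) ** 2
    target[t1:t2, h1:h2, w1:w2] = np.exp(-dist2 / (2.0 * sigma_g ** 2))
    return target

mask = lccd_gaussian_target((16, 28, 28), ((2, 10), (0, 16), (4, 24)))
print(mask.shape, float(mask.max()))
```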
CSPC is designed to recognize the shuffling pattern of a shuffled clip. As mentioned early, the eight shuffled clips in each mini-batch are created from the same raw clip and differ only in the relative order of the top-2 LCCs along each of the three dimensions. There are thus eight permutations depending on the order (correct or reverse) in each dimension. Based on this understanding, CSPC is formulated as a multi-class classification task to recognize each shuffled clip into one of these eight classes, which is optimized using the Cross-Entropy (CE) loss:
$$\mathcal{L}_{\mathrm{CSPC}} = \sum_{i\in\{0,1,\dots,7\}} \mathrm{CE}\big(l_{\mathrm{CSPC}}[i],\, l'_{\mathrm{CSPC}}[i]\big), \tag{3}$$
where $\mathrm{CE}(\cdot, \cdot)$ denotes the CE loss function and $l'_{\mathrm{CSPC}}[i]$ is the predicted class label of the $i$-th sample (shuffled clip) in each mini-batch.
The two tasks above emphasize on local spatiotemporal continuity understanding. In contrast, CLSC leverages the contrastive loss to encourage global continuity understanding. In particular, since the top-2 LCCs dominate the volume of a clip, it is safe to assume that if their relative order is correct in all three dimensions, the shuffled clip largely preserve continuity compared to the original clip, while all other 7 permutations feature large discontinuity in at least one dimension. We thus form a contrastive learning task with the original video x and the most continuous shuffled video x̃i as a positive pair, and x and the rest x̃j (j 6= i) as negative pairs. CLSC is optimized using the Noise Contrastive Estimation (NCE) (Tian et al., 2020) loss:
$$\mathcal{L}_{\mathrm{CLSC}} = -\log \frac{\exp\big(\mathrm{sim}(f(x), f(\tilde{x}_i))/\tau\big)}{\exp\big(\mathrm{sim}(f(x), f(\tilde{x}_i))/\tau\big) + \sum_{j} \exp\big(\mathrm{sim}(f(x), f(\tilde{x}_j))/\tau\big)}, \tag{4}$$
where $\mathrm{sim}(\cdot, \cdot)$ is defined as the dot product $f(x)^{\top} f(\tilde{x}_i)$, and $\tau$ is the temperature hyper-parameter. Note that the non-local operator is not used in CLSC.
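A minimal sketch of this loss is given below, with the raw-clip feature as the anchor, the most continuous shuffled clip as the positive, and the remaining seven shuffled clips as negatives. Pooled 1-D features and τ = 0.07 are assumed, and the feature normalisation is added here only for numerical stability (our own assumption, not part of Eq. (4)).

```python
import torch
import torch.nn.functional as F

def clsc_nce_loss(anchor, positive, negatives, tau=0.07):
    """NCE loss of Eq. (4): anchor = feature of the raw clip, positive =
    feature of the most continuous shuffled clip, negatives = features of the
    other shuffled clips. All features are 1-D (pooled) vectors."""
    pos_sim = torch.exp(anchor @ positive / tau)
    neg_sim = torch.exp(negatives @ anchor / tau).sum()
    return -torch.log(pos_sim / (pos_sim + neg_sim))

anchor = F.normalize(torch.randn(128), dim=0)
positive = F.normalize(torch.randn(128), dim=0)
negatives = F.normalize(torch.randn(7, 128), dim=1)
print(clsc_nce_loss(anchor, positive, negatives))
```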
CCMR is similar to CLSC in that it also enforces global continuity understanding, but differs in that it is a regression task aimed at predicting a global continuity measure. We consider two such measures. Since the total size of the top-2 LCCs $\{c^{\mathrm{cont}}_{\max}(j): j=1,2\}$ is a good indicator of how continuous a shuffled video clip is, the first measure $l_{ld}$ directly measures the relative total size of the top-2 LCCs: $l_{ld} = \frac{v(c^{\mathrm{cont}}_{\max}(1)) + v(c^{\mathrm{cont}}_{\max}(2))}{v(\tilde{x})}$, where $v(\cdot)$ represents the volume of a clip/cuboid.
The second measure $l^{t/h/w}_{hd}$ examines the shuffling degree of $\tilde{x}$ in each dimension, computed as the normalized hamming distance $\frac{\mathrm{hamming}(\tilde{x})}{N_c(N_c-1)/2}$, where $\mathrm{hamming}(\cdot)$ denotes the hamming distance in each dimension between the original piece sequence and the permuted one, and $N_c$ represents the number of pieces in that dimension, so that $N_c(N_c-1)/2$ indicates the maximum possible hamming distance in the dimension. CCMR is optimized using the Mean Squared Error (MSE) loss:
$$\mathcal{L}_{\mathrm{CCMR}} = \mathrm{MSE}\big([\,l_{ld},\, l^{t}_{hd},\, l^{h}_{hd},\, l^{w}_{hd}\,],\ [\,l'_{ld},\, l^{t\prime}_{hd},\, l^{h\prime}_{hd},\, l^{w\prime}_{hd}\,]\big), \tag{5}$$
where $l'_{ld}, l^{t\prime}_{hd}, l^{h\prime}_{hd}, l^{w\prime}_{hd}$ are the predictions of the model.
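The two continuity measures can be sketched as follows. Since the normaliser $N_c(N_c-1)/2$ matches the number of piece pairs, we interpret the per-dimension "hamming distance" here as the number of out-of-order piece pairs; this interpretation, and all names below, are our own assumptions.

```python
import numpy as np

def normalized_disorder(perm):
    """Shuffling degree of one dimension: the number of out-of-order piece
    pairs in `perm`, normalised by the maximum value Nc*(Nc-1)/2."""
    perm = list(perm)
    n = len(perm)
    inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
    return inversions / (n * (n - 1) / 2.0)

def ccmr_targets(clip_shape, top2_lcc_volumes, perms):
    """Regression targets of Eq. (5): l_ld = relative volume of the top-2 LCCs,
    plus one normalised disorder value per dimension (T, H, W)."""
    l_ld = sum(top2_lcc_volumes) / float(np.prod(clip_shape))
    l_hd = [normalized_disorder(p) for p in perms]   # [l_hd^t, l_hd^h, l_hd^w]
    return [l_ld] + l_hd

targets = ccmr_targets((16, 28, 28), top2_lcc_volumes=(5000, 4000),
                       perms=[(0, 1, 3, 2), (1, 0, 2, 3), (0, 1, 2, 3)])
print(targets)
```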
3.4 OVERALL LEARNING OBJECTIVE
Our entire CSJ framework is optimized end-to-end with the learning objective defined as:
$$\mathcal{L} = \sigma_1 \mathcal{L}_{\mathrm{LCCD}} + \sigma_2 \mathcal{L}_{\mathrm{CSPC}} + \sigma_3 \mathcal{L}_{\mathrm{CLSC}} + \sigma_4 \mathcal{L}_{\mathrm{CCMR}}, \tag{6}$$
where $\sigma_1, \sigma_2, \sigma_3, \sigma_4$ denote the weights for the four losses. We deploy the adaptive weighting mechanism (Kendall et al., 2018) to weight these tasks, and thus there are no free hyper-parameters to tune. We also adopt curriculum learning (Bengio et al., 2009; Korbar et al., 2018) to train our network by shuffling clips from easy to hard. More details are presented in Appendices A.1 and A.2.
4 EXPERIMENTS
4.1 DATASETS AND SETTINGS
We select three benchmark datasets for performance evaluation: UCF101 (Soomro et al., 2012), HMDB51 (Kuehne et al., 2011), and Kinetics-400 (K400) (Kay et al., 2017), containing 13K/7K/306K video clips from 101/51/400 action classes, respectively. In the self-supervised pretraining stage, we utilize the first training split of UCF101/HMDB51 and the training split of K400 without using their labels. As in Han et al. (2020), we adopt R2D3D as the backbone network, which is modified from R3D (Hara et al., 2018) with fewer parameters. By fine-tuning the pre-trained model, we can evaluate the SSL performance on a downstream task (i.e., action classification). Following Han et al. (2019); He et al. (2020), two evaluation protocols are used: comparisons against state-of-the-arts follow the more popular fully fine-tuning evaluation protocol, but ablation analysis takes both the linear evaluation and fully fine-tuning protocols. For the experiments on supervised learning, we report top-1 accuracy on the first test split of UCF101/HMDB51 as the standard (Han et al., 2020). More details of the datasets are provided in Appendix B.
4.2 IMPLEMENTATION DETAILS
Raw videos in these datasets are decoded at a frame rate of 24-30 fps. From each raw video, we start from a randomly selected frame index and sample a consecutive 16-frame video clip with a temporal stride of 4. For data augmentation, we first resize the video frames to 128×171 pixels, from which we extract random crops of size 112×112 pixels. We also apply random horizontal flipping and random color jittering to the video frames during training. We exploit only the raw RGB video frames as input, and do not leverage optical flow or other auxiliary signals for self-supervised pretraining. We adopt the Adam optimizer with a weight decay of 10−3 and a batch size of 8 per GPU (with a total of 32 GPUs). We deploy cosine annealing learning rate with an initial value of 10−4 and 100 epochs. The jigsaw puzzle piece sizes of {T,H,W} dimensions are set as 1, 4, 4, respectively. A 16×112×112 video clip thus contains 16×28×28 pieces. We set the temperature hyper-parameter τ to 0.07. A dropout of 0.5 is applied to the final layer of each task. More implementation details of the fine-tuning and test evaluation stages can be found in Appendix B.
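For illustration, the clip sampling and cropping described above can be sketched as follows (frames are assumed to be already resized to 128×171, and all names are placeholders):

```python
import numpy as np

def sample_clip(video_frames, clip_len=16, stride=4, rng=None):
    """Sample a consecutive 16-frame clip with temporal stride 4, starting at a
    random frame index. video_frames: array of shape (num_frames, H, W, 3)."""
    rng = rng or np.random.default_rng()
    span = (clip_len - 1) * stride + 1
    start = rng.integers(0, max(len(video_frames) - span, 0) + 1)
    idx = start + stride * np.arange(clip_len)
    return video_frames[idx]

def random_crop(clip, size=112, rng=None):
    """Random spatial crop of size 112x112 from frames resized to 128x171."""
    rng = rng or np.random.default_rng()
    _, h, w, _ = clip.shape
    top, left = rng.integers(0, h - size + 1), rng.integers(0, w - size + 1)
    return clip[:, top:top + size, left:left + size, :]

video = np.zeros((300, 128, 171, 3), dtype=np.uint8)
clip = random_crop(sample_clip(video))
print(clip.shape)   # (16, 112, 112, 3)
```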
4.3 MAIN RESULTS
Comparison in Action Recognition A standard way to evaluate a self-supervised video representation learning model is to use it to initialize an action recognition model on a small dataset. Specifically, after self-supervised pre-training on UCF101/HMDB51/K400, we exploit the learned backbone for fully fine-tuning on UCF101 and HMDB51, following Han et al. (2020); Wang et al. (2020).
We consider one baseline: fully-supervised learning with pre-training on K400. Note that this baseline is commonly regarded as the upper bound of self-supervised representation learning (Alwassel et al., 2019). From Table 1, we have the following observations: (1) Our CSJ achieves state-of-theart performance on both UCF101 and HMDB51. Particularly, with the backbone R2D3D-18 that is weaker than R(2+1)D-18, our CSJ performs comparably w.r.t. Pace on UCF101 but achieves a 10% improvement over Pace on HMDB51. (2) By exploiting spatiotemporal transformations for self-supervised representation learning, our CSJ beats either methods with only temporal transformations (†) or methods with both spatial and temporal transformations (‡), as well as those learning spatiotemporal representations (∗) via only contrastive learning (w./o. spatiotemporal transformations). (3) Our CSJ also outperforms CBT (Sun et al., 2019), which used ten-times more massive datasets (K600 (Carreira et al., 2018) + Howto100M (Miech et al., 2019)) and multiple modalities (RGB+Audio). (4) Our CSJ is the closest to the fully-supervised one (upper bound), validating its effectiveness in self-supervised video representation learning.
Comparison in Video Retrieval We evaluate our CSJ method on the video retrieval task. Following Xu et al. (2019), we extract the embedding of each video clip with the pre-trained model and use each clip in the test set to query the k nearest clips in the training set. The comparative results in Table 2 show that our method outperforms all other self-supervised methods and achieves a new state-of-the-art in video retrieval on UCF101. In particular, our method beats the latest competitor
PRP (Yao et al., 2020) on four out of five metrics. This indicates that our proposed CSJ is also effective for video representation learning in video retrieval.
4.4 FURTHER EVALUATIONS
Ablation Study We conduct ablative experiments to validate the effectiveness of the four CSJ surrogate tasks and the two additional learning strategies. From Table 3, we can observe that: (1) Self-supervised learning with each of the four tasks shows better generalization than fine-tuning the network from scratch (random initialization). (2) By training over all four tasks jointly, we can achieve large performance gains (see ‘+LCCD’ vs. ‘CCMR’). (3) Each additional learning strategy (i.e., adaptive weighting or curriculum learning) leads to a further small boost in performance of 0.3–0.5%. (4) Our full model achieves a remarkable classification accuracy of 70.4%, demonstrating the effectiveness of our proposed CSJ with only the RGB video stream (without additional optical flow, audio, or text modalities). More ablative analysis can be found in Appendix D.
Visualization of Attention Maps Fig. 3 visualizes the attention map of the last feature maps from two models fine-tuned on UCF101 with or without adopting our self-supervised pre-training. Since each frame’s attention map involves four adjacent frames, it actually contains spatiotemporal semantic features. We can see that our self-supervised pre-training with CSJ indeed helps to better capture meaningful spatiotemporal information and thus recognize the action categories more correctly.
Visualization of LCCD Predictions We also demonstrate the visualization of the LCCD predictions from the pre-trained models in Fig. 4. We can observe that solving the LCCD task indeed enables the model to learn the locations of LCCs and understand spatiotemporal continuity, which is a key step towards video content understanding.
5 CONCLUSION
We have introduced a novel self-supervised video representation learning method named Constrained Spatiotemporal Jigsaw (CSJ). By introducing constrained permutations, our proposed CSJ is the first to leverage spatiotemporal jigsaw in self-supervised video representation learning. We also propose four surrogate tasks based on our constrained spatiotemporal jigsaws. They are designed to encourage a video representation model to understand the spatiotemporal continuity, a key building block towards video content analysis. Extensive experiments were carried out to validate the effectiveness of each of the four CSJ tasks and also show that our approach achieves the state-of-the-art on two downstream tasks across various benchmarks.
A ADDITIONAL LEARNING STRATEGIES
A.1 ADAPTIVE WEIGHT
Formally, our CSJ has two continuous outputs y1, y4 from LCCD and CCMR, and two discrete outputs y2, y3 from CSPC and CLSC, modeled with Gaussian likelihoods and softmax likelihoods, respectively. The joint loss for these four tasks L(W, σ1, σ2, σ3, σ4) is:
$$
\begin{aligned}
\mathcal{L}(\mathbf{W}, \sigma_1, \sigma_2, \sigma_3, \sigma_4)
&= -\log\big[\mathcal{N}(y_1; f^{\mathbf{W}}(x), \sigma_1^2)\cdot \mathcal{N}(y_4; f^{\mathbf{W}}(x), \sigma_4^2) \\
&\qquad\quad \cdot \operatorname{softmax}(y_2{=}c; f^{\mathbf{W}}(x), \sigma_2)\cdot \operatorname{softmax}(y_3{=}c; f^{\mathbf{W}}(x), \sigma_3)\big] \\
&= \frac{1}{2\sigma_1^2}\,\lVert y_1 - f^{\mathbf{W}}(x)\rVert^2 + \log\sigma_1 + \frac{1}{2\sigma_4^2}\,\lVert y_4 - f^{\mathbf{W}}(x)\rVert^2 + \log\sigma_4 \\
&\qquad - \log p(y_2 \mid f^{\mathbf{W}}(x), \sigma_2) - \log p(y_3 \mid f^{\mathbf{W}}(x), \sigma_3) \\
&\approx \frac{1}{2\sigma_1^2}\mathcal{L}_1(\mathbf{W}) + \frac{1}{\sigma_2^2}\mathcal{L}_2(\mathbf{W}) + \frac{1}{\sigma_3^2}\mathcal{L}_3(\mathbf{W}) + \frac{1}{2\sigma_4^2}\mathcal{L}_4(\mathbf{W}) \\
&\qquad + \log\sigma_1 + \log\sigma_2 + \log\sigma_3 + \log\sigma_4,
\end{aligned}
\tag{7}
$$
where σ is the weight factor that can be automatically learned from the network, and the log likelihood for the output y is defined as:
$$\log p(y=c \mid f^{\mathbf{W}}(x), \sigma) = \frac{1}{\sigma^2} f^{\mathbf{W}}_{c}(x) - \log \sum_{c'} \exp\!\Big(\frac{1}{\sigma^2} f^{\mathbf{W}}_{c'}(x)\Big). \tag{8}$$
A.2 CURRICULUM LEARNING
We adopt curriculum learning (Korbar et al., 2018) to train our network by shuffling clips from easy to hard. Let d be the shuffle degree of a shuffled clip x̃, representing the number of continuous cuboids in each dimension. We gradually increase d from 3 to 5 during the training phase to produce more permuted clips. Note that when the video content is ambiguous in one dimension, e.g., a static video clip inflated from an image, there is no temporal variance to learn the transformation. Kim et al. (2019); Noroozi & Favaro (2016) also mentioned this problem as similar-looking ambiguity. To solve this problem, we calculate the variance on each dimension and set a threshold. If the variance is lower than the threshold, we decrease d from 3 to 1 so that the pieces are not shuffled in the corresponding dimension.
B DATASETS AND IMPLEMENTATION
B.1 DETAILS OF DATASETS
UCF101 (Soomro et al., 2012) is a widely-used dataset in the action recognition task, which contains 13,320 videos with 101 action classes. The dataset is divided into three training/testing splits. In this paper, following prior works (Wang et al., 2020; Han et al., 2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.
HMDB51 (Kuehne et al., 2011) is a relatively small action recognition dataset, consisting of 6,766 videos with 51 categories. It is also divided into three training/testing splits. Following Wang et al. (2020); Han et al. (2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.
Kinetics-400 (K400) (Kay et al., 2017) is a very large action recognition dataset consisting of 400 human action classes and around 306k videos. In this work, we use the training split of K400 as the pre-training dataset.
B.2 IMPLEMENTATION DETAILS
In the fine-tuning stage, weights of convolutional layers are initialized with self-supervised pretraining, but weights of fully-connected layers are randomly initialized. The whole network is then trained with the cross-entropy loss. The pre-processing and training strategies are the same as in the
self-supervised pre-training stage, except that the total epochs are 300 and the initial learning rate is 10−3. We use a batch size of 64 per GPU and a total of 8 GPUs for fine-tuning.
We follow the standard evaluation protocol (Han et al., 2020) during inference and use ten-crop sampling to take clips of the same sequence length as in training from each video. The predicted label of each video is calculated by averaging the softmax probabilities of all clips in the video.
C NETWORK ARCHITECTURE
We deploy the same network backbone R2D3D as Han et al. (2019; 2020), which is a 3D-ResNet (R3D) similar to Hara et al. (2018). The only difference between R2D3D and R3D lies in that: R2D3D keeps the first two residual blocks as 2D convolutional blocks while R3D uses 3D blocks. Therefore, the modified R2D3D has fewer parameters (only the last two blocks are 3D convolutions). We present the CNN structure of R2D3D in Table 4.
D ADDITIONAL ABLATION STUDIES
D.1 LCCD
Instead of predicting center points with the detection formulation, we also design a segmentation method – largest continuous cuboid segmentation (LCCS) – to predict the location of the top-2 LCCs $\{c^{\mathrm{cont}}_{\max}(j): j=1,2\}$. The difference between LCCD and LCCS is that LCCS is formulated as a segmentation task that discriminates whether a pixel lies in the region of $c^{\mathrm{cont}}_{\max}(j)$. Concretely, LCCS predicts a binary mask $M^{j}_{\mathrm{LCCS}}$ where only points in the region of $c^{\mathrm{cont}}_{\max}(j)$ are set to 1, and 0 otherwise. As a result, LCCS is optimized using the Cross Entropy (CE) loss at each point:
$$\mathcal{L}_{\mathrm{LCCS}} = \sum_{j\in\{1,2\}} \sum_{a\in\tilde{x}} \mathrm{CE}\big(M^{j}_{\mathrm{LCCS}}(a),\, M^{j}_{\mathrm{LCCS}}(a)'\big), \tag{9}$$
where $\mathrm{CE}(\cdot, \cdot)$ denotes the CE loss function, and $M^{j}_{\mathrm{LCCS}}(a)'$ is the predicted class of pixel $a$.
We report the performance of four different designs of LCCD in Table 5: (1) LCCS: LCCS is used instead of LCCD. (2) LCCD + $M_{\mathrm{LCCS}}$: The Gaussian mask $M_{\mathrm{LCCD}}$ is substituted by the binary mask $M_{\mathrm{LCCS}}$, but the LCCD task is optimized using the MSE loss. (3) LCCD + L1: The LCCD task is
optimized by the L1 loss. (4) LCCD + MSE: The LCCD task is optimized by the MSE loss. From Table 5, it can be seen that the segmentation task also helps self-supervised representation learning but doesn’t perform as well as LCCD. Also, under the three different settings of LCCD, the MSE loss with the Gaussian map performs the best.
D.2 CLSC
Table 6 above shows the accuracies obtained with different temperatures τ used in contrastive learning. We can observe that: (1) When τ is in the range 1 ∼ 0.07, the accuracy increases as τ decreases. (2) When τ is large (e.g., 1), the accuracy drops considerably. In this work, τ is therefore set to 0.07.
D.3 CSPC
In addition to our CSPC with 8 pattern categories (see Sec. 3.3), we consider another two designs: (1) 2 Categories: the shuffled clip is discriminated by whether it has the same relative order of the top-2 LCCs as the raw clip. It is almost the same as CLSC but is optimized by the CE loss. (2) 4 Categories: the shuffled clip is discriminated by how it differs from the raw clip: non-difference, spatial-only difference, temporal-only difference, spatiotemporal difference. From Table 7, we can see that CSPC with 8 categories outperforms the other two designs. These results support our motivation for leveraging spatiotemporal transformations.
D.4 CCMR
We report the performance of three different designs of CCMR: (1) ld: only the measure $l_{ld}$ is used as supervision, which contains only volume information. (2) hd: the normalized hamming distances $l^{t}_{hd}, l^{h}_{hd}, l^{w}_{hd}$ are used, which contain only the relative order information. (3) ld + hd: both ld and hd are used as supervision. From Table 8, we can see that: First, both ld and hd help the model to learn continuity characteristics during pre-training, and hd outperforms ld by a small margin. Second, our CCMR learns the best representation by combining ld and hd.
D.5 RESULTS OF DIRECTLY SOLVING CSJ
We also demonstrate the results of solving the CSJ task directly in Table 9. We randomly shuffle video clips into 4 × 4 × 4 jigsaw puzzles. To recognize the correct permutation, the model solves a (4! × 4! × 4!)-way classification task in the pre-training stage. We compare the CSJ task with the joint LCCD+CCMR task under the same setting for a fair comparison. Linear evaluation is adopted to show the effectiveness of different tasks. We can observe from the table that solving LCCD+CCMR jointly is more effective than solving CSJ directly.
E TEMPORAL ACTION SEGMENTATION
To show the effectiveness of our CSJ for solving new downstream tasks, we apply the pretrained model obtained by our CSJ to temporal action segmentation, which is more challenging than the
conventional action recognition and retrieval tasks. Specifically, we choose to compare our CSJ model with the latest competitor MemDPC (Han et al., 2020) on the Breakfast dataset (Kuehne et al., 2014). For a fair comparison, our CSJ model and the MemDPC model adopt the same R2D3D-34 backbone. Due to the time constraint, we only use a small subset of 200 long videos from the original Breakfast dataset as the training set for fine-tuning, and select a few long videos for testing. For temporal action segmentation, we follow the overall framework of MS-TCN (Abu Farha & Gall, 2019), but change its backbone to R2D3D-34 pretrained by our CSJ or MemDPC.
We present the qualitative results on two test videos in Fig. 5. We can clearly observe that our CSJ outperforms MemDPC on both test videos. Particularly, the predictions of our CSJ are much closer to the ground truth, but MemDPC tends to produce unwanted segments for temporal action segmentation: it wrongly recognizes the segment (color in yellow) in the middle part of the first video as ‘Pour Milk’, and the segment (color in black) in the last part of the second video as ‘Stir Coffee’. In conclusion, as compared to the latest SSVRL method MemDPC, our CSJ can learn more robust features for temporal action segmentation due to its ‘true’ spatiotemporal jigsaw understanding. | 1. What is the focus of the paper regarding self-supervised video representation learning?
2. What are the strengths of the proposed approach, particularly in its surrogate tasks design?
3. What are the weaknesses of the paper, especially regarding its lack of insightful analysis and experimental analysis?
4. How does the reviewer assess the writing quality of the paper, particularly in certain sections?
5. Does the reviewer think the paper is suitable for publication in a top conference like ICLR? | Review | Review
The paper presents a novel pretext task for self-supervised video representation learning (SSVRL). The authors design several surrogate tasks for tackling intentionally constructed constrained spatiotemporal jigsaw puzzles. The learned representations during training to solve the surrogate tasks can be transferred to other video tasks. The proposed method shows superior performances than state-of-the-art SSVRL approaches on action recognition and video retrieval benchmarks.
Strengths:
+. Good performances on two benchmarks.
+. Carefully designed surrogate tasks.
Weaknesses:
-. Lacks insightful analysis of how the idea was inspired and why it works. It seems the intuition of the paper is to make the 3D jigsaw problem easier to solve and assume it will just work. But why would the easier problem help learn better representations? Each of the two steps making the problem easier needs to be analyzed more thoroughly: first, making the unconstrained jigsaw problem constrained; second, solving the surrogate tasks instead of solving the constrained jigsaw problem. Actually, the carefully designed surrogate tasks are quite different from the constrained jigsaw problem. They seem more ad hoc rather than a principled way to tackle the jigsaw problem. All these questions need more in-depth clarification.
-. Experimental analysis is not thorough. Given that the proposed method is not a principled method but a carefully designed one, extensive experiments on different variations of the proposed method could help better understand why the method works. A good performance on well-established benchmarks might be impressive, but analysis of why the performance can be achieved is more important.
-. Writing needs improvement. In the exposition of the proposed method section, some sentences are casual and misleading, for example the third paragraph of Sec. 3.2. Besides, Section 3.3 is a little difficult to follow; it could possibly be revised more concisely.
Summary
Overall, the paper presents yet another method to design the pretext task for SSVRL. But my major concern is it lacks enough insights for inspiring future research for this topic. It might not be good enough for ICLR. |
ICLR | Title
Self-Supervised Video Representation Learning with Constrained Spatiotemporal Jigsaw
Abstract
This paper proposes a novel pretext task for self-supervised video representation learning by exploiting spatiotemporal continuity in videos. It is motivated by the fact that videos are spatiotemporal by nature and a representation learned to detect spatiotemporal continuity/discontinuity is thus beneficial for downstream video content analysis tasks. A natural choice of such a pretext task is to construct spatiotemporal (3D) jigsaw puzzles and learn to solve them. However, this task turns out to be intractable. We thus propose Constrained Spatiotemporal Jigsaw (CSJ) whereby the 3D jigsaws are formed in a constrained manner to ensure that large continuous spatiotemporal cuboids exist in a shuffled clip to provide sufficient cues for the model to reason about the continuity. With the constrained jigsaw puzzles, instead of solving them directly, which could still be extremely hard, we carefully design four surrogate tasks that are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Extensive experiments show that our CSJ achieves state-of-the-art on two downstream tasks across various benchmarks.
1 INTRODUCTION
Self-supervised learning (SSL) has achieved tremendous successes recently for static images (He et al., 2020; Chen et al., 2020) and shown to be able to outperform supervised learning on a wide range of downstream image understanding tasks. However, such successes have not yet been reproduced for videos. Since different SSL models differ mostly on the pretext tasks employed on the unlabeled training data, designing pretext tasks more suitable for videos is the current focus for self-supervised video representation learning (Han et al., 2020; Wang et al., 2020).
Videos are spatiotemporal data and spatiotemporal analysis is the key to many video content understanding tasks. A good video representation learned from the self-supervised pretext task should therefore capture discriminative information jointly along both spatial and temporal dimensions. It is thus somewhat counter-intuitive to note that most existing SSL pretext tasks for videos do not explicitly require joint spatiotemporal video understanding. For example, some spatial pretext tasks have been borrowed from images without any modification (Jing et al., 2018), ignoring the temporal dimension. On the other hand, many recent video-specific pretext tasks typically involve speed or temporal order prediction (Lee et al., 2017; Wei et al., 2018; Benaim et al., 2020; Wang et al., 2020), i.e., operating predominately along the temporal axis.
A natural choice for a spatiotemporal pretext task is to solve 3D jigsaw puzzles, whose 2D counterpart has been successfully used for images (Noroozi & Favaro, 2016). Indeed, solving 3D puzzles requires the learned model to understand spatiotemporal continuity, a key step towards video content understanding. However, directly solving a 3D puzzle turns out to be intractable: a puzzle of 3×3×3 pieces (the same size as a Rubik’s cube) can have 27! possible permutations. Video volume even in a short clip is much larger than that. Nevertheless, the latest neural sorting models (Paumard et al., 2020; Du et al., 2020) can only handle permutations a few orders of magnitude less, so offer no solution. This is hardly surprising because such a task is daunting even for humans: Most people would struggle with a standard Rubik’s cube, let alone a much larger one.
In this paper, we propose a novel Constrained Spatiotemporal Jigsaw (CSJ) pretext task for selfsupervised video representation learning. The key idea is to form 3D jigsaw puzzles in a constrained manner so that it becomes solvable. This is achieved by factorizing the permutations (shuffling)
into the three spatiotemporal dimensions and then applying them sequentially. This ensures that for a given video clip, large continuous spatiotemporal cuboids exist after the constrained shuffling to provide sufficient cues for the model to reason about spatiotemporal continuity (see Fig. 1(b)(c)). Such large continuous cuboids are also vital for human understanding of video as revealed in neuroscience and visual studies (Stringer et al., 2006; Chen et al., 2019). Even with the constrained puzzles, solving them directly could still be extremely hard. Consequently, instead of directly solving the puzzles (i.e., recovering the permutation matrix so that each piece can be put back), four surrogate tasks are carefully designed. They are more solvable but meanwhile still ensure that the learned representation is sensitive to spatiotemporal continuity at both the local and global levels. Concretely, given a video clip shuffled with our constrained permutations, we make sure that the top-2 largest continuous cuboids (LCCs) dominate the clip volume. The level of continuity in the shuffled clip as a whole is thus determined mainly by the volumes of these LCCs, and whether they are in the right order (see Fig. 1(d)(e)) both spatially and temporally. Our surrogate tasks are thus designed to locate these LCCs and predict their order so that the model learned with these tasks can be sensitive to spatiotemporal continuity both locally and globally.
Our main contributions are three-fold: (1) We introduce a new pretext task for self-supervised video representation learning called Constrained Spatiotemporal Jigsaw (CSJ). To our best knowledge, this is the first work on self-supervised video representation learning that leverages spatiotemporal jigsaw understanding. (2) We propose a novel constrained shuffling method to construct easy 3D jigsaws containing large LCCs. Four surrogate tasks are then formulated in place of the original jigsaw solving tasks. They are much more solvable yet remain effective in learning spatiotemporal discriminative representations. (3) Extensive experiments show that our approach achieves state-of-the-art on two downstream tasks across various benchmarks.
2 RELATED WORK
Self-supervised Learning with Pretext Tasks Self-supervised learning (SSL) typically employs a pretext task to generate pseudo-labels for unlabeled data via some forms of data transformation. According to the transformations used by the pretext task, existing SSL methods for video representation learning can be divided into three categories: (1) Spatial-Only Transformations: Derived from the original image domain (Gidaris et al., 2018), Jing et al. (2018) leveraged the spatial-only transformations for self-supervised video representation learning. (2) Temporal-Only Transformations: Misra et al. (2016); Fernando et al. (2017); Lee et al. (2017); Wei et al. (2018) obtained shuffled video frames with the temporal-only transformations and then distinguished whether the shuffled frames are in chronological order. Xu et al. (2019) chose to shuffle video clips instead of frames. Benaim et al. (2020); Yao et al. (2020); Jenni et al. (2020) exploited the speed transformation via determining whether one video clip is accelerated. (3) Spatiotemporal Transformations: There are only a few recent approaches (Ahsan et al., 2019; Kim et al., 2019) that leveraged both spatial and temporal transformations by permuting 3D spatiotemporal cuboids. However, due to the aforementioned
intractability of solving the spatiotemporal jigsaw puzzles, they only leveraged either temporal or spatial permutations as training signals, i.e., they exploited the two domains independently. Therefore, no true spatiotemporal permutations have been considered in Ahsan et al. (2019); Kim et al. (2019). In contrast, given that both spatial appearances and temporal relations are important cues for video representation learning, the focus of this work is on investigating how to exploit the spatial and temporal continuity jointly for self-supervised video representation learning. To that end, our Constrained Spatiotemporal Jigsaw (CSJ) presents the first spatiotemporal continuity based pretext task for video SSL, thanks to a novel constrained 3D jigsaw and four surrogate tasks to reason about the continuity in the 3D jigsaw puzzles without solving them directly.
Self-supervised Learning with Contrastive Learning Contrastive learning is another self-supervised learning approach that has become increasingly popular in the image domain (Misra & Maaten, 2020; He et al., 2020; Chen et al., 2020). Recently, it has been incorporated into video SSL as well. Contrastive learning and transformation-based pretext tasks are orthogonal to each other and often combined in that different transformed versions of a data sample form the positive set used in contrastive learning. In El-Nouby et al. (2019); Knights et al. (2020); Qian et al. (2020); Wang et al. (2020); Yang et al. (2020), the positive/negative samples were generated based on temporal transformations only. In contrast, some recent works (Han et al., 2019; 2020; Zhuang et al., 2020) leveraged features from the future frame embeddings or with the memory bank (Wu et al., 2018). They modeled spatiotemporal representations using only contrastive learning without transformations. Contrastive learning is also exploited in one of our surrogate pretext tasks. Different from existing works, we explore the spatiotemporal transformations in the form of CSJ and employ contrastive learning to distinguish different levels of spatiotemporal continuity in shuffled jigsaws. This enables us to learn more discriminative spatiotemporal representations.
3 CONSTRAINED SPATIOTEMPORAL JIGSAW
3.1 PROBLEM DEFINITION
The main goal of self-supervised video representation learning is to learn a video feature representation function f(·) without using any human annotations. A general approach to achieving this goal is to generate a supervisory signal y from an unlabeled video clip x and construct a pretext task P to predict y from f(x). The process of solving the pretext task P encourages f(·) to learn discriminative spatiotemporal representations.
The pretext task P is constructed typically by applying to a video clip a transformation function t(·;θ) parameterized by θ and then automatically deriving y from θ, e.g., y can be the type of the transformation. Based on this premise, P is defined as the prediction of y using the feature map of the transformed video clip f(x̃), i.e., P : f(x̃) → y, where x̃ = t(x;θ). For example, in Lee et al. (2017), t(·;θ) denotes a temporal transformation that permutes the four frames of video clip x in a temporal order θ, x̃ = t(x;θ) is the shuffled clip, and the pseudo-label y is defined as the permutation order θ (e.g., 1324, 4312, etc.). The pretext task P is then a classification problem of 24 categories because there are 4! = 24 possible orders.
3.2 CONSTRAINED PERMUTATIONS
Solving spatiotemporal video jigsaw puzzles seems to be an ideal pretext task for learning discriminative representation as it requires an understanding of spatiotemporal continuity. After shuffling the pixels in a video clip using a 3D permutation matrix, the pretext task is to recover the permutation matrix. However, as explained earlier, this task is intractable given even moderate video clip sizes. Our solution is to introduce constraints on the permutations. As a result, a new pretext task PCSJ based on Constrained Spatiotemporal Jigsaw (see Fig. 2(a)) is formulated, which is much easier to solve than a random/unconstrained jigsaw.
Specifically, our goal is to introduce constraints to the permutations so that the resultant shuffled video clip is guaranteed to have large continuous cuboids (see Fig. 2(a)). Similar to humans (Stringer et al., 2006), having large continuous cuboids is key for a model to understand a 3D jigsaw and therefore to have any chance to solve it. Formally, the size of a shuffled video clip $\tilde{x}$ is denoted as $\{T, H, W\}$, measuring its extent along the temporal, height, and width dimensions, respectively. A cuboid is defined as a crop of $\tilde{x}$: $c = \tilde{x}_{t_1:t_2,\, h_1:h_2,\, w_1:w_2}$, where $t_1, t_2 \in \{1, 2, \ldots, T\}$, $h_1, h_2 \in \{1, 2, \ldots, H\}$, $w_1, w_2 \in \{1, 2, \ldots, W\}$. If all the jigsaw pieces (smallest video clip unit, e.g., a pixel or a 3D pixel block) in $c$ keep the same relative order as they were in $x$ (before being shuffled), we call the cuboid $c$ a continuous cuboid $c^{cont}$. The cuboid's volume equals $(t_2 - t_1) \times (h_2 - h_1) \times (w_2 - w_1)$, and the largest continuous cuboid (LCC) $c^{cont}_{max}$ is the $c^{cont}$ with the largest volume. We introduce two permutation strategies to ensure that the volumes of LCCs are large in relation to the whole video clip volume after our shuffling transformation $t(\cdot;\theta_{CSJ})$. First, instead of shuffling $x$ in three spatiotemporal dimensions simultaneously, $t(\cdot;\theta_{CSJ})$ factorizes the permutations into the three spatiotemporal dimensions and then applies them sequentially, each only once, to generate shuffled clips, e.g., in the order $T, W, H$. Note that the volume of the generated $\tilde{x}$ stays the same for different permutation orders (e.g., $TWH$ and $HTW$). Second, we shuffle a group of jigsaw pieces together instead of each piece individually along each dimension. Taking spatial shuffling as an example, if there are 8 pieces per frame (along each of the two spatial dimensions), $\theta_{CSJ}$ could be represented as the permutation from {12345678} to {84567123}. The longest and the second-longest index ranges are [2, 5] for coordinates {4567} and [6, 8] for coordinates {123}. With these two permutation strategies, not only do we have large LCCs, but they are also guaranteed to have clearly separable boundaries (see Fig. 2(b)) with surrounding pieces due to the factorized and grouped permutation design. This means that they are easily detectable.
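To make the two permutation strategies concrete, the following is a minimal Python/NumPy sketch of the factorized, grouped shuffling described above; the group count, group boundaries, and clip shape are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np

def grouped_permutation(length, rng):
    """Split [0, length) into contiguous groups and permute the groups,
    keeping the order of indices inside each group intact."""
    n_groups = rng.integers(2, 5)                                   # illustrative group count
    cuts = np.sort(rng.choice(np.arange(1, length), size=n_groups - 1, replace=False))
    groups = np.split(np.arange(length), cuts)
    order = rng.permutation(n_groups)
    return np.concatenate([groups[i] for i in order])

def constrained_shuffle(clip, rng):
    """Factorized shuffling: apply one grouped permutation per dimension,
    sequentially along T, H and W."""
    t_idx = grouped_permutation(clip.shape[0], rng)
    h_idx = grouped_permutation(clip.shape[1], rng)
    w_idx = grouped_permutation(clip.shape[2], rng)
    return clip[t_idx][:, h_idx][:, :, w_idx], (t_idx, h_idx, w_idx)

rng = np.random.default_rng(0)
clip = rng.standard_normal((16, 28, 28, 3))                         # a 16x28x28 grid of pieces
shuffled, perms = constrained_shuffle(clip, rng)
```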
3.3 SURROGATE TASKS
Having permutation constraints preserves more spatiotemporal continuity in the shuffled clip and reduces the number of possible permutations. But exploiting these constraints to make a neural sorting model tractable is still far from trivial. Instead of solving the jigsaw directly, our PCSJ is thus formulated as four surrogate tasks: Largest Continuous Cuboid Detection (LCCD), Clip Shuffling Pattern Classification (CSPC), Contrastive Learning over Shuffled Clips (CLSC), and Clip Continuity Measure Regression (CCMR). As illustrated in Fig. 2(b), given an unlabeled clip x, we first construct a mini-batch of 8 clips {x̃1, x̃2, ..., x̃8} by shuffling x with different but related constrained permutations (to be detailed later). These shuffled clips and the raw clip x are then fed into a 3D CNN model f(·) for spatiotemporal representation learning with a non-local operation (Wang et al., 2018):
fNL(x̃i) = NL(f(x̃i), f(x)), (1)
where NL(·, ·) denotes the non-local operator, and f(x̃i) and f(x) denote the feature map of x̃i and x from the last convolutional layer of f(·), respectively. The resultant feature map fNL(x̃i) is further passed through a spatial pooling layer followed by a separate fully-connected layer for
each surrogate task. Note that the raw video feature map f(x) is used as guidance through the nonlocal based attention mechanism to help fulfill the tasks. This is similar to humans needing to see the completed jigsaw picture to help solve the puzzle.
Before we detail the four tasks, we first explain how the eight permutations from the same raw clip are generated. First, the factorized and grouped permutations are applied to x to create one shuffled clip. By examining the largest and the second-largest continuous puzzle piece numbers of each dimension ({T,H,W}), we can easily identify the top-2 largest continuous cuboids (LCCs). Next, by varying the relative order of the top-2 LCCs either in the correct (original) order or the reverse order in each dimension, 2×2×2=8 permutations are obtained. By controlling the group size in permutation, we can make sure that the top-2 LCCs account for a large proportion, say 80%, of the total clip volume. Our four tasks are thus centered around these two LCCs as they largely determine the overall spatiotemporal continuity of the shuffled clip.
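As a small illustration of how the top-2 LCC extents can be read off per dimension, the sketch below (assuming the index-array representation of a grouped permutation from the previous sketch) finds the longest and second-longest runs of consecutive original indices.

```python
import numpy as np

def top2_continuous_runs(perm):
    """Return the two longest runs of consecutive original indices in a
    permuted index array, as half-open [start, end) positions."""
    runs, start = [], 0
    for i in range(1, len(perm) + 1):
        if i == len(perm) or perm[i] != perm[i - 1] + 1:
            runs.append((start, i))
            start = i
    runs.sort(key=lambda r: r[1] - r[0], reverse=True)
    return runs[:2]

perm = np.array([7, 3, 4, 5, 6, 0, 1, 2])   # 0-based version of {84567123}
print(top2_continuous_runs(perm))           # [(1, 5), (5, 8)]
```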
The first task, LCCD, is to locate the top-2 LCCs $\{c^{cont}_{max}(j) : j = 1, 2\}$ and is formulated as a regression problem. Given a ground-truth LCC $c^{cont}_{max}(j)$, a Gaussian kernel is applied to its center to model the probability of each pixel in $\tilde{x}$ belonging to the LCC. This leads to a soft mask $M^j_{LCCD}$ with the same size as $\tilde{x}$: $M^j_{LCCD}$ is 0 everywhere outside the region of $c^{cont}_{max}(j)$, and equals $\exp\big(-\frac{\|a - a_c\|^2}{2\sigma_g^2}\big)$ inside the region, where $a$ and $a_c$ denote any pixel and the center point, respectively, and $\sigma_g$ is a hyper-parameter set to 1 empirically. In the training stage, FPN (Lin et al., 2017) is used for multi-level feature fusion. LCCD is optimized using the MSE loss at each point:
$$L_{LCCD} = \sum_{j\in\{1,2\}} \sum_{a\in\tilde{x}} MSE\big(M^j_{LCCD}(a),\, M^j_{LCCD}(a)'\big), \quad (2)$$
where $MSE(\cdot,\cdot)$ denotes the MSE loss function, and $M^j_{LCCD}(a)'$ is the prediction for pixel $a$.
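Below is a minimal PyTorch sketch of the LCCD target and loss, assuming each LCC is described by an index box (t1, t2, h1, h2, w1, w2) and the model outputs one heatmap per LCC at clip resolution; tensor shapes and helper names are illustrative.

```python
import torch

def lcc_soft_mask(shape, box, sigma=1.0):
    """Gaussian soft mask centred on an LCC, zero outside its region."""
    T, H, W = shape
    t1, t2, h1, h2, w1, w2 = box
    t, h, w = torch.meshgrid(torch.arange(T, dtype=torch.float32),
                             torch.arange(H, dtype=torch.float32),
                             torch.arange(W, dtype=torch.float32), indexing="ij")
    ct, ch, cw = (t1 + t2) / 2.0, (h1 + h2) / 2.0, (w1 + w2) / 2.0
    dist2 = (t - ct) ** 2 + (h - ch) ** 2 + (w - cw) ** 2
    mask = torch.exp(-dist2 / (2.0 * sigma ** 2))                    # sigma_g = 1 as in the text
    inside = torch.zeros(shape)
    inside[t1:t2, h1:h2, w1:w2] = 1.0                                # zero outside the LCC region
    return mask * inside

def lccd_loss(pred_masks, gt_masks):
    # pred_masks, gt_masks: (2, T, H, W) tensors for the top-2 LCCs, as in Eq. (2)
    return torch.nn.functional.mse_loss(pred_masks, gt_masks, reduction="sum")
```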
CSPC is designed to recognize the shuffling pattern of a shuffled clip. As mentioned earlier, the eight shuffled clips in each mini-batch are created from the same raw clip and differ only in the relative order of the top-2 LCCs along each of the three dimensions. There are thus eight permutations depending on the order (correct or reverse) in each dimension. Based on this understanding, CSPC is formulated as a multi-class classification task that assigns each shuffled clip to one of these eight classes, and it is optimized using the Cross-Entropy (CE) loss:
$$L_{CSPC} = \sum_{i\in\{0,1,\ldots,7\}} CE\big(l_{CSPC}[i],\, l'_{CSPC}[i]\big), \quad (3)$$
where $CE(\cdot,\cdot)$ denotes the CE loss function and $l'_{CSPC}[i]$ is the predicted class label of the $i$-th sample (shuffled clip) in each mini-batch.
The two tasks above emphasize local spatiotemporal continuity understanding. In contrast, CLSC leverages a contrastive loss to encourage global continuity understanding. In particular, since the top-2 LCCs dominate the volume of a clip, it is safe to assume that if their relative order is correct in all three dimensions, the shuffled clip largely preserves continuity compared to the original clip, while all other 7 permutations feature large discontinuity in at least one dimension. We thus form a contrastive learning task with the original video $x$ and the most continuous shuffled video $\tilde{x}_i$ as a positive pair, and $x$ and the rest $\tilde{x}_j$ ($j \neq i$) as negative pairs. CLSC is optimized using the Noise Contrastive Estimation (NCE) (Tian et al., 2020) loss:
$$L_{CLSC} = -\log \frac{\exp\big(sim(f(x), f(\tilde{x}_i))/\tau\big)}{\exp\big(sim(f(x), f(\tilde{x}_i))/\tau\big) + \sum_j \exp\big(sim(f(x), f(\tilde{x}_j))/\tau\big)}, \quad (4)$$
where $sim(\cdot,\cdot)$ is defined by the dot product $f(x)^{\top} f(\tilde{x}_i)$, and $\tau$ is the temperature hyper-parameter. Note that the non-local operator is not used in CLSC.
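For clarity, a small sketch of the CLSC loss in Eq. (4) is given below, assuming globally pooled clip embeddings: the raw clip is the anchor, the most continuous shuffled clip is the positive, and the remaining shuffled clips are negatives.

```python
import torch
import torch.nn.functional as F

def clsc_loss(anchor, positive, negatives, tau=0.07):
    """Eq. (4): anchor and positive are (D,) embeddings, negatives is (N, D)."""
    pos = torch.exp(anchor @ positive / tau)
    neg = torch.exp(negatives @ anchor / tau).sum()
    return -torch.log(pos / (pos + neg))

# toy usage; embeddings are L2-normalised here only to keep the exponentials well behaved
anchor = F.normalize(torch.randn(128), dim=0)
positive = F.normalize(torch.randn(128), dim=0)
negatives = F.normalize(torch.randn(7, 128), dim=1)
loss = clsc_loss(anchor, positive, negatives)
```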
CCMR is similar to CLSC in that it also enforces global continuity understanding, but differs in that it is a regression task aimed at predicting a global continuity measure. We consider two such measures. Since the total size of the top-2 LCCs $\{c^{cont}_{max}(j) : j = 1, 2\}$ is a good indicator of how continuous a shuffled video clip is, the first measure $l_{ld}$ directly measures the relative total size of the top-2 LCCs:
$$l_{ld} = \frac{v(c^{cont}_{max}(1)) + v(c^{cont}_{max}(2))}{v(\tilde{x})},$$
where $v(\cdot)$ represents the volume of a clip/cuboid. The second measure $l^{t/h/w}_{hd}$ examines the shuffling degree of $\tilde{x}$ in each dimension, computed as the normalized hamming distance $\frac{hamming(\tilde{x})}{N_c(N_c-1)/2}$, where $hamming(\cdot)$ denotes the hamming distance in each dimension between the original piece sequence and the permuted one, and $N_c$ represents the number of pieces in that dimension, so that $N_c(N_c-1)/2$ indicates the maximum possible hamming distance in the dimension. CCMR is optimized using the Mean Squared Error (MSE) loss:
$$L_{CCMR} = MSE\big([l_{ld}, l^t_{hd}, l^h_{hd}, l^w_{hd}],\, [l'_{ld}, l^{t'}_{hd}, l^{h'}_{hd}, l^{w'}_{hd}]\big), \quad (5)$$
where $l'_{ld}, l^{t'}_{hd}, l^{h'}_{hd}, l^{w'}_{hd}$ are the predictions of the model.
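The two continuity measures can be computed with a few lines of NumPy, as sketched below; note that the normaliser $N_c(N_c-1)/2$ in the text matches a pairwise (inversion-count) reading of the distance rather than a plain per-position Hamming count, so that reading is used here and should be treated as an assumption.

```python
import numpy as np

def volume_measure(lcc_volumes, clip_volume):
    """l_ld: fraction of the clip volume covered by the top-2 LCCs."""
    return sum(lcc_volumes) / clip_volume

def shuffle_degree(perm):
    """Per-dimension shuffling degree normalised by Nc(Nc-1)/2, read here as
    the number of index pairs that are out of order (an assumption)."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
    return inversions / (n * (n - 1) / 2.0)

print(volume_measure([16 * 12 * 28, 16 * 9 * 28], 16 * 28 * 28))    # 0.75
print(shuffle_degree(np.array([7, 3, 4, 5, 6, 0, 1, 2])))
```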
3.4 OVERALL LEARNING OBJECTIVE
Our entire CSJ framework is optimized end-to-end with the learning objective defined as:
$$L = \sigma_1 L_{LCCD} + \sigma_2 L_{CSPC} + \sigma_3 L_{CLSC} + \sigma_4 L_{CCMR}, \quad (6)$$
where σ1, σ2, σ3, σ4 denote the weights for the four losses. We deploy the adaptive weighting mechanism (Kendall et al., 2018) to weight these tasks, and thus there are no free hyper-parameters to tune. We also adopt curriculum learning (Bengio et al., 2009; Korbar et al., 2018) to train our network by shuffling clips from easy to hard. More details are presented in Appendix A.1 and A.2.
4 EXPERIMENTS
4.1 DATASETS AND SETTINGS
We select three benchmark datasets for performance evaluation: UCF101 (Soomro et al., 2012), HMDB51 (Kuehne et al., 2011), and Kinetics-400 (K400) (Kay et al., 2017), containing 13K/7K/306K video clips from 101/51/400 action classes, respectively. In the self-supervised pretraining stage, we utilize the first training split of UCF101/HMDB51 and the training split of K400 without using their labels. As in Han et al. (2020), we adopt R2D3D as the backbone network, which is modified from R3D (Hara et al., 2018) with fewer parameters. By fine-tuning the pre-trained model, we can evaluate the SSL performance on a downstream task (i.e., action classification). Following Han et al. (2019); He et al. (2020), two evaluation protocols are used: comparisons against state-of-the-arts follow the more popular fully fine-tuning evaluation protocol, but ablation analysis takes both the linear evaluation and fully fine-tuning protocols. For the experiments on supervised learning, we report top-1 accuracy on the first test split of UCF101/HMDB51 as the standard (Han et al., 2020). More details of the datasets are provided in Appendix B.
4.2 IMPLEMENTATION DETAILS
Raw videos in these datasets are decoded at a frame rate of 24-30 fps. From each raw video, we start from a randomly selected frame index and sample a consecutive 16-frame video clip with a temporal stride of 4. For data augmentation, we first resize the video frames to 128×171 pixels, from which we extract random crops of size 112×112 pixels. We also apply random horizontal flipping and random color jittering to the video frames during training. We exploit only the raw RGB video frames as input, and do not leverage optical flow or other auxiliary signals for self-supervised pretraining. We adopt the Adam optimizer with a weight decay of 10−3 and a batch size of 8 per GPU (with a total of 32 GPUs). We deploy cosine annealing learning rate with an initial value of 10−4 and 100 epochs. The jigsaw puzzle piece sizes of {T,H,W} dimensions are set as 1, 4, 4, respectively. A 16×112×112 video clip thus contains 16×28×28 pieces. We set the temperature hyper-parameter τ to 0.07. A dropout of 0.5 is applied to the final layer of each task. More implementation details of the fine-tuning and test evaluation stages can be found in Appendix B.
4.3 MAIN RESULTS
Comparison in Action Recognition A standard way to evaluate a self-supervised video representation learning model is to use it to initialize an action recognition model on a small dataset. Specifically, after self-supervised pre-training on UCF101/HMDB51/K400, we exploit the learned backbone for fully fine-tuning on UCF101 and HMDB51, following Han et al. (2020); Wang et al. (2020).
We consider one baseline: fully-supervised learning with pre-training on K400. Note that this baseline is commonly regarded as the upper bound of self-supervised representation learning (Alwassel et al., 2019). From Table 1, we have the following observations: (1) Our CSJ achieves state-of-the-art performance on both UCF101 and HMDB51. Particularly, with the backbone R2D3D-18 that is weaker than R(2+1)D-18, our CSJ performs comparably w.r.t. Pace on UCF101 but achieves a 10% improvement over Pace on HMDB51. (2) By exploiting spatiotemporal transformations for self-supervised representation learning, our CSJ beats both methods with only temporal transformations (†) and methods with both spatial and temporal transformations (‡), as well as those learning spatiotemporal representations (∗) via only contrastive learning (without spatiotemporal transformations). (3) Our CSJ also outperforms CBT (Sun et al., 2019), which used datasets more than ten times larger (K600 (Carreira et al., 2018) + Howto100M (Miech et al., 2019)) and multiple modalities (RGB+Audio). (4) Our CSJ is the closest to the fully-supervised one (upper bound), validating its effectiveness in self-supervised video representation learning.
Comparison in Video Retrieval We evaluate our CSJ method in the video retrieval task. Following Xu et al. (2019), we extract each video clips’ embeddings with the pre-training model and use each clip in the test set to query the k nearest clips in the training set. The comparative results in Table 2 show that our method outperforms all other self-supervised methods and achieves new state-of-the-art in video retrieval on UCF101. Particularly, our method beats the latest competitor
PRP (Yao et al., 2020) on four out of five metrics. This indicates that our proposed CSJ is also effective for video representation learning in video retrieval.
4.4 FURTHER EVALUATIONS
Ablation Study We conduct ablative experiments to validate the effectiveness of the four CSJ surrogate tasks and two additional learning strategies. From Table 3, we can observe that: (1) Self-supervised learning with each of the four tasks shows better generalization than fine-tuning the network from scratch (random initialization). (2) By training on all four tasks jointly, we can achieve large performance gains (see ‘+LCCD’ vs. ‘CCMR’). (3) Each additional learning strategy (i.e., adaptive weighting or curriculum learning) leads to a small boost to the performance by 0.3–0.5%. (4) Our full model achieves a remarkable classification accuracy of 70.4%, demonstrating the effectiveness of our proposed CSJ with only the RGB video stream (without additional optical flow, audio, or text modalities).
Visualization of Attention Maps Fig. 3 visualizes the attention map of the last feature maps from two models fine-tuned on UCF101 with or without adopting our self-supervised pre-training. Since each frame’s attention map involves four adjacent frames, it actually contains spatiotemporal semantic features. We can see that our self-supervised pre-training with CSJ indeed helps to better capture meaningful spatiotemporal information and thus recognize the action categories more correctly.
Visualization of LCCD Predictions We also demonstrate the visualization of the LCCD predictions from the pre-trained models in Fig. 4. We can observe that solving the LCCD task indeed enables the model to learn the locations of LCCs and understand spatiotemporal continuity, which is a key step towards video content understanding.
5 CONCLUSION
We have introduced a novel self-supervised video representation learning method named Constrained Spatiotemporal Jigsaw (CSJ). By introducing constrained permutations, our proposed CSJ is the first to leverage spatiotemporal jigsaw in self-supervised video representation learning. We also propose four surrogate tasks based on our constrained spatiotemporal jigsaws. They are designed to encourage a video representation model to understand the spatiotemporal continuity, a key building block towards video content analysis. Extensive experiments were carried out to validate the effectiveness of each of the four CSJ tasks and also show that our approach achieves the state-of-the-art on two downstream tasks across various benchmarks.
A ADDITIONAL LEARNING STRATEGIES
A.1 ADAPTIVE WEIGHT
Formally, our CSJ has two continuous outputs y1, y4 from LCCD and CCMR, and two discrete outputs y2, y3 from CSPC and CLSC, modeled with Gaussian likelihoods and softmax likelihoods, respectively. The joint loss for these four tasks L(W, σ1, σ2, σ3, σ4) is:
$$\begin{aligned}
L(W, \sigma_1, \sigma_2, \sigma_3, \sigma_4)
&= -\log\big[\mathcal{N}(y_1; f^W(x), \sigma_1^2)\cdot \mathcal{N}(y_4; f^W(x), \sigma_4^2)\cdot \mathrm{softmax}(y_2{=}c; f^W(x), \sigma_2)\cdot \mathrm{softmax}(y_3{=}c; f^W(x), \sigma_3)\big] \\
&= \frac{1}{2\sigma_1^2}\|y_1 - f^W(x)\|^2 + \log\sigma_1 + \frac{1}{2\sigma_4^2}\|y_4 - f^W(x)\|^2 + \log\sigma_4 - \log p(y_2|f^W(x), \sigma_2) - \log p(y_3|f^W(x), \sigma_3) \\
&\approx \frac{1}{2\sigma_1^2}L_1(W) + \frac{1}{\sigma_2^2}L_2(W) + \frac{1}{\sigma_3^2}L_3(W) + \frac{1}{2\sigma_4^2}L_4(W) + \log\sigma_1 + \log\sigma_2 + \log\sigma_3 + \log\sigma_4,
\end{aligned} \quad (7)$$
where σ is the weight factor that can be automatically learned from the network, and the log likelihood for the output y is defined as:
$$\log p\big(y = c\,|\,f^W(x), \sigma\big) = \frac{1}{\sigma^2} f^W_c(x) - \log \sum_{c'} \exp\Big(\frac{1}{\sigma^2} f^W_{c'}(x)\Big). \quad (8)$$
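Below is a minimal PyTorch sketch of this adaptive weighting, assuming the four task losses are computed elsewhere; parameterizing the learnable weights as log-variances is a common practical choice and is an assumption here, not necessarily the exact parameterization used by the authors.

```python
import torch
import torch.nn as nn

class AdaptiveWeighting(nn.Module):
    """Learnable task weights following Eq. (7); the log-variance
    parameterisation is an assumption for numerical convenience."""
    def __init__(self):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(4))   # one log(sigma^2) per task

    def forward(self, losses):
        # losses: [L_LCCD, L_CSPC, L_CLSC, L_CCMR]; LCCD and CCMR are regression tasks
        regression = [True, False, False, True]
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])
            scale = 0.5 if regression[i] else 1.0      # 1/(2 sigma^2) vs. 1/sigma^2 as in Eq. (7)
            total = total + scale * precision * loss + 0.5 * self.log_vars[i]
        return total
```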
A.2 CURRICULUM LEARNING
We adopt curriculum learning (Korbar et al., 2018) to train our network by shuffling clips from easy to hard. Let d be the shuffle degree of a shuffled clip x̃, representing the number of continuous cuboids in each dimension. We gradually increase d from 3 to 5 during the training phase to produce more permuted clips. Note that when the video content is ambiguous in one dimension, e.g., a static video clip inflated from an image, there is no temporal variance to learn the transformation. Kim et al. (2019); Noroozi & Favaro (2016) also mentioned this problem as similar-looking ambiguity. To solve this problem, we calculate the variance on each dimension and set a threshold. If the variance is lower than the threshold, we decrease d from 3 to 1 so that the pieces are not shuffled in the corresponding dimension.
B DATASETS AND IMPLEMENTATION
B.1 DETAILS OF DATASETS
UCF101 (Soomro et al., 2012) is a widely-used dataset in the action recognition task, which contains 13,320 videos with 101 action classes. The dataset is divided into three training/testing splits. In this paper, following prior works (Wang et al., 2020; Han et al., 2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.
HMDB51 (Kuehne et al., 2011) is a relatively small action recognition dataset, consisting of 6,766 videos with 51 categories. It is also divided into three training/testing splits. Following Wang et al. (2020); Han et al. (2020), we use the first training split as the pre-training dataset and the first testing split for evaluation.
Kinetics-400 (K400) (Kay et al., 2017) is a very large action recognition dataset consisting of 400 human action classes and around 306k videos. In this work, we use the training split of K400 as the pre-training dataset.
B.2 IMPLEMENTATION DETAILS
In the fine-tuning stage, weights of convolutional layers are initialized with self-supervised pretraining, but weights of fully-connected layers are randomly initialized. The whole network is then trained with the cross-entropy loss. The pre-processing and training strategies are the same as in the
self-supervised pre-training stage, except that the total epochs are 300 and the initial learning rate is 10−3. We use a batch size of 64 per GPU and a total of 8 GPUs for fine-tuning.
We follow the standard evaluation protocol (Han et al., 2020) during inference and use ten-crop to take the same sequence length as training from the video. The predicted label of each video is calculated by averaging the softmax probabilities of all clips in the video.
C NETWORK ARCHITECTURE
We deploy the same network backbone R2D3D as Han et al. (2019; 2020), which is a 3D-ResNet (R3D) similar to Hara et al. (2018). The only difference between R2D3D and R3D lies in that: R2D3D keeps the first two residual blocks as 2D convolutional blocks while R3D uses 3D blocks. Therefore, the modified R2D3D has fewer parameters (only the last two blocks are 3D convolutions). We present the CNN structure of R2D3D in Table 4.
D ADDITIONAL ABLATION STUDIES
D.1 LCCD
Instead of predicting center points using the detection method, we also design a segmentation method – largest continuous cuboid segmentation (LCCS) – that predicts the location of the top-2 LCCs $\{c^{cont}_{max}(j) : j = 1, 2\}$. The difference between LCCD and LCCS is that LCCS is formulated as a segmentation task to discriminate whether a pixel is in the region of $c^{cont}_{max}(j)$. Concretely, LCCS predicts a binary mask $M^j_{LCCS}$ where only points in the region of $c^{cont}_{max}(j)$ are set to 1, and to 0 otherwise. As a result, LCCS is optimized using the Cross Entropy (CE) loss at each point:
$$L_{LCCS} = \sum_{j\in\{1,2\}} \sum_{a\in\tilde{x}} CE\big(M^j_{LCCS}(a),\, M^j_{LCCS}(a)'\big), \quad (9)$$
where $CE(\cdot,\cdot)$ denotes the CE loss function, and $M^j_{LCCS}(a)'$ is the predicted class of pixel $a$.
We report the performance of four different designs of LCCD in Table 5: (1) LCCS: LCCS is used instead of LCCD. (2) LCCD+MLCCS: The Gaussian mask MLCCD is substituted by the binary mask MLCCS, but the LCCD task is optimized using the MSE loss. (3) LCCD + L1: The LCCD task is
optimized by the L1 loss. (4) LCCD + MSE: The LCCD task is optimized by the MSE loss. From Table 5, it can be seen that the segmentation task also helps self-supervised representation learning but doesn’t perform as well as LCCD. Also, under the three different settings of LCCD, the MSE loss with the Gaussian map performs the best.
D.2 CLSC
Table 6 above shows the accuracies obtained with different temperatures τ used in contrastive learning. We can observe that: (1) When τ is in the range 1 ∼ 0.07, the accuracy increases with smaller τ. (2) When τ is large (e.g., 1), the accuracy drops considerably. In this work, τ is set to 0.07.
D.3 CSPC
In addition to our CSPC with 8 pattern categories (see Sec. 3.3), we consider another two designs: (1) 2 Categories: the shuffled clip is discriminated by whether it has the same relative order of the top-2 LCCs as the raw clip. It is almost the same as CLSC but is optimized by the CE loss. (2) 4 Categories: the shuffled clip is discriminated by how it differs from the raw clip: non-difference, spatial-only difference, temporal-only difference, spatiotemporal difference. From Table 7, we can see that CSPC with 8 categories outperforms the other two designs. These results support our motivation for leveraging spatiotemporal transformations.
D.4 CCMR
We report the performance of three different designs of CCMR: (1) ld: the learning degree lld is used as supervision, which only contains volume information. (2) hd: the hamming distances lthd, l h hd, l w hd are used, which contain only the relative order information. (3) ld + hd: both ld and hd are used as supervision. From Table 8, we can see that: First, both ld and hd help the model to learn continuous characteristics during pre-training, and hd outperforms ld by a small margin. Second, our CCMR learns the best representation by combining ld and hd.
D.5 RESULTS OF DIRECTLY SOLVING CSJ
We also demonstrate the results of solving the CSJ task directly in Table 9. We randomly shuffle video clips into 4 × 4 × 4 jigsaw puzzles. To recognize the correct permutation, the model solves a (4! × 4! × 4!)-way classification task in the pre-training stage. We compare the CSJ task with the joint LCCD+CCMR task under the same setting for a fair comparison. Linear evaluation is adopted to show the effectiveness of different tasks. We can observe from the table that solving LCCD+CCMR jointly is more effective than solving CSJ directly.
E TEMPORAL ACTION SEGMENTATION
To show the effectiveness of our CSJ for solving new downstream tasks, we apply the pretrained model obtained by our CSJ to temporal action segmentation, which is more challenging than the
conventional action recognition and retrieval tasks. Specifically, we choose to compare our CSJ model with the latest competitor MemDPC (Han et al., 2020) on the Breakfast dataset (Kuehne et al., 2014). For a fair comparison, our CSJ model and the MemDPC model adopt the same R2D3D-34 backbone. Due to the time constraint, from the original Breakfast dataset we only use a small subset of 200 long videos as the training set for fine-tuning, and select a few long videos for the test. For temporal action segmentation, we follow the overall framework of MS-TCN (Abu Farha & Gall, 2019), but change its backbone to R2D3D-34 pretrained by our CSJ or MemDPC.
We present the qualitative results on two test videos in Fig. 5. We can clearly observe that our CSJ outperforms MemDPC on both test videos. Particularly, the predictions of our CSJ are much closer to the ground truth, but MemDPC tends to produce unwanted segments for temporal action segmentation: it wrongly recognizes the segment (color in yellow) in the middle part of the first video as ‘Pour Milk’, and the segment (color in black) in the last part of the second video as ‘Stir Coffee’. In conclusion, as compared to the latest SSVRL method MemDPC, our CSJ can learn more robust features for temporal action segmentation due to its ‘true’ spatiotemporal jigsaw understanding. | 1. What is the main contribution of the paper regarding self-supervised video representation learning?
2. What are the strengths of the proposed approach, particularly in solving the intractable 3D jigsaw puzzle problem?
3. Do you have any questions or concerns about how the puzzle pieces are grouped?
4. How do you think the artificial patterns in the shuffled clips may affect the method's efficacy?
5. What are some missing ablation experiments and analysis that could further support the work's claims?
6. Could you provide more details on why the number of largest continuous cuboids was chosen to be two?
7. Can you explain the purpose of using non-local operations between permuted and original features, and what might happen if they were removed?
8. How does the performance of the surrogate tasks relate to the downstream task performance?
9. Are there any minor suggestions or comments you have regarding the paper's presentation or content? | Review | Review
Summary
In this paper, the authors extend the self-supervised 2D jigsaw puzzle solving idea to 3D for self-supervised video representation learning. To make the 3D jigsaw puzzle problem tractable, they propose a two-fold idea. First, they constrain the 3D jigsaw puzzle solution space by factorizing the permutations into time, x, and y dimensions and by grouping pieces. Second, since the constrained 3D jigsaw is still intractable, they propose four surrogate tasks of the 3D jigsaw: 1) LCCD (detecting the largest continuous cuboid), 2) CSPC (3D permutation pattern classification), 3) CLSC (contrastive learning over permuted clips), 4) CCMR (measuring the global continuity of the permuted clips). They evaluate their method's efficacy on the public benchmarks by following linear/finetuning self-supervised learning evaluation protocols.
Strengths
I like the idea of solving a 3D jigsaw puzzle as a pretext spatio-temporal learning task. By learning to solve the 3D jigsaw puzzle, the learned representations could be discriminative for the downstream tasks. Solving the jigsaw puzzle as a pretext task for representation learning is already explored and shown to be effective both in 2D spatial for images [Noroozi & Favaro, ECCV 2016] and 1D temporal dimension for videos [Xu et al., CVPR2019]. Nevertheless, due to the problem's intractability, there is no such prior work on solving a jigsaw puzzle for video representation learning. Therefore, I think this work is valuable as the authors make the problem tractable, and they show the efficacy of the 3D jigsaw puzzle solving.
Weaknesses and suggestions
However, I have several concerns about the work.
I do not understand how the puzzle pieces are grouped exactly. The authors show an example of grouped permutation: {12345678} -> {84567123}. It is confusing to me. It seems that the groups are {123}, {4567}, {8} from the original sequence. However, how do we make these groups? Are the group sizes always 1, 3, 4 for length-8 sequences? I suggest the authors provide more details on how they group the pieces.
Artificial patterns in the shuffled clips might be problematic. In contrast to the 2D jigsaw [Noroozi & Favaro, ECCV 2016] and 1D jigsaw [Xu et al., CVPR2019], the backbone encoder in this work takes the shuffled clips with artificial patterns (see the Fig. 1(c). There are vertical and horizontal lines). It is unlikely to see these artificial patterns in the downstream tasks. There is a training-testing mismatch. Finetuning might fix the problem, but it is not guaranteed. I want to listen to the authors' opinions on this issue.
Missing ablation experiments and analysis. I list the missing analyses below.
Why is the number of largest continuous cuboids two? What happens if it is one, three, or four?
They use non-local operation between the permuted and the original features to guide the surrogate tasks. It would be informative to show the performance when we remove this part. Also, I am not quite sure why FPN is used only for LCCD. What happens if we do not use FPN for LCCD?
Analysis of the correlation between surrogate task performance and the downstream task performance.
I will increase my rating if the majority of concerns are resolved.
Minor comments
For me, Table 3 is a bit hard to parse. |
ICLR | Title
Continuous Control With Ensemble Deep Deterministic Policy Gradients
Abstract
The growth of deep reinforcement learning (RL) has brought multiple exciting tools and methods to the field. This rapid expansion makes it important to understand the interplay between individual elements of the RL toolbox. We approach this task from an empirical perspective by conducting a study in the continuous control setting. We present multiple insights of fundamental nature, including: a commonly used additive action noise is not required for effective exploration and can even hinder training; the performance of policies trained using existing methods varies significantly across training runs, epochs of training, and evaluation runs; the critics’ initialization plays the major role in ensemble-based actor-critic exploration, while the training is mostly invariant to the actors’ initialization; a strategy based on posterior sampling explores better than the approximated UCB combined with the weighted Bellman backup; the weighted Bellman backup alone cannot replace the clipped double Q-Learning. As a conclusion, we show how existing tools can be brought together in a novel way, giving rise to the Ensemble Deep Deterministic Policy Gradients (ED2) method, to yield state-of-the-art results on continuous control tasks from OpenAI Gym MuJoCo. From the practical side, ED2 is conceptually straightforward, easy to code, and does not require knowledge outside of the existing RL toolbox.
1 INTRODUCTION
Recently, deep reinforcement learning (RL) has achieved multiple breakthroughs in a range of challenging domains (e.g. Silver et al. (2016); Berner et al. (2019); Andrychowicz et al. (2020b); Vinyals et al. (2019)). A part of this success is related to an ever-growing toolbox of tricks and methods that were observed to boost the RL algorithms’ performance (e.g. Hessel et al. (2018); Haarnoja et al. (2018b); Fujimoto et al. (2018); Wang et al. (2020); Osband et al. (2019)). This state of affairs benefits the field but also brings challenges related to often unclear interactions between the individual improvements and the credit assignment related to the overall performance of the algorithm Andrychowicz et al. (2020a); Ilyas et al. (2020).
In this paper, we present a comprehensive empirical study of multiple tools from the RL toolbox applied to the continuous control in the OpenAI Gym MuJoCo setting. These are presented in Section 4 and Appendix B. Our insights include:
• The normally distributed action noise, commonly used for exploration, hinders training.
• The current state-of-the-art methods are unstable under several stability criteria.
• The critics’ initialization plays a major role in ensemble-based actor-critic exploration, while the training is mostly invariant to the actors’ initialization.
• The approximated posterior sampling exploration (Osband et al., 2013) outperforms approximated UCB exploration combined with weighted Bellman backup (Lee et al., 2020).
• The weighted Bellman backup (Lee et al., 2020) can not replace the clipped double Q-Learning (Fujimoto et al., 2018).
To address some of the issues listed above, we introduce the Ensemble Deep Deterministic Policy Gradient (ED2) algorithm1, see Section 3. ED2 brings together existing RL tools in a novel way: it is
1Our code is based on SpinningUp (Achiam, 2018). We open-source it at: https://github.com/ ed2-paper/ED2.
an off-policy algorithm for continuous control, which constructs an ensemble of streamlined versions of TD3 agents and achieves the state-of-the-art performance in OpenAI Gym MuJoCo, substantially improving the results on the two hardest tasks – Ant and Humanoid. Consequently, ED2 does not require knowledge outside of the existing RL toolbox, is conceptually straightforward, and easy to code.
2 BACKGROUND
We model the environment as a Markov Decision Process (MDP). It is defined by the tuple $(S, A, R, P, \gamma, p_0)$, where $S$ is a continuous multi-dimensional state space, $A$ denotes a continuous multi-dimensional action space, $P$ is a transition kernel, $\gamma \in [0, 1)$ stands for a discount factor, $p_0$ refers to an initial state distribution, and $R$ is a reward function. The agent learns a policy from sequences of transitions $\tau = [(s_t, a_t, r_t, s_{t+1}, d)]_{t=0}^{T}$, called episodes or trajectories, where $a_t \sim \pi(\cdot|s_t)$, $s_{t+1} \sim P(\cdot|s_t, a_t)$, $r_t = R(s_t, a_t, s_{t+1})$, $d$ is a terminal signal, and $T$ is the terminal time-step. A stochastic policy $\pi(a|s)$ maps each state to a distribution over actions. A deterministic policy $\mu : S \rightarrow A$ assigns each state an action. All algorithms that we consider in this paper use a different policy for collecting data (exploration) and a different policy for evaluation (exploitation). In order to keep track of the progress, the evaluation runs are performed every ten thousand environment interactions. Because of the environments’ stochasticity, we run the evaluation policy multiple times. Let $\{R_i\}_{i=1}^{N}$ be a set of (undiscounted) returns from $N$ evaluation episodes $\{\tau_i\}_{i=1}^{N}$, i.e. $R_i = \sum_{r_t \in \tau_i} r_t$. We evaluate the policy using the average test return $\bar{R} = \frac{1}{N}\sum_{i=1}^{N} R_i$ and the standard deviation of the test returns $\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(R_i - \bar{R})^2}$.
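These two statistics are straightforward to compute; a small sketch follows, assuming returns is the list of episode returns from one evaluation phase.

```python
import numpy as np

def evaluation_stats(returns):
    """Average test return and sample standard deviation over N evaluation episodes."""
    returns = np.asarray(returns, dtype=np.float64)
    return returns.mean(), returns.std(ddof=1)        # ddof=1 gives the 1/(N-1) normalisation

print(evaluation_stats([4800.0, 5100.0, 4950.0, 5020.0]))
```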
We run experiments on four continuous control tasks and their variants, introduced in the appropriate sections, from the OpenAI Gym MuJoCo suite (Brockman et al., 2016) presented in Figure 1. The agent observes vectors that describe the kinematic properties of the robot and its actions specify torques to be applied on the robot joints. See Appendix D for the details on the experimental setup.
3 ENSEMBLE DEEP DETERMINISTIC POLICY GRADIENTS
For completeness of exposition, we present ED2 before the experimental section. The ED2 architecture is based on an ensemble of Streamlined Off-Policy (SOP) agents (Wang et al., 2020), meaning that our agent is an ensemble of TD3-like agents (Fujimoto et al., 2018) with the action normalization and the ERE replay buffer. The pseudo-code listing can be found in Algorithm 1, while the implementation details, including a more verbose version of pseudo-code (Algorithm 3), can be found in Appendix E. In the data collection phase (Lines 1-9), ED2 selects one actor from the ensemble uniformly at random (Lines 1 and 9) and runs its deterministic policy for the course of one episode (Line 4). In the evaluation phase (not shown in Algorithm 1), the evaluation policy averages all the actors’ output actions. We train the ensemble every 50 environment steps with 50 stochastic gradient descent updates (Lines 10-13). ED2 concurrently learns K · 2 Q-functions, Qφk,1 and Qφk,2 where k ∈ [1...K], by mean square Bellman error minimization, in almost the same way that SOP learns its two Q-functions. The only difference is that we have K critic pairs that are initialized with different random weights and then trained independently with the same batches of data. Because of the different initial weights, each Q-function has a different bias in its Q-values. The K actors, πθk, are trained to maximize their corresponding first critic, Qφk,1, just like SOP.
Algorithm 1 ED2 - Ensemble Deep Deterministic Policy Gradients
Input: init. params for policy θk and Q-functions φk,1, φk,2, k ∈ [1...K]; replay buffer D;
1: Sample the current policy index c ∼ U([1...K]).
2: Reset the environment and observe the state s.
3: repeat
4:     Execute action a = µθc(s)                          ▷ µ uses the action normalization
5:     Observe and store (s, a, r, s′, d) in the replay buffer D.
6:     Set s ← s′
7:     if episode is finished then
8:         Reset the environment and observe initial state s.
9:         Sample the current policy index c ∼ U([1...K]).
10:    if time to update then
11:        for as many steps as done in the environment do
12:            Sample a batch of transitions B = {(s, a, r, s′, d)} ⊂ D      ▷ uses ERE
13:            Update the parameters θk, φk,1 and φk,2 by one gradient step.
14: until convergence
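A condensed Python rendering of Algorithm 1 is given below for readability; env is assumed to follow the Gym step/reset API, actors is a list of K deterministic policies, and update performs one gradient step for every actor and critic pair on a sampled batch (with ERE sampling hidden inside the replay buffer). All names are illustrative and not taken from the released implementation.

```python
import random

def ed2_training_loop(env, actors, replay_buffer, update, total_steps,
                      update_every=50, batch_size=256):
    """One possible rendering of Algorithm 1; interfaces are assumed."""
    k = random.randrange(len(actors))                 # sample the behaviour actor (Line 1)
    obs = env.reset()
    for step in range(total_steps):
        action = actors[k](obs)                       # deterministic action, no noise (Line 4)
        next_obs, reward, done, _ = env.step(action)
        replay_buffer.store(obs, action, reward, next_obs, done)
        obs = next_obs
        if done:
            obs = env.reset()                         # Lines 7-9
            k = random.randrange(len(actors))
        if (step + 1) % update_every == 0:            # Lines 10-13
            for _ in range(update_every):             # as many updates as environment steps
                update(replay_buffer.sample(batch_size))
```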
Utilizing the ensembles requires several design choices, which we summarize below. The ablation study of ED2 elements is provided in Appendix C.
Ensemble
Used: We train the ensemble of 5 actors and 5 critics; each actor learns from its own critic and the whole ensemble is trained on the same data.
Not used: We considered different actor-critic configurations, initialization schemes and relations, as well as the use of random prior networks (Osband et al., 2018), data bootstrap (Osband et al., 2016), and different ensemble sizes. We also change the SOP network sizes and training intensity instead of using the ensemble. Besides the prior networks in some special cases, these turn out to be inferior as shown in Section 4 and Appendix B.1.
Exploration
Used: We pick one actor uniformly at random to collect the data for the course of one episode. The actor is deterministic (no additive action noise is applied). These two choices ensure coherent and temporally-extended exploration similarly to Osband et al. (2016).
Not used: We tested several approaches to exploration: using the ensemble of actors, UCB (Lee et al., 2020), and adding the action noise in different proportions. These experiments are presented in Appendix B.2.
Exploitation
Used: The evaluation policy averages all the actors’ output actions to provide stable performance.
Not used: We tried picking an action with the biggest value estimate (average of the critics’ Q-functions) in evaluation (Huang et al., 2017).
Interestingly, both policies had similar results, see Appendix B.3.
Action normalization
Used: We use the action normalization introduced by Wang et al. (2020).
Not used: We experimented with the observations and rewards normalization, which turned out to be unnecessary. The experiments are presented in Appendix B.4.
Q-function updates
Used: We do 50 SGD updates (ADAM optimizer (Kingma and Ba, 2015), MSE loss) to the actors and the critics every 50 environment interactions, use Clipped Double Q-Learning (Fujimoto et al., 2018).
Not used: We also examined doing the updates at the end of each episode (with the proportional number of updates), using the Huber loss, and doing weighted Bellman backups (Lee et al., 2020). However, we found them to bring no improvement to our method, as presented in Appendix B.5.
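For reference, a minimal PyTorch sketch of the clipped double Q-learning target used by each critic pair follows; target networks and the (1 − d) termination mask are assumed, as in a standard TD3-style setup.

```python
import torch

def clipped_double_q_target(reward, done, next_obs, target_actor,
                            target_q1, target_q2, gamma=0.99):
    """TD target with clipped double Q-learning: take the minimum of the two
    target critics to counteract overestimation."""
    with torch.no_grad():
        next_action = target_actor(next_obs)
        q_min = torch.min(target_q1(next_obs, next_action),
                          target_q2(next_obs, next_action))
        return reward + gamma * (1.0 - done) * q_min
```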
4 EXPERIMENTS
In this section, we present our comprehensive study and the resulting insights. The rest of the experiments verifying that our design choices perform better than alternatives are in Appendix B. Unless stated otherwise, a solid line in the figures represents an average, while a shaded region shows a 95% bootstrap confidence interval. We used 30 seeds for ED2 and the baselines and 7 seeds for the ED2 variants.
4.1 THE NORMALLY DISTRIBUTED ACTION NOISE, COMMONLY USED FOR EXPLORATION, HINDERS TRAINING
In this experiment, we deprive SOP of its exploration mechanism, namely additive normal action noise, and call this variant deterministic SOP (det. SOP). It causes relatively minor deterioration in the Humanoid performance, has no significant influence on the Hopper or Walker performance, and substantially improves the Ant performance, see Figure 2. This result shows that no additional exploration mechanism, often in the form of exploration noise (Lillicrap et al., 2016; Fujimoto et al., 2018; Wang et al., 2020), is required for diverse data collection, and that such noise can even hinder training.
ED2 leverages this insight and constructs an ensemble of deterministic SOP agents presented in Section 3. Figure 3 shows that ED2 magnifies the beneficial effect coming from the deterministic exploration.
ED2 achieves state-of-the-art performance on the OpenAI Gym MuJoCo suite. Figure 4 shows the results of ED2 contrasted with three strong baselines: SUNRISE (Lee et al., 2020), SOP (Wang et al., 2020), and SAC (Haarnoja et al., 2018b).
For completeness, we plot the Humanoid velocities in Figure 5 which shows that our method accelerates to a much higher velocity than the baselines.
4.2 THE CURRENT STATE-OF-THE-ART METHODS ARE UNSTABLE UNDER SEVERAL STABILITY CRITERIA
We consider three notions of stability: inference stability, asymptotic performance stability, and training stability. ED2 outperforms baselines in each of these notions, as discussed below. Similar metrics were also studied in Chan et al. (2020).
Inference stability We say that an agent is inference stable if, when run multiple times, it achieves similar test performance every time. We measure inference stability using the standard deviation of test returns explained in Section 2. We found that the existing methods train policies that are surprisingly sensitive to the randomness in the environment initial conditions2. Figure 4 and Figure 6 show that ED2 successfully mitigates this problem. By the end of the training, ED2 produces results within 1% of the average performance on Humanoid, while the performance of SUNRISE, SOP, and SAC may vary as much as 11%.
2The MuJoCo suite is overall deterministic, nevertheless, little stochasticity is injected at the beginning of each trajectory, see Appendix D for details.
Asymptotic performance stability We say that an agent achieves asymptotic performance stability if it achieves similar test performance across multiple training runs starting from different initial networks weights. Figure 7 shows that ED2 has a significantly smaller variance than the other methods while maintaining high performance.
Training stability We will consider training stable if performance does not severely deteriorate from one evaluation to the next. We define the root mean squared deterioration metric (RMSD) as follows:
$$\mathrm{RMSD} = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\big(\max(\bar{R}_{i-20} - \bar{R}_i, 0)\big)^2},$$
where M is the number of the evaluation phases during training and R̄i is the average test return at the i-th evaluation phase (described in Section 2). We compare returns 20 evaluation phases apart to ensure that the deterioration in performance doesn’t stem from the evaluation variance. ED2 has the lowest RMSD across all tasks, see Figure 8.
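RMSD can be computed directly from the sequence of average test returns; the sketch below assumes one entry per evaluation phase and skips indices that have no counterpart 20 phases earlier.

```python
import numpy as np

def rmsd(avg_returns, gap=20):
    """Root mean squared deterioration over a sequence of average test returns;
    indices without a counterpart `gap` phases earlier are skipped."""
    avg_returns = np.asarray(avg_returns, dtype=np.float64)
    drops = np.maximum(avg_returns[:-gap] - avg_returns[gap:], 0.0)
    return np.sqrt(np.mean(drops ** 2))
```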
4.3 THE CRITICS’ INITIALIZATION PLAYS A MAJOR ROLE IN ENSEMBLE-BASED ACTOR-CRITIC EXPLORATION, WHILE THE TRAINING IS MOSTLY INVARIANT TO THE
ACTORS’ INITIALIZATION
In this experiment, actors’ weights are initialized with the same random values (contrary to the standard case of different initialization). Moreover, we test a corresponding case with critics’ weights initialized with the same random values or simply training only a single critic.
Figure 9 indicates that the choice of actors’ initialization does not matter in any task but Humanoid. Although the average performance on Humanoid seems to be better, it is also less stable. This is quite interesting because the actors are deterministic. Therefore, the exploration must come from the fact that each actor is trained to optimize its own critic.
On the other hand, Figure 9 shows that the setup with a single critic severely impedes the agent’s performance. We suspect that using a single critic impairs the agent’s exploration capabilities, as its actors’ policies, trained to maximize the same critic’s Q-function, become very similar.
4.4 THE APPROXIMATED POSTERIOR SAMPLING EXPLORATION OUTPERFORMS APPROXIMATED UCB EXPLORATION COMBINED WITH WEIGHTED BELLMAN BACKUP
ED2 uses a posterior-sampling-based exploration method (Osband et al., 2016). SUNRISE, on the other hand, approximates the Upper Confidence Bound (UCB) exploration technique and does weighted Bellman backups (Lee et al., 2020). For a fair comparison between ED2 and SUNRISE, we replace SUNRISE’s base algorithm SAC with the SOP algorithm used by ED2. We call this variant SUNRISE-SOP.
We test both methods on the standard MuJoCo benchmarks as well as delayed (Zheng et al., 2018a) and sparse (Plappert et al., 2018) rewards variants. Both variations make the environments harder from the exploration standpoint. In the delayed version, the rewards are accumulated and returned to the agent only every 10 time-steps. In the sparse version, the reward for the forward motion is returned to the agent only after it crosses the threshold of one unit on the x-axis. For a better perspective, a fully trained Humanoid is able to move to around five units until the end of the episode. All the other reward components (living reward, control cost, and contact cost) remain unchanged. The results are presented in Table 1.
ED2 outperforms the non-ensemble method SOP, supporting the argument for ED2’s coherent and temporally-extended exploration. Moreover, we observe that performance in MuJoCo environments benefits from ED2’s approximate Bayesian posterior sampling exploration (Osband et al., 2013), in contrast to the approximated UCB in SUNRISE, which follows the OFU principle. Posterior sampling has been proved to be theoretically superior to the OFU strategy (Osband and Van Roy, 2017).
The experiment where ED2’s exploration mechanism is replaced with UCB is in Appendix B.2. This variant also achieves worse results than ED2. The additional exploration efficiency experiment in the custom Humanoid environment, where an agent has to find and reach a goal position, is in Appendix A.
4.5 THE WEIGHTED BELLMAN BACKUP CAN NOT REPLACE THE CLIPPED DOUBLE Q-LEARNING
We applied the weighted Bellman backups proposed by Lee et al. (2020) to our method. It is suggested that the method mitigates error propagation in Q-learning by re-weighting the Bellman backup based on uncertainty estimates from an ensemble of target Q-functions (i.e. variance of predictions). Interestingly, Figure 10 does not show this positive effect on ED2.
Our method uses clipped double Q-Learning to mitigate overestimation in Q-functions (Fujimoto et al., 2018). We wanted to check if it is required and if it can be exchanged for the weighted Bellman backups used by Lee et al. (2020). Figure 11 shows that clipped double Q-Learning is required and that the weighted Bellman backups can not replace it.
5 RELATED WORK
Off-policy RL Recently, multiple deep RL algorithms for continuous control have been proposed, e.g. DDPG (Lillicrap et al., 2016), TD3 (Fujimoto et al., 2018), SAC (Haarnoja et al., 2018b), SOP (Wang et al., 2020), SUNRISE (Lee et al., 2020). They provide a variety of methods for improving training quality, including double-Q bias reduction van Hasselt et al. (2016), target policy smoothing or different update frequencies for actor and critic Fujimoto et al. (2018), entropy regularization Haarnoja et al. (2018b), action normalization Wang et al. (2020), prioritized experience replay Wang et al. (2020), weighted Bellman backups Kumar et al. (2020); Lee et al. (2020), and use of ensembles Osband et al. (2019); Lee et al. (2020); Kurutach et al. (2018); Chua et al. (2018).
Ensembles Deep ensembles are a practical approximation of a Bayesian posterior, offering improved accuracy and uncertainty estimation Lakshminarayanan et al. (2017); Fort et al. (2019). They
inspired a variety of methods in deep RL. They are often used for temporally-extended exploration; see the next paragraph. Other than that, ensembles of different TD-learning algorithms were used to calculate better Q-learning targets (Chen et al., 2018). Others proposed to combine the actions and value functions of different RL algorithms Wiering and van Hasselt (2008) or the same algorithm with different hyper-parameters Huang et al. (2017). For mixing the ensemble components, complex self-adaptive confidence mechanisms were proposed in Zheng et al. (2018b). Our method is simpler: it uses the same algorithm with the same hyper-parameters without any complex or learnt mixing mechanism. Lee et al. (2020) proposed a unified framework for ensemble learning in deep RL (SUNRISE) which uses bootstrap with random initialization Osband et al. (2016) similarly to our work. We achieve better results than SUNRISE and show in Appendix B that their UCB exploration and weighted Bellman backups do not aid our algorithm performance.
Exploration Various frameworks have been developed to balance exploration and exploitation in RL. The optimism in the face of uncertainty principle Lai and Robbins (1985); Bellemare et al. (2016) assigns an overly optimistic value to each state-action pair, usually in the form of an exploration bonus reward, to promote visiting unseen areas of the environment. The maximum entropy method Haarnoja et al. (2018a) encourages the policy to be stochastic, hence boosting exploration. In the parameter space approach Plappert et al. (2018); Fortunato et al. (2018), noise is added to the network weights, which can lead to temporally-extended exploration and a richer set of behaviours. Posterior sampling Strens (2000); Osband et al. (2016; 2018) methods have similar motivations. They stem from the Bayesian perspective and rely on selecting the maximizing action among sampled and statistically plausible set of action values. The ensemble approach Lowrey et al. (2018); Miłoś et al. (2019); Lee et al. (2020) trains multiple versions of the agent, which yields a diverse set of behaviours and can be viewed as an instance of posterior sampling RL.
6 CONCLUSIONS
We conduct a comprehensive empirical analysis of multiple tools from the RL toolbox applied to continuous control in the OpenAI Gym MuJoCo setting. We believe that the findings can be useful to RL researchers. Additionally, we propose Ensemble Deep Deterministic Policy Gradients (ED2), an ensemble-based off-policy RL algorithm, which achieves state-of-the-art performance and addresses several issues found during the aforementioned study.
7 REPRODUCIBILITY STATEMENT
We have made a significant effort to make our results reproducible. We use 30 random seeds, which is above the currently popular choice in the field (up to 5 seeds). Furthermore, we systematically explain our design choices in Section 3 and we provide a detailed pseudo-code of our method in Algorithm 3 in Appendix E. Additionally, we open-sourced the code for the project3 together with examples of how to reproduce the main experiments. The implementation details are explained in Appendix E and extensive information about the experimental setup is given in Appendix D.
3https://github.com/ed2-paper/ED2
A EXPLORATION EFFICIENCY IN THE CUSTOM HUMANOID ENVIRONMENT
To check the exploration capabilities of our method, we constructed two environments based on Humanoid where the goal is not only to move forward as fast as possible but also to find and reach a specific region. The environments are described in Figure 12.
Because the Humanoid initial state is slightly perturbed every run, we compare solved rates over multiple runs, see details in Appendix D. Figure 13 compares the solved rates of our method and the three baselines. Our method outperforms the baselines. For this experiment, our method uses the prior networks (Osband et al., 2018).
B DESIGN CHOICES
In this section, we summarize the empirical evaluation of various design choices grouped by topics related to an ensemble of agents (B.1), exploration (B.2), exploitation (B.3), normalization (B.4), and Q-function updates (B.5). In the plots, a solid line and a shaded region represent an average and a 95% bootstrap confidence interval over 30 seeds in the case of ED2 (ours) and 7 seeds otherwise. All of these experiments test ED2 as presented in Section 3, with Algorithm 2 used for evaluation (the ensemble critic variant). We call Algorithm 2 a 'vote policy'.
Algorithm 2 Vote policy
1: Input: ensemble size K; policy θk and Q-function φk,1 parameters where k ∈ [1, . . . ,K]; max action scale M;
2: function VOTE_POLICY(s, c)
     ak = M tanh(µθk(s)) for k ∈ [1, . . . ,K]
3:   if use arbitrary critic then
       qk = Qφc,1(s, ak) for k ∈ [1, . . . ,K]
4:   else (use ensemble critic)
       qk = (1/K) ∑_{i∈[1...K]} Qφi,1(s, ak) for k ∈ [1, . . . ,K]
5:   return ak for k = argmax_k qk
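In Python, the vote policy can be sketched as follows (our illustration; the actor and critic callables and the way the arbitrary critic is picked are assumptions):

import numpy as np

def vote_policy(s, actors, critics, max_action, arbitrary_critic=None):
    # actors: list of K callables mu_k(s) returning the pre-squash action;
    # critics: list of K callables Q_k1(s, a) returning a scalar value estimate.
    actions = [max_action * np.tanh(mu(s)) for mu in actors]
    if arbitrary_critic is not None:
        # variant (1): a single critic, fixed for the episode, scores every actor's action
        scores = [arbitrary_critic(s, a) for a in actions]
    else:
        # variant (2): average the value estimates of the whole ensemble critic
        scores = [np.mean([q(s, a) for q in critics]) for a in actions]
    return actions[int(np.argmax(scores))]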
B.1 ENSEMBLE
Prior networks We tested if our algorithm can benefit from prior networks (Osband et al., 2018). It turned out that the results are very similar on OpenAI Gym MuJoCo tasks, see Figure 14. However, the prior networks are useful on our crafted hard-exploration Humanoid environments, see Figure 15.
Ensemble size Figure 16 shows ED2 with different ensemble sizes. As can be seen, the ensemble of size 5 (which we use in ED2) achieves good results, striking a balance between performance and computational overhead.
Data bootstrap Osband et al. (2016) and Lee et al. (2020) remark that training an ensemble of agents using the same training data but with different initialization achieves, in most cases, better performance than applying different training samples to each agent. We confirm this observation in Figure 17. The data bootstrap assigns each transition to each agent in the ensemble with 50% probability.
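As a reference point, the 50% assignment can be implemented as a binary mask over transitions (a sketch with our naming):

import numpy as np

def bootstrap_mask(num_transitions, ensemble_size=5, p=0.5, seed=0):
    # mask[t, k] == 1.0 means transition t is used to train ensemble member k.
    rng = np.random.default_rng(seed)
    return (rng.random((num_transitions, ensemble_size)) < p).astype(np.float32)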
SOP bigger networks and training intensity We checked whether simply training SOP with bigger networks or with higher training intensity (the number of updates made for each collected transition) can get it close to the ED2 results. Figure 18 compares ED2 to SOP with different network sizes, while Figure 19 compares ED2 to SOP with one or five updates per environment step. It turns out that neither bigger networks nor higher training intensity improves SOP performance.
B.2 EXPLORATION
Vote policy In this experiment, we used the so-called "vote policy" described in Algorithm 2. We use it for action selection in step 5 of Algorithm 3 in two variations: (1) a random critic, chosen for the duration of one episode, evaluates each actor's action, or (2) the full ensemble of critics evaluates the actors' actions. Figure 20 shows that the arbitrary-critic variant is not much different from our method. However, in the case of the ensemble critic, we observe a significant performance drop, suggesting deficient exploration.
UCB We tested the UCB exploration method from Lee et al. (2020). This method defines an upper-confidence bound (UCB) based on the mean and variance of Q-functions in an ensemble and selects actions with the highest UCB for efficient exploration. Figure 21 shows that the UCB exploration method makes the results of our algorithm worse.
Gaussian noise While our method uses ensemble-based temporally coherent exploration, the most popular choice of exploration is injecting i.i.d. noise (Fujimoto et al., 2018; Wang et al., 2020). We evaluate whether these two approaches can be used together. We used Gaussian noise with a standard deviation of 0.29, the default value in Wang et al. (2020). We found that the effects are task-specific: barely visible for Hopper and Walker, positive in the case of Humanoid, and negative for Ant – see Figure 22. In a more refined experiment, we varied the noise level. With more noise the Humanoid results are better, whereas the Ant results are worse – see Figure 23.
B.3 EXPLOITATION
We used the vote policy, see Algorithm 2, as the evaluation policy in step 21 of Algorithm 3. Figure 24 shows that the vote policy does worse on the OpenAI Gym MuJoCo tasks. However, on our custom Humanoid tasks introduced in Section 4, it improves our agent's performance – see Figure 25.
B.4 NORMALIZATION
We validated whether reward or observation normalization (Andrychowicz et al., 2020a) helps our method. In both cases, we keep the empirical mean and standard deviation of each reward/observation coordinate, based on all rewards/observations seen so far, and normalize rewards/observations by subtracting the empirical mean and dividing by the standard deviation. It turned out that only the observation normalization significantly helps the agent on Humanoid, see Figures 26 and 27. The influence of action normalization is tested in Appendix C.
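The normalization we tested keeps running statistics of everything seen so far; a minimal sketch (ours) is given below.

import numpy as np

class RunningNormalizer:
    # Tracks the empirical mean and variance of all values seen so far
    # and normalizes new values with them.
    def __init__(self, shape, eps=1e-4):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = eps

    def update(self, batch):                      # batch: [N, *shape]
        batch_mean, batch_var, n = batch.mean(axis=0), batch.var(axis=0), batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + n
        self.mean = self.mean + delta * n / total
        # parallel (Chan et al.) update of the running variance
        m_a, m_b = self.var * self.count, batch_var * n
        self.var = (m_a + m_b + delta ** 2 * self.count * n / total) / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)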
B.5 Q-FUNCTION UPDATES
Huber loss We tried using the Huber loss for the Q-function training. It makes the results on all tasks worse, see Figure 28.
C ABLATION STUDY
In this section, we ablate the ED2 components to see their impact on performance and stability. We start with the ensemble exploration and exploitation and then move on to the action normalization and the ERE replay buffer. In all plots, a solid line and a shaded region represent an average and a 95% bootstrap confidence interval over 30 seeds in all but action normalization and ERE replay buffer experiments, where we run 7 seeds.
Exploration & Exploitation In the first experiment, we wanted to isolate the effect of ensemble-based temporally coherent exploration on the performance and stability of ED2. Figures 29-32 compare the performance and stability of ED2 and one baseline, SOP, to ED2 with the single actor (the first one) used for evaluation in step 21 of Algorithm 3. It is worth noting that the action selection during the data collection, step 5 in Algorithm 3, is left unchanged – the ensemble of actors is used for exploration and each actor is trained on all the data. This should isolate the effect of exploration on
the test performance of every actor. The results show that the performance improvement and stability of ED2 do not come solely from the efficient exploration. The ablated ED2 performs comparably to the baseline and is even less stable.
In the next experiment, we wanted to check whether the ensemble evaluation alone is sufficient. Figure 33 compares the performance of ED2 and one baseline, SOP, to ED2 with the single actor (the first one) used for the data collection in step 5 of Algorithm 3. The action selection during the evaluation, step 21 in Algorithm 3, is left unchanged – the ensemble of actors is trained on the data collected only by one of the actors. We add Gaussian noise to the single actor's actions for exploration as described in Appendix B.2. The results show that the ensemble actors' test performance collapses, possibly because of training on the out-of-distribution data. This implies that the ensemble of actors, used for evaluation, improves the test performance and stability. However, it is required that the same ensemble of actors is also used for exploration during data collection.
Action normalization The implementation details of the action normalization are described in Appendix E. Figure 34 shows that the action normalization is especially required on the Ant and Humanoid environments, while not disrupting the training on the other tasks.
ERE replay buffer The implementation details of the ERE replay buffer are described in Appendix E. In Figure 35 we observe that it improves the final performance of ED2 on all tasks, especially on Walker2d and Humanoid.
D EXPERIMENTAL SETUP
Plots In all evaluations, we used 30 evaluation episodes to better assess the average performance of each policy, as described in Section 2. For a more pleasant look and easier visual assessment, we smoothed the lines using an exponential moving average with a smoothing factor equal to 0.4.
OpenAI Gym MuJoCo In the MuJoCo environments that we used, a state is defined by the (x, y, z) position and velocity of the robot's root, and the angular position and velocity of each of its joints. The observation holds almost all information from the state except the x and y positions of the robot's root. The action is a torque to be applied to each joint of the robot. The sizes of those spaces for each environment are summarised in Table 2.
MuJoCo is a deterministic physics engine, thus all simulations conducted inside it are deterministic. This includes simulations of our environments. However, to simplify the process of data gathering and to counteract over-fitting, the authors of OpenAI Gym decided to introduce some stochasticity. Each episode starts from a slightly different state: initial positions and velocities are perturbed with random noise (uniform or normal, depending on the particular environment).
E IMPLEMENTATION DETAILS
Architecture and hyper-parameters In our experiments, we use deep neural networks with two hidden layers, each of them with 256 units. All of the networks use ReLU as an activation, except on the final output layer, where the activation used varies depending on the model: critic networks use no activation, while actor networks use tanh() multiplied by the max action scale. Table 3 shows the hyper-parameters used for the tested algorithms.
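A sketch of these networks in PyTorch (ours; the action normalization described in the next paragraph is omitted here for brevity):

import torch
import torch.nn as nn

def mlp(sizes, out_activation=None):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1])]
        if i < len(sizes) - 2:
            layers += [nn.ReLU()]
    if out_activation is not None:
        layers += [out_activation]
    return nn.Sequential(*layers)

class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim, max_action, hidden=(256, 256)):
        super().__init__()
        self.net = mlp([obs_dim, *hidden, act_dim], out_activation=nn.Tanh())
        self.max_action = max_action
    def forward(self, obs):
        return self.max_action * self.net(obs)            # tanh output scaled by the max action

class Critic(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=(256, 256)):
        super().__init__()
        self.net = mlp([obs_dim + act_dim, *hidden, 1])   # no output activation
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)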
Action normalization Our algorithm employs the action normalization proposed by Wang et al. (2020). It means that before applying the squashing function (e.g. tanh()), the outputs of each actor network are normalized in the following way: let µ = (µ1, . . . , µA) be the output of the actor's network and let G = (∑_{i=1}^{A} |µi|)/A be the average magnitude of this output, where A is the action's dimensionality. If G > 1, we normalize the output by setting µi to µi/G for all i = 1, . . . , A. Otherwise, we leave the output unchanged. Each actor's outputs are normalized independently from the other actors in the ensemble.
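A direct sketch of this normalization (ours):

import numpy as np

def normalize_pre_tanh(mu):
    # mu: one actor's raw outputs before the tanh squashing, shape [A].
    g = np.mean(np.abs(mu))     # G: average magnitude of the output
    return mu / g if g > 1.0 else mu

The normalized output is then passed through tanh() and scaled by the max action M; each actor in the ensemble is normalized independently.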
Algorithm 3 ED2 - Ensemble Deep Deterministic Policy Gradients
Input: ensemble size K; init. policy θk and Q-function φk,1, φk,2 parameters where k ∈ [1, . . . ,K]; replay buffer D; max action scale M; target smoothing std. dev. σ; interpolation factor ρ;
1: Set the target parameters φ̄k,1 ← φk,1, φ̄k,2 ← φk,2
2: Sample the current policy index c ∼ U([1, . . . ,K]).
3: Reset the environment and observe the state s.
4: repeat
5:   Execute action a = M tanh(µθc(s))  ▷ µ uses the action normalization
6:   Observe and store (s, a, r, s′, d) in the replay buffer D.
7:   Set s ← s′
8:   if episode is finished then
9:     Reset the environment and observe initial state s.
10:    Sample the current policy index c ∼ U([1, . . . ,K]).
11:  if time to update then
12:    for as many steps as done in the environment do
13:      Sample a batch of transitions B = {(s, a, r, s′, d)} ⊂ D  ▷ uses ERE
14:      Compute targets
           yk(r, s′, d) = r + γ(1 − d) min_{i=1,2} Qφ̄k,i(s′, a′k),
           a′k = M tanh(µθk(s′) + ε), ε ∼ N(0, σ)
15:      Update the Q-functions by one step of gradient descent using
           ∇φk,i (1/(|B| · K)) ∑_{(s,a,r,s′,d)∈B} (Qφk,i(s, a) − yk(r, s′, d))²  for i ∈ {1, 2}, k ∈ [1, . . . ,K]
16:      Update the policies by one step of gradient ascent using
           ∇θk (1/(|B| · K)) ∑_{s∈B} Qφk,1(s, µθk(s))  for k ∈ [1, . . . ,K]
17:      Update target parameters with
           φ̄k,i ← ρ φ̄k,i + (1 − ρ) φk,i  for i ∈ {1, 2}, k ∈ [1, . . . ,K]
18:  if time to evaluate then
19:    for specified number of evaluation runs do
20:      Reset the environment and observe the state s.
21:      Execute policy a = (1/K) ∑_{i=1}^{K} M tanh(µθi(s)) until the terminal state.
22:      Record and log the return.
23: until convergence
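For readers who prefer code, the update part of Algorithm 3 (steps 14-17) and the evaluation policy of step 21 can be sketched in PyTorch as below. This is our illustration: the networks, optimizers, and ERE sampling are abstracted away, actors[k] denotes the raw µθk network before the tanh squashing, critics return value estimates of shape [B], and the default hyper-parameter values are placeholders.

import torch

def ed2_update(batch, actors, critics1, critics2, targ1, targ2,
               actor_opts, critic_opts, max_action, gamma=0.99, sigma=0.2, rho=0.995):
    s, a, r, s2, d = batch                                   # r, d: tensors of shape [B]
    for k in range(len(actors)):
        with torch.no_grad():                                # step 14: targets
            noise = torch.randn_like(a) * sigma
            a2 = max_action * torch.tanh(actors[k](s2) + noise)
            y = r + gamma * (1.0 - d) * torch.min(targ1[k](s2, a2), targ2[k](s2, a2))
        # step 15: critic update (MSE towards the shared target y)
        critic_loss = ((critics1[k](s, a) - y) ** 2).mean() + ((critics2[k](s, a) - y) ** 2).mean()
        critic_opts[k].zero_grad()
        critic_loss.backward()
        critic_opts[k].step()
        # step 16: actor update, maximizing the k-th first critic
        # (a full implementation would freeze the critic parameters during this step)
        actor_loss = -critics1[k](s, max_action * torch.tanh(actors[k](s))).mean()
        actor_opts[k].zero_grad()
        actor_loss.backward()
        actor_opts[k].step()
        # step 17: Polyak-average the target critics
        with torch.no_grad():
            for p_t, p in zip(list(targ1[k].parameters()) + list(targ2[k].parameters()),
                              list(critics1[k].parameters()) + list(critics2[k].parameters())):
                p_t.data.mul_(rho)
                p_t.data.add_((1.0 - rho) * p.data)

def evaluation_action(s, actors, max_action):                # step 21: average of all actors
    with torch.no_grad():
        return torch.stack([max_action * torch.tanh(mu(s)) for mu in actors]).mean(dim=0)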
Emphasizing Recent Experience We implement the Emphasizing Recent Experience (ERE) mechanism from Wang et al. (2020). ERE samples non-uniformly from the most recent experiences stored in the replay buffer. Let B be the number of mini-batch updates and |D| be the size of the replay buffer. When performing the gradient updates, we sample from the most recent c_b data points stored in the replay buffer, where c_b = |D| · η^(b·1000/B) for b = 1, . . . , B.
The hyper-parameter η starts off with a set value of η0 and is later adapted based on the improvements in the agent training performance. Let Irecent be the improvement in terms of training episode returns made over the last |D|/2 time-steps and Imax be the maximum of such improvements over the course of the training. We adapt η according to the formula:
η = η0 · Irecent/Imax + (1 − Irecent/Imax)
Our implementation uses the exponentially weighted moving average to store the value of Irecent. More concretely, we define Irecent based on two additional parameters Rrecent and Rprev so that Irecent = Rrecent −Rprev . Those parameters are then updated whenever we receive a new training episode return ep_ret:
Rrecent = λrecent · ep_ret + (1 − λrecent) · Rrecent
Rprev = λprev · ep_ret + (1 − λprev) · Rprev
where λprev = T/⌊|D|/2⌋, λrecent = 10 · λprev, and T is the maximum length of an episode.
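A compact sketch of ERE as described above (ours; the replay buffer is assumed to be an ordered list from oldest to newest transition):

import numpy as np

def ere_range(buffer_len, b, num_updates, eta):
    # c_b = |D| * eta^(b*1000/B): how many of the most recent transitions
    # the b-th mini-batch update is allowed to sample from.
    return int(buffer_len * eta ** (b * 1000 / num_updates))

def adapt_eta(eta0, improvement_recent, improvement_max):
    ratio = improvement_recent / improvement_max if improvement_max > 0 else 0.0
    return eta0 * ratio + (1.0 - ratio)

def sample_recent(buffer, c_b, batch_size, rng=np.random.default_rng()):
    c_b = max(min(c_b, len(buffer)), batch_size)          # keep the sampling range valid
    idx = rng.integers(len(buffer) - c_b, len(buffer), size=batch_size)
    return [buffer[i] for i in idx]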
Hardware During the training of our models, we employ only CPUs using a cluster where each node has 28 available cores of 2.6 GHz, alongside at least 64 GB of memory. The running time of a typical experiment did not exceed 24 hours.

1. What is the focus of the paper in terms of research area and contributions?
2. What are the strengths of the proposed approach, particularly regarding its ability to achieve state-of-the-art results?
3. Do you have any concerns or suggestions regarding the experimental design or comparisons made in the paper?
4. How does the reviewer assess the novelty and impact of the paper's findings?
Summary Of The Paper
The paper presents an empirical study evaluating the commonly accepted design choice in off-policy Deep RL algorithms in continuous control settings. The use of additive exploration noise, initialization choices, update frequency, and precision for retraining are tested empirically highlighting some interesting results. The paper also introduces ED2 - an ensemble method utilizing the design choices from the study which is demonstrated to achieve SOTA results on Mujoco benchmarks.
Review
The paper is very well-written, flows smoothly and is a pleasure to read. The ideas are well articulated and clear.
The result in Fig 2 is surprising as it suggests that the additive normal action noise is entirely unnecessary. This has been a fixture in most DRL algorithms. However, looking at Appendix B.2, the authors did not test Ornstein-Uhlenbeck noise (DDPG paper) as one of the baselines. Adding this popular choice would complete the empirical evaluation. I do not think that OU noise would change the conclusions, but it would be nice to include for the sake of completeness, considering that it is used in some of the seminal works in the field.
The results for the stability experiments in Fig 6-8 on inference stability, asymptotic performance stability and training stability do not seem that surprising to me. Wouldn't an ensemble method fundamentally lead to more stable learning? SUNRISE seems to be an exception here for some domains. However, ED2 being more 'stable' than a lone-network algorithm run (like the SAC baseline) seems rather obvious to me. An ensemble method should fundamentally be more stable as it has the advantage of N=5 random initializations. Would a better comparison perhaps be against an ensemble of SAC runs?
The main contributions of the paper are (1) the empirical study into the various design choices within DRL algorithms used for continuous control settings and (2) an ensemble approach that integrates these learnings. While the ensemble method achieves SOTA results on many tasks and the empirical study presents some riveting results, the novelty in the paper is quite limited.
1. What is the focus of the paper in terms of off-policy RL algorithms?
2. What are the strengths of the proposed method called ED2?
3. What are the weaknesses and questions raised regarding the results and their interpretation?
4. How does the reviewer assess the clarity and relevance of the paper's content?
5. Are there any minor issues or suggestions for improvement in the review?
Summary Of The Paper
This paper presents an empirical study of different commonly-used tricks and elements of off-policy RL algorithms and tries to understand the interplay between those elements. The authors propose a new method called ED2 that utilizes their insights from the empirical study.
Review
Strengths
The paper is also well-written and easy to understand.
I believe the paper studies a problem that's important and significant in the RL community and very relevant to the venue.
Weaknesses/questions
In general, I think the results should be treated more carefully and some theoretical motivation is needed. Following are some of my questions.
1. In section 4.1, the conclusion "This result shows that no additional exploration mechanism, often in a form of an exploration noise (Lillicrap et al., 2016; Fujimoto et al., 2018; Wang et al., 2020), is required for the diverse data collection and it can even hinder training." seems rather strong judging from Fig. 2 (I would say they are roughly the same).
2. "Figure 3 shows that ED2 magnifies the beneficial effect coming from the deterministic exploration." - wouldn't it make better sense to compare against baselines that are also ensembles, e.g. an ensemble of SOP instead of SOP? How do you know what brings the performance: the ensemble, the deterministic action, or both?
3. Fig. 4 again: why not compare against the ensemble baselines? I think it's important to ablate the design choices that are actually important.
Some minor problems.
1. Maybe more introduction for SOP (e.g. what's the ERE replay buffer?).
2. Sec. 3, "These two choices ensure coherent and temporally-extended exploration similarly to Osband et al. (2016)." - I do not understand why the exploration is "coherent and temporally-extended".
3. In Sec. 3, I guess the used and not used choices should be comparable and matched, but why is action normalization compared with "observations and rewards normalization"?
4. Fig. 5, what does it mean for the Humanoid velocities? Could you elaborate?
ICLR | Title
Continuous Control With Ensemble Deep Deterministic Policy Gradients
Abstract
The growth of deep reinforcement learning (RL) has brought multiple exciting tools and methods to the field. This rapid expansion makes it important to understand the interplay between individual elements of the RL toolbox. We approach this task from an empirical perspective by conducting a study in the continuous control setting. We present multiple insights of fundamental nature, including: a commonly used additive action noise is not required for effective exploration and can even hinder training; the performance of policies trained using existing methods varies significantly across training runs, epochs of training, and evaluation runs; the critics’ initialization plays the major role in ensemble-based actor-critic exploration, while the training is mostly invariant to the actors’ initialization; a strategy based on posterior sampling explores better than the approximated UCB combined with the weighted Bellman backup; the weighted Bellman backup alone cannot replace the clipped double Q-Learning. As a conclusion, we show how existing tools can be brought together in a novel way, giving rise to the Ensemble Deep Deterministic Policy Gradients (ED2) method, to yield state-of-the-art results on continuous control tasks from OpenAI Gym MuJoCo. From the practical side, ED2 is conceptually straightforward, easy to code, and does not require knowledge outside of the existing RL toolbox.
1 INTRODUCTION
Recently, deep reinforcement learning (RL) has achieved multiple breakthroughs in a range of challenging domains (e.g. Silver et al. (2016); Berner et al. (2019); Andrychowicz et al. (2020b); Vinyals et al. (2019)). A part of this success is related to an ever-growing toolbox of tricks and methods that were observed to boost the RL algorithms’ performance (e.g. Hessel et al. (2018); Haarnoja et al. (2018b); Fujimoto et al. (2018); Wang et al. (2020); Osband et al. (2019)). This state of affairs benefits the field but also brings challenges related to often unclear interactions between the individual improvements and the credit assignment related to the overall performance of the algorithm Andrychowicz et al. (2020a); Ilyas et al. (2020).
In this paper, we present a comprehensive empirical study of multiple tools from the RL toolbox applied to the continuous control in the OpenAI Gym MuJoCo setting. These are presented in Section 4 and Appendix B. Our insights include:
• The normally distributed action noise, commonly used for exploration, hinders training.
• The current state-of-the-art methods are unstable under several stability criteria.
• The critics’ initialization plays a major role in ensemble-based actor-critic exploration, while the training is mostly invariant to the actors’ initialization.
• The approximated posterior sampling exploration (Osband et al., 2013) outperforms approximated UCB exploration combined with weighted Bellman backup (Lee et al., 2020).
• The weighted Bellman backup (Lee et al., 2020) can not replace the clipped double Q-Learning (Fujimoto et al., 2018).
To address some of the issues listed above, we introduce the Ensemble Deep Deterministic Policy Gradient (ED2) algorithm¹, see Section 3. ED2 brings together existing RL tools in a novel way: it is an off-policy algorithm for continuous control, which constructs an ensemble of streamlined versions of TD3 agents and achieves the state-of-the-art performance in OpenAI Gym MuJoCo, substantially improving the results on the two hardest tasks – Ant and Humanoid. Consequently, ED2 does not require knowledge outside of the existing RL toolbox, is conceptually straightforward, and easy to code.
¹Our code is based on SpinningUp (Achiam, 2018). We open-source it at: https://github.com/ed2-paper/ED2.
2 BACKGROUND
We model the environment as a Markov Decision Process (MDP). It is defined by the tuple (S, A, R, P, γ, p₀), where S is a continuous multi-dimensional state space, A denotes a continuous multi-dimensional action space, P is a transition kernel, γ ∈ [0, 1) stands for a discount factor, p₀ refers to an initial state distribution, and R is a reward function. The agent learns a policy from sequences of transitions τ = [(s_t, a_t, r_t, s_{t+1}, d)]_{t=0}^{T}, called episodes or trajectories, where a_t ∼ π(·|s_t), s_{t+1} ∼ P(·|s_t, a_t), r_t = R(s_t, a_t, s_{t+1}), d is a terminal signal, and T is the terminal time-step. A stochastic policy π(a|s) maps each state to a distribution over actions. A deterministic policy µ : S → A assigns each state an action. All algorithms that we consider in this paper use a different policy for collecting data (exploration) and a different policy for evaluation (exploitation). In order to keep track of the progress, the evaluation runs are performed every ten thousand environment interactions. Because of the environments’ stochasticity, we run the evaluation policy multiple times. Let {R_i}_{i=1}^{N} be a set of (undiscounted) returns from N evaluation episodes {τ_i}_{i=1}^{N}, i.e. R_i = Σ_{r_t ∈ τ_i} r_t. We evaluate the policy using the average test return R̄ = (1/N) Σ_{i=1}^{N} R_i and the standard deviation of the test returns σ = √( (1/(N−1)) Σ_{i=1}^{N} (R_i − R̄)² ).
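In code, these two evaluation statistics amount to a mean and a sample standard deviation over the recorded test returns; a minimal numpy sketch (variable names are ours, not taken from the released code):

```python
import numpy as np

def evaluation_stats(returns):
    """Average test return and its standard deviation over N evaluation episodes."""
    returns = np.asarray(returns, dtype=np.float64)
    avg_return = returns.mean()          # R_bar = (1/N) * sum_i R_i
    std_return = returns.std(ddof=1)     # uses the (N - 1) denominator
    return avg_return, std_return
```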
We run experiments on four continuous control tasks and their variants, introduced in the appropriate sections, from the OpenAI Gym MuJoCo suite (Brockman et al., 2016) presented in Figure 1. The agent observes vectors that describe the kinematic properties of the robot and its actions specify torques to be applied on the robot joints. See Appendix D for the details on the experimental setup.
3 ENSEMBLE DEEP DETERMINISTIC POLICY GRADIENTS
For completeness of exposition, we present ED2 before the experimental section. The ED2 architecture is based on an ensemble of Streamlined Off-Policy (SOP) agents (Wang et al., 2020), meaning that our agent is an ensemble of TD3-like agents (Fujimoto et al., 2018) with the action normalization and the ERE replay buffer. The pseudo-code listing can be found in Algorithm 1, while the implementation details, including a more verbose version of the pseudo-code (Algorithm 3), can be found in Appendix E. In the data collection phase (Lines 1-9), ED2 selects one actor from the ensemble uniformly at random (Lines 1 and 9) and runs its deterministic policy for the course of one episode (Line 4). In the evaluation phase (not shown in Algorithm 1), the evaluation policy averages all the actors’ output actions. We train the ensemble every 50 environment steps with 50 stochastic gradient descent updates (Lines 10-13). ED2 concurrently learns K · 2 Q-functions, Q_{φ_{k,1}} and Q_{φ_{k,2}} for k ∈ [1...K], by mean square Bellman error minimization, in almost the same way that SOP learns its two Q-functions. The only difference is that we have K critic pairs that are initialized with different random weights and then trained independently with the same batches of data. Because of the different initial weights, each Q-function has a different bias in its Q-values. The K actors, π_{θ_k}, train by maximizing their corresponding first critic, Q_{φ_{k,1}}, just like SOP.
Algorithm 1 ED2 - Ensemble Deep Deterministic Policy Gradients
Input: init. params for policy θ_k and Q-functions φ_{k,1}, φ_{k,2}, k ∈ [1...K]; replay buffer D
1: Sample the current policy index c ∼ U([1...K]).
2: Reset the environment and observe the state s.
3: repeat
4:     Execute action a = µ_{θ_c}(s)    ▷ µ uses the action normalization
5:     Observe and store (s, a, r, s′, d) in the replay buffer D.
6:     Set s ← s′
7:     if episode is finished then
8:         Reset the environment and observe initial state s.
9:         Sample the current policy index c ∼ U([1...K]).
10:    if time to update then
11:        for as many steps as done in the environment do
12:            Sample a batch of transitions B = {(s, a, r, s′, d)} ⊂ D    ▷ uses ERE
13:            Update the parameters θ_k, φ_{k,1} and φ_{k,2} by one gradient step.
14: until convergence
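For readers who prefer code to pseudo-code, the sketch below mirrors Algorithm 1 in plain Python. The `env`, `actors`, and `agent` objects are hypothetical placeholders, not the interfaces of the released implementation:

```python
import random

def run_ed2(env, actors, agent, steps=1_000_000, update_every=50):
    """Sketch of the ED2 loop: one randomly chosen deterministic actor per episode."""
    c = random.randrange(len(actors))        # sample current policy index uniformly
    obs = env.reset()
    for t in range(1, steps + 1):
        action = actors[c].act(obs)          # deterministic action, no additive noise
        next_obs, reward, done, _ = env.step(action)
        agent.buffer.store(obs, action, reward, next_obs, done)
        obs = next_obs
        if done:
            obs = env.reset()
            c = random.randrange(len(actors))   # new actor for the next episode
        if t % update_every == 0:
            for _ in range(update_every):       # 50 SGD updates every 50 env steps
                batch = agent.buffer.sample()   # ERE sampling in the full method
                agent.update(batch)             # updates all K actors and critic pairs
```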
Utilizing the ensembles requires several design choices, which we summarize below. The ablation study of ED2 elements is provided in Appendix C.
Ensemble
Used: We train the ensemble of 5 actors and 5 critics; each actor learns from its own critic and the whole ensemble is trained on the same data.
Not used: We considered different actor-critic configurations, initialization schemes and relations, as well as the use of random prior networks (Osband et al., 2018), data bootstrap (Osband et al., 2016), and different ensemble sizes. We also change the SOP network sizes and training intensity instead of using the ensemble. Besides the prior networks in some special cases, these turn out to be inferior as shown in Section 4 and Appendix B.1.
Exploration
Used: We pick one actor uniformly at random to collect the data for the course of one episode. The actor is deterministic (no additive action noise is applied). These two choices ensure coherent and temporally-extended exploration similarly to Osband et al. (2016).
Not used: We tested several approaches to exploration: using the ensemble of actors, UCB (Lee et al., 2020), and adding the action noise in different proportions. These experiments are presented in Appendix B.2.
Exploitation
Used: The evaluation policy averages all the actors’ output actions to provide stable performance.
Not used: We tried picking an action with the biggest value estimate (average of the critics’ Q-functions) in evaluation (Huang et al., 2017).
Interestingly, both policies had similar results, see Appendix B.3.
Action normalization
Used: We use the action normalization introduced by Wang et al. (2020).
Not used: We experimented with the observations and rewards normalization, which turned out to be unnecessary. The experiments are presented in Appendix B.4.
Q-function updates
Used: We do 50 SGD updates (ADAM optimizer (Kingma and Ba, 2015), MSE loss) to the actors and the critics every 50 environment interactions, and use Clipped Double Q-Learning (Fujimoto et al., 2018).
Not used: We also examined doing the updates at the end of each episode (with the proportional number of updates), using the Huber loss, and doing weighted Bellman backups (Lee et al., 2020). However, we found them to bring no improvement to our method, as presented in Appendix B.5.
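To make the clipped double Q-Learning target used in these updates concrete, a minimal numpy sketch follows; the `next_q1`/`next_q2` arrays stand for the two target critics' estimates of one ensemble member, and the names are ours, not the paper's:

```python
import numpy as np

def clipped_double_q_target(rewards, dones, next_q1, next_q2, gamma=0.99):
    """Bellman target that takes the minimum of two target critics' estimates."""
    next_q = np.minimum(next_q1, next_q2)          # clipping mitigates overestimation
    return rewards + gamma * (1.0 - dones) * next_q
```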
4 EXPERIMENTS
In this section, we present our comprehensive study and the resulting insights. The rest of the experiments verifying that our design choices perform better than alternatives are in Appendix B. Unless stated otherwise, a solid line in the figures represents an average, while a shaded region shows a 95% bootstrap confidence interval. We used 30 seeds for ED2 and the baselines and 7 seeds for the ED2 variants.
4.1 THE NORMALLY DISTRIBUTED ACTION NOISE, COMMONLY USED FOR EXPLORATION, HINDERS TRAINING
In this experiment, we deprive SOP of its exploration mechanism, namely additive normal action noise, and call this variant deterministic SOP (det. SOP). It causes relatively minor deterioration in the Humanoid performance, has no significant influence on the Hopper or Walker performance, and substantially improves the Ant performance, see Figure 2. This result shows that no additional exploration mechanism, often in a form of an exploration noise (Lillicrap et al., 2016; Fujimoto et al., 2018; Wang et al., 2020), is required for the diverse data collection and it can even hinder training.
ED2 leverages this insight and constructs an ensemble of deterministic SOP agents presented in Section 3. Figure 3 shows that ED2 magnifies the beneficial effect coming from the deterministic exploration.
ED2 achieves state-of-the-art performance on the OpenAI Gym MuJoCo suite. Figure 4 shows the results of ED2 contrasted with three strong baselines: SUNRISE (Lee et al., 2020), SOP (Wang et al., 2020), and SAC (Haarnoja et al., 2018b).
For completeness, we plot the Humanoid velocities in Figure 5 which shows that our method accelerates to a much higher velocity than the baselines.
4.2 THE CURRENT STATE-OF-THE-ART METHODS ARE UNSTABLE UNDER SEVERAL STABILITY CRITERIA
We consider three notions of stability: inference stability, asymptotic performance stability, and training stability. ED2 outperforms baselines in each of these notions, as discussed below. Similar metrics were also studied in Chan et al. (2020).
Inference stability We say that an agent is inference stable if, when run multiple times, it achieves similar test performance every time. We measure inference stability using the standard deviation of test returns explained in Section 2. We found that the existing methods train policies that are surprisingly sensitive to the randomness in the environment initial conditions2. Figure 4 and Figure 6 show that ED2 successfully mitigates this problem. By the end of the training, ED2 produces results within 1% of the average performance on Humanoid, while the performance of SUNRISE, SOP, and SAC may vary as much as 11%.
2The MuJoCo suite is overall deterministic, nevertheless, little stochasticity is injected at the beginning of each trajectory, see Appendix D for details.
Asymptotic performance stability We say that an agent achieves asymptotic performance stability if it achieves similar test performance across multiple training runs starting from different initial networks weights. Figure 7 shows that ED2 has a significantly smaller variance than the other methods while maintaining high performance.
Training stability We will consider training stable if performance does not severely deteriorate from one evaluation to the next. We define the root mean squared deterioration metric (RMSD) as follows:
RMSD = √( (1/M) Σ_{i=1}^{M} ( max(R̄_{i−20} − R̄_i, 0) )² ),
where M is the number of the evaluation phases during training and R̄i is the average test return at the i-th evaluation phase (described in Section 2). We compare returns 20 evaluation phases apart to ensure that the deterioration in performance doesn’t stem from the evaluation variance. ED2 has the lowest RMSD across all tasks, see Figure 8.
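The metric can be computed directly from the sequence of average test returns; a minimal sketch (array names are illustrative, and it assumes more than `lag` evaluation phases):

```python
import numpy as np

def rmsd(avg_returns, lag=20):
    """Root mean squared deterioration over evaluation phases spaced `lag` apart."""
    r = np.asarray(avg_returns, dtype=np.float64)
    drops = np.maximum(r[:-lag] - r[lag:], 0.0)   # deterioration from phase i-lag to i
    return np.sqrt(np.mean(drops ** 2))           # averages over the valid comparisons
```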
4.3 THE CRITICS’ INITIALIZATION PLAYS A MAJOR ROLE IN ENSEMBLE-BASED ACTOR-CRITIC EXPLORATION, WHILE THE TRAINING IS MOSTLY INVARIANT TO THE ACTORS’ INITIALIZATION
In this experiment, actors’ weights are initialized with the same random values (contrary to the standard case of different initialization). Moreover, we test a corresponding case with critics’ weights initialized with the same random values or simply training only a single critic.
Figure 9 indicates that the choice of actors’ initialization does not matter in any task but Humanoid. Although the average performance on Humanoid seems to be better, it is also less stable. This is quite interesting because the actors are deterministic. Therefore, the exploration must come from the fact that each actor is trained to optimize its own critic.
On the other hand, Figure 9 shows that the setup with the single critic severely impedes the agent’s performance. We suspect that using the single critic impairs the agent’s exploration capabilities as its actors’ policies, trained to maximize the same critic’s Q-function, become very similar.
4.4 THE APPROXIMATED POSTERIOR SAMPLING EXPLORATION OUTPERFORMS APPROXIMATED UCB EXPLORATION COMBINED WITH WEIGHTED BELLMAN BACKUP
ED2 uses a posterior sampling based exploration method (Osband et al., 2016). SUNRISE, on the other hand, approximates the Upper Confidence Bound (UCB) exploration technique and does weighted Bellman backups (Lee et al., 2020). For a fair comparison between ED2 and SUNRISE, we replace SUNRISE’s base algorithm, SAC, with the SOP algorithm used by ED2. We call this variant SUNRISE-SOP.
We test both methods on the standard MuJoCo benchmarks as well as delayed (Zheng et al., 2018a) and sparse (Plappert et al., 2018) rewards variants. Both variations make the environments harder from the exploration standpoint. In the delayed version, the rewards are accumulated and returned to the agent only every 10 time-steps. In the sparse version, the reward for the forward motion is returned to the agent only after it crosses the threshold of one unit on the x-axis. For a better perspective, a fully trained Humanoid is able to move to around five units until the end of the episode. All the other reward components (living reward, control cost, and contact cost) remain unchanged. The results are presented in Table 1.
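As an illustration of the delayed-reward variant, a hypothetical gym-style wrapper that accumulates rewards and releases them every 10 steps could look like the sketch below; this is our own illustration, not the authors' implementation:

```python
class DelayedRewardWrapper:
    """Accumulates rewards and returns them to the agent only every `delay` steps."""

    def __init__(self, env, delay=10):
        self.env = env
        self.delay = delay
        self._acc = 0.0
        self._t = 0

    def reset(self):
        self._acc, self._t = 0.0, 0
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._acc += reward
        self._t += 1
        out = 0.0
        if self._t % self.delay == 0 or done:   # release the accumulated reward
            out, self._acc = self._acc, 0.0
        return obs, out, done, info
```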
ED2 outperforms the non-ensemble method SOP, supporting the argument of coherent and temporally-extended exploration in ED2. Moreover, we observe that performance in MuJoCo environments benefits from the ED2 approximate Bayesian posterior sampling exploration (Osband et al., 2013) in contrast to the approximated UCB in SUNRISE, which follows the OFU principle. Posterior sampling has been proved to be theoretically superior to the OFU strategy (Osband and Van Roy, 2017).
The experiment where ED2’s exploration mechanism is replaced with UCB is in Appendix B.2. This variant also achieves worse results than ED2. The additional exploration efficiency experiment in the custom Humanoid environment, where an agent has to find and reach a goal position, is in Appendix A.
4.5 THE WEIGHTED BELLMAN BACKUP CAN NOT REPLACE THE CLIPPED DOUBLE Q-LEARNING
We applied the weighted Bellman backups proposed by Lee et al. (2020) to our method. It is suggested that the method mitigates error propagation in Q-learning by re-weighting the Bellman backup based on uncertainty estimates from an ensemble of target Q-functions (i.e. variance of predictions). Interestingly, Figure 10 does not show this positive effect on ED2.
Our method uses clipped double Q-Learning to mitigate overestimation in Q-functions (Fujimoto et al., 2018). We wanted to check if it is required and if it can be exchanged for the weighted Bellman backups used by Lee et al. (2020). Figure 11 shows that clipped double Q-Learning is required and that the weighted Bellman backups can not replace it.
5 RELATED WORK
Off-policy RL Recently, multiple deep RL algorithms for continuous control have been proposed, e.g. DDPG (Lillicrap et al., 2016), TD3 (Fujimoto et al., 2018), SAC (Haarnoja et al., 2018b), SOP (Wang et al., 2020), SUNRISE (Lee et al., 2020). They provide a variety of methods for improving training quality, including double-Q bias reduction van Hasselt et al. (2016), target policy smoothing or different update frequencies for actor and critic Fujimoto et al. (2018), entropy regularization Haarnoja et al. (2018b), action normalization Wang et al. (2020), prioritized experience replay Wang et al. (2020), weighted Bellman backups Kumar et al. (2020); Lee et al. (2020), and use of ensembles Osband et al. (2019); Lee et al. (2020); Kurutach et al. (2018); Chua et al. (2018).
Ensembles Deep ensembles are a practical approximation of a Bayesian posterior, offering improved accuracy and uncertainty estimation Lakshminarayanan et al. (2017); Fort et al. (2019). They
inspired a variety of methods in deep RL. They are often used for temporally-extended exploration; see the next paragraph. Other than that, ensembles of different TD-learning algorithms were used to calculate better Q-learning targets (Chen et al., 2018). Others proposed to combine the actions and value functions of different RL algorithms Wiering and van Hasselt (2008) or the same algorithm with different hyper-parameters Huang et al. (2017). For mixing the ensemble components, complex self-adaptive confidence mechanisms were proposed in Zheng et al. (2018b). Our method is simpler: it uses the same algorithm with the same hyper-parameters without any complex or learnt mixing mechanism. Lee et al. (2020) proposed a unified framework for ensemble learning in deep RL (SUNRISE) which uses bootstrap with random initialization Osband et al. (2016) similarly to our work. We achieve better results than SUNRISE and show in Appendix B that their UCB exploration and weighted Bellman backups do not aid our algorithm performance.
Exploration Various frameworks have been developed to balance exploration and exploitation in RL. The optimism in the face of uncertainty principle Lai and Robbins (1985); Bellemare et al. (2016) assigns an overly optimistic value to each state-action pair, usually in the form of an exploration bonus reward, to promote visiting unseen areas of the environment. The maximum entropy method Haarnoja et al. (2018a) encourages the policy to be stochastic, hence boosting exploration. In the parameter space approach Plappert et al. (2018); Fortunato et al. (2018), noise is added to the network weights, which can lead to temporally-extended exploration and a richer set of behaviours. Posterior sampling Strens (2000); Osband et al. (2016; 2018) methods have similar motivations. They stem from the Bayesian perspective and rely on selecting the maximizing action among sampled and statistically plausible set of action values. The ensemble approach Lowrey et al. (2018); Miłoś et al. (2019); Lee et al. (2020) trains multiple versions of the agent, which yields a diverse set of behaviours and can be viewed as an instance of posterior sampling RL.
6 CONCLUSIONS
We conduct a comprehensive empirical analysis of multiple tools from the RL toolbox applied to the continuous control in the OpenAI Gym MuJoCo setting. We believe that the findings can be useful to RL researchers. Additionally, we propose Ensemble Deep Deterministic Policy Gradients (ED2), an ensemble-based off-policy RL algorithm, which achieves state-of-the-art performance and addresses several issues found during the aforementioned study.
7 REPRODUCIBILITY STATEMENT
We have made a significant effort to make our results reproducible. We use 30 random seeds, which is above the currently popular choice in the field (up to 5 seeds). Furthermore, we systematically explain our design choices in Section 3 and we provide a detailed pseudo-code of our method in Algorithm 3 in Appendix E. Additionally, we open-sourced the code for the project3 together with examples of how to reproduce the main experiments. The implementation details are explained in Appendix E and extensive information about the experimental setup is given in Appendix D.
3https://github.com/ed2-paper/ED2
A EXPLORATION EFFICIENCY IN THE CUSTOM HUMANOID ENVIRONMENT
To check the exploration capabilities of our method, we constructed two environments based on Humanoid where the goal is not only to move forward as fast as possible but to find and get to the specific region. The environments are described in Figure 12.
Because the Humanoid initial state is slightly perturbed every run, we compare solved rates over multiple runs, see details in Appendix D. Figure 13 compares the solved rates of our method and the three baselines. Our method outperforms the baselines. For this experiment, our method uses the prior networks (Osband et al., 2018).
B DESIGN CHOICES
In this section, we summarize the empirical evaluation of various design choices grouped by topics related to an ensemble of agents (B.1), exploration (B.2), exploitation (B.3), normalization (B.4), and Q-function updates (B.5). In the plots, a solid line and a shaded region represent an average and a 95% bootstrap confidence interval over 30 seeds in case of ED2 (ours) and 7 seeds otherwise. All of these experiments test ED2 presented in Section 3 with Algorithm 2 used for evaluation (the ensemble critic variant). We call Algorithm 2 a ’vote policy’.
Algorithm 2 Vote policy
1: Input: ensemble size K; policy θ_k and Q-function φ_{k,1} parameters where k ∈ [1, . . . , K]; max action scale M;
2: function VOTE_POLICY(s, c)
       a_k = M tanh(µ_{θ_k}(s)) for k ∈ [1, . . . , K]    (1)
3:     if use arbitrary critic then
           q_k = Q_{φ_{c,1}}(s, a_k) for k ∈ [1, . . . , K]    (2)
4:     else use ensemble critic
           q_k = (1/K) Σ_{i ∈ [1...K]} Q_{φ_{i,1}}(s, a_k) for k ∈ [1, . . . , K]    (3)
5:     return a_k for k = argmax_k q_k
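The sketch below restates the ensemble-critic branch of Algorithm 2 in numpy; the `actors` and `critics` callables are placeholders for the ensemble networks, not the released code:

```python
import numpy as np

def vote_policy(state, actors, critics, max_action):
    """Each actor proposes an action; the ensemble of first critics scores them."""
    actions = [max_action * np.tanh(actor(state)) for actor in actors]
    # q_k = (1/K) * sum_i Q_{phi_{i,1}}(s, a_k): average score from all first critics
    scores = [np.mean([critic(state, a) for critic in critics]) for a in actions]
    return actions[int(np.argmax(scores))]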
B.1 ENSEMBLE
Prior networks We tested if our algorithm can benefit from prior networks (Osband et al., 2018). It turned out that the results are very similar on OpenAI Gym MuJoCo tasks, see Figure 14. However, the prior networks are useful on our crafted hard-exploration Humanoid environments, see Figure 15.
Ensemble size Figure 16 shows ED2 with different ensemble sizes. As can be seen, the ensemble of size 5 (which we use in ED2) achieves good results, striking a balance between performance and computational overhead.
Data bootstrap Osband et al. (2016) and Lee et al. (2020) remark that training an ensemble of agents using the same training data but with different initialization achieves, in most cases, better performance than applying different training samples to each agent. We confirm this observation in Figure 17. Data bootstrap assigned each transition to each agent in the ensemble with 50% probability.
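For concreteness, such a bootstrap mask can be drawn per transition as in the sketch below (our own illustration of the 50% assignment rule):

```python
import numpy as np

def bootstrap_mask(ensemble_size, p=0.5, rng=np.random):
    """Boolean mask saying which ensemble members train on a given transition."""
    return rng.random(ensemble_size) < p
```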
SOP bigger networks and training intensity We checked if simply training SOP with bigger networks or with higher training intensity (a number of updates made for each collected transition) can get it close to the ED2 results. Figure 18 compares ED2 to SOP with different network sizes, while Figure 19 compares ED2 to SOP with one or five updates per environment step. It turns out that bigger networks or higher training intensity does not improve SOP performance.
B.2 EXPLORATION
Vote policy In this experiment, we used the so-called "vote policy" described in Algorithm 2. We use it for action selection in step 5 of Algorithm 3 in two variations: (1) where a random critic, chosen for the duration of one episode, evaluates each actor’s action, or (2) with the full ensemble of critics evaluating the actors’ actions. Figure 20 shows that the arbitrary critic is not much different from our method. However, in the case of the ensemble critic, we observe a significant performance drop suggesting deficient exploration.
UCB We tested the UCB exploration method from Lee et al. (2020). This method defines an upper-confidence bound (UCB) based on the mean and variance of Q-functions in an ensemble and selects actions with the highest UCB for efficient exploration. Figure 21 shows that the UCB exploration method makes the results of our algorithm worse.
Gaussian noise While our method uses ensemble-based temporally coherent exploration, the most popular choice of exploration is injecting i.i.d. noise (Fujimoto et al., 2018; Wang et al., 2020). We evaluate if these two approaches can be used together. We used Gaussian noise with the standard deviation of 0.29, which is the default value in Wang et al. (2020). We found that the effects are task-specific, barely visible for Hopper and Walker, positive in the case of Humanoid, and negative for Ant – see Figure 22. In a more refined experiment, we varied the noise level. With more noise the Humanoid results are better, whereas the Ant results are worse – see Figure 23.
B.3 EXPLOITATION
We used the vote policy, see Algorithm 2, as the evaluation policy in step 21 of Algorithm 3. Figure 24 shows that the vote policy does worse on the OpenAI Gym MuJoCo tasks. However, on our custom Humanoid tasks introduced in Section 4, it improves our agent performance – see Figure 25.
B.4 NORMALIZATION
We validated if rewards or observations normalization (Andrychowicz et al., 2020a) help our method. In both cases, we keep the empirical mean and standard deviation of each reward/observation coordinate, based on all rewards/observations seen so far, and normalize rewards/observations by subtracting the empirical mean and dividing by the standard deviation. It turned out that only the observations normalization significantly helps the agent on Humanoid, see Figures 26 and 27. The action normalization influence is tested in Appendix C.
B.5 Q-FUNCTION UPDATES
Huber loss We tried using the Huber loss for the Q-function training. It makes the results on all tasks worse, see Figure 28.
C ABLATION STUDY
In this section, we ablate the ED2 components to see their impact on performance and stability. We start with the ensemble exploration and exploitation and then move on to the action normalization and the ERE replay buffer. In all plots, a solid line and a shaded region represent an average and a 95% bootstrap confidence interval over 30 seeds in all but action normalization and ERE replay buffer experiments, where we run 7 seeds.
Exploration & Exploitation In the first experiment we wanted to isolate the effect of ensemblebased temporally coherent exploration on the performance and stability of ED2. Figures 29-32 compare the performance and stability of ED2 and one baseline, SOP, to ED2 with the single actor (the first one) used for evaluation in step 21 of Algorithm 3. It is worth noting that the action selection during the data collection, step 5 in Algorithm 3, is left unchanged – the ensemble of actors is used for exploration and each actor is trained on all the data. This should isolate the effect of exploration on
the test performance of every actor. The results show that the performance improvement and stability of ED2 does not come solely from the efficient exploration. ED2 ablation performs comparably to the baseline and is even less stable.
In the next experiment, we wanted to check if the ensemble evaluation is all we need in that event. Figure 33 compares the performance of ED2 and one baseline, SOP, to ED2 with the single actor (the first one) used for the data collection in step 5 of Algorithm 3. The action selection during the evaluation, step 21 in Algorithm 3, is left unchanged – the ensemble of actors is trained on the data collected only by one of the actors. We add Gaussian noise to the single actor’s actions for exploration as described in Appendix B.2. The results show that the ensemble actor test performance collapses, possibly because of training on the out of distribution data. This implies that the ensemble of actors, used for evaluation, improves the test performance and stability. However, it is required that the same ensemble of actors is also used for exploration, during the data collection.
Action normalization The implementation details of the action normalization are described in Appendix E. Figure 34 shows that the action normalization is especially required on the Ant and Humanoid environments, while not disrupting the training on the other tasks.
ERE replay buffer The implementation details of the ERE replay buffer are described in Appendix E. In Figure 35 we observe that it improves the final performance of ED2 on all tasks, especially on Walker2d and Humanoid.
D EXPERIMENTAL SETUP
Plots In all evaluations, we used 30 evaluation episodes to better assess the average performance of each policy, as described in Section 2. For a more pleasant look and easier visual assessment, we smoothed the lines using an exponential moving average with a smoothing factor equal to 0.4.
OpenAI Gym MuJoCo In MuJoCo environments, that we used, a state is defined by (x, y, z) position and velocity of the robot’s root, and angular position and velocity of each of its joints. The observation holds almost all information from the state except the x and y position of the robot’s root. The action is a torque that should be applied to each joint of the robot. Sizes of those spaces for each environment are summarised in Table 2.
MuJoCo is a deterministic physics engine thus all simulations conducted inside it are deterministic. This includes simulations of our environments. However, to simplify the process of data gathering and to counteract over-fitting the authors of OpenAI Gym decided to introduce some stochasticity. Each episode starts from a slightly different state - initial positions and velocities are perturbed with random noise (uniform or normal depending on the particular environment).
E IMPLEMENTATION DETAILS
Architecture and hyper-parameters In our experiments, we use deep neural networks with two hidden layers, each of them with 256 units. All of the networks use ReLU as an activation, except on the final output layer, where the activation used varies depending on the model: critic networks use no activation, while actor networks use tanh() multiplied by the max action scale. Table 3 shows the hyper-parameters used for the tested algorithms.
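A minimal PyTorch sketch of the networks described above (two 256-unit hidden layers, ReLU activations, no output activation for the critic, and a tanh output scaled by the max action for the actor); the class names and constructor signatures are ours, not those of the released code:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim, max_action):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )
        self.max_action = max_action

    def forward(self, obs):
        return self.max_action * torch.tanh(self.net(obs))

class Critic(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),   # no output activation
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))
```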
Action normalization Our algorithm employs the action normalization proposed by Wang et al. (2020). It means that before applying the squashing function (e.g. tanh()), the outputs of each actor network are normalized in the following way: let µ = (µ_1, . . . , µ_A) be the output of the actor’s network and let G = (1/A) Σ_{i=1}^{A} |µ_i| be the average magnitude of this output, where A is the action’s dimensionality. If G > 1 then we normalize the output by setting µ_i to µ_i/G for all i = 1, . . . , A. Otherwise, we leave the output unchanged. Each actor’s outputs are normalized independently from the other actors in the ensemble.
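A numpy sketch of this normalization, applied to a single actor's pre-squashing output µ (our own restatement of the rule above):

```python
import numpy as np

def normalize_action_output(mu):
    """Rescale the pre-tanh actor output when its average magnitude exceeds 1."""
    g = np.mean(np.abs(mu))     # G = (1/A) * sum_i |mu_i|
    return mu / g if g > 1.0 else mu
```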
Algorithm 3 ED2 - Ensemble Deep Deterministic Policy Gradients
Input: ensemble size K; init. policy θ_k and Q-functions φ_{k,1}, φ_{k,2} param. where k ∈ [1, . . . , K]; replay buffer D; max action scale M; target smoothing std. dev. σ; interpolation factor ρ;
1: Set the target parameters φ̄_{k,1} ← φ_{k,1}, φ̄_{k,2} ← φ_{k,2}
2: Sample the current policy index c ∼ U([1, . . . , K]).
3: Reset the environment and observe the state s.
4: repeat
5:     Execute action a = M tanh(µ_{θ_c}(s))    ▷ µ uses the action normalization
6:     Observe and store (s, a, r, s′, d) in the replay buffer D.
7:     Set s ← s′
8:     if episode is finished then
9:         Reset the environment and observe initial state s.
10:        Sample the current policy index c ∼ U([1, . . . , K]).
11:    if time to update then
12:        for as many steps as done in the environment do
13:            Sample a batch of transitions B = {(s, a, r, s′, d)} ⊂ D    ▷ uses ERE
14:            Compute targets
                   y_k(r, s′, d) = r + γ(1 − d) min_{i=1,2} Q_{φ̄_{k,i}}(s′, a′_k),
                   a′_k = M tanh(µ_{θ_k}(s′) + ε), ε ∼ N(0, σ)
15:            Update the Q-functions by one step of gradient descent using
                   ∇_{φ_{k,i}} (1/(|B| · K)) Σ_{(s,a,r,s′,d)∈B} ( Q_{φ_{k,i}}(s, a) − y_k(r, s′, d) )²  for i ∈ {1, 2}, k ∈ [1, . . . , K]
16:            Update the policies by one step of gradient ascent using
                   ∇_{θ_k} (1/(|B| · K)) Σ_{s∈B} Q_{φ_{k,1}}(s, µ_{θ_k}(s))  for k ∈ [1, . . . , K]
17:            Update target parameters with
                   φ̄_{k,i} ← ρ φ̄_{k,i} + (1 − ρ) φ_{k,i}  for i ∈ {1, 2}, k ∈ [1, . . . , K]
18:    if time to evaluate then
19:        for specified number of evaluation runs do
20:            Reset the environment and observe the state s.
21:            Execute policy a = (1/K) Σ_{i=1}^{K} M tanh(µ_{θ_i}(s)) until the terminal state.
22:            Record and log the return.
23: until convergence
Emphasizing Recent Experience We implement the Emphasizing Recent Experience (ERE) mechanism from Wang et al. (2020). ERE samples non-uniformly from the most recent experiences stored in the replay buffer. Let B be the number of mini-batch updates and |D| be the size of the replay buffer. When performing the gradient updates, we sample from the most recent c_b data points stored in the replay buffer, where c_b = |D| · η^{b · 1000/B} for b = 1, . . . , B.
The hyper-parameter η starts off with a set value of η_0 and is later adapted based on the improvements in the agent training performance. Let I_recent be the improvement in terms of training episode returns made over the last |D|/2 time-steps and I_max be the maximum of such improvements over the course of the training. We adapt η according to the formula:
η = η_0 · (I_recent / I_max) + (1 − I_recent / I_max)
Our implementation uses the exponentially weighted moving average to store the value of I_recent. More concretely, we define I_recent based on two additional parameters R_recent and R_prev so that I_recent = R_recent − R_prev. Those parameters are then updated whenever we receive a new training episode return ep_ret:
R_recent = λ_recent · ep_ret + (1 − λ_recent) · R_recent
R_prev = λ_prev · ep_ret + (1 − λ_prev) · R_prev
where λ_prev = T / ⌊|D|/2⌋, λ_recent = 10 · λ_prev, and T is the maximum length of an episode.
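A numpy sketch of the ERE bookkeeping described above; the function names mirror the formulas and are our own illustration, not the exact released code:

```python
import numpy as np

def ere_range(buffer_size, b, num_updates, eta):
    """Number of most recent transitions to sample from for the b-th update."""
    c_b = int(buffer_size * eta ** (b * 1000.0 / num_updates))
    return max(c_b, 1)

def adapt_eta(eta0, recent_improvement, max_improvement):
    """Interpolate eta towards 1 when recent training improvement is small."""
    ratio = recent_improvement / max_improvement if max_improvement > 0 else 0.0
    return eta0 * ratio + 1.0 - ratio
```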
Hardware During the training of our models, we employ only CPUs using a cluster where each node has 28 available cores of 2.6 GHz, alongside at least 64 GB of memory. The running time of a typical experiment did not exceed 24 hours. | 1. What is the main contribution of the paper in the field of reinforcement learning?
2. What are the strengths and weaknesses of the proposed Ensemble Deep Deterministic Policy Gradients (ED2) algorithm?
3. How does the reviewer assess the empirical results presented in the paper, particularly in terms of stability and performance?
4. Are there any concerns regarding the comparison of ED2 with other algorithms in the paper?
5. What are some suggestions for improving the paper, such as providing more theoretical understanding or expanding the scope of environments? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a deep reinforcement learning algorithm, Ensemble Deep Deterministic Policy Gradients (ED2), for continuous control tasks. The algorithm is empirically derived and is claimed to represent SotA performance on several tasks while providing more stable results. These claims are justified based primarily on the (reward and stability) results on 4 MuJoCo environments.
Review
Pros:
The code is open-sourced, which is excellent for reproducibility
There is significant focus on the stability of RL algorithms, which is great to see. Often RL papers just present the reward curves and information on the stability of these algorithms is important.
Using 30 random seeds (instead of the usual 3-6) is great and strengthens any empirical claims
Cons:
The empirical results don’t seem to justify the algorithm. Figure 4 represents the main empirical argument for ED2 but the empirical gains shown are minor. On Walker it appears as though SUNRISE actually outperforms ED2. Humanoid and Hopper both have ED2 on top, but only by a very slim margin (one that has overlapping confidence intervals with other algorithms).
Small number of environments. The algorithms are only compared on 4 environments (Ant, Hopper, Humanoid, Walker) when MuJoCo/OpenAI Gym offers many more (e.g. HalfCheetah, InvertedDoublePendulum, Swimmer, Reacher, etc.). The empirical results could be more compelling if the scope was increased. If you expanded the results presented in Table 1 for all algorithms, this could be a more compelling argument.
Much of the paper is dedicated to information that would be better in the appendix. Information about non-essential ablation studies (e.g. Figure 2 shows the impacts are pretty minor) are not necessary in the main body. These figures could be replaced with more directly relevant information, like to aid claims of stability, e.g. by injecting varying degrees of noise/randomness into the environment and evaluating relative performances, or evaluating sensitivity to other hyperparameters (network structure, learning rates, etc.)
Lack of theoretical understanding. I am well aware of the problems of equation packing and the disconnect that often occurs in RL papers between the algorithm and the theoretical justification, but at least providing some insight into comparisons of UCB, posterior sampling, priors, etc. would be nice.
The empirical comparisons don’t seem to offer an even playing field for all algorithms. Based on (Haarnoja et al., 2019), SAC should reach ~8000 on Humanoid-v2 (after 10e6 steps). However, the experiments are only conducted till 3e6 steps with multiple algorithms appearing to still have upward trajectories. There seems to be no mention of a focus on the speed of learning/early learning performance vs final performance.
The relationship between Figure 4 and Figure 6 is not clear. Figure 6 seems to indicate that (e.g.) pretty much all algorithms have >1,000 STD on Humanoid and Ant early on in training. However, the bootstrapped CI in Figure 4 is substantially smaller. This discrepancy is uncommon and should be explained.
Reinforcement learning is notoriously brittle and I would encourage references to the previous work done evaluating the effect of random seeds/initialization on agent performance e.g. the famous (Henderson et al., 2019).
Figure 5 does not provide any support or meaningful insight. The velocity of humanoid is not what the algorithm is optimizing. It is included for “completeness”, but this seems to be a cherry-picked statistic that doesn’t convey anything meaningful (while the speed is incorporated in the reward function, since the rewards are much closer than the velocities we don’t see what the tradeoff is).
Figure 1 does not really help clarify anything. If one already knows the environments, then the figure is unnecessary, if one is unfamiliar with them, the figure doesn’t show what actually transpires in the environment and doesn’t clear anything up.
To claim ED2 is really SotA, further analysis is necessary across environments like Agarwal et al. 2021, Barreto et al. 2010, Jordan et al. 2020, etc.
There are minor typographical inconsistencies, e.g. differing usages of \cite{} and \citep{}
Inconsistent background information. The paper provides a definition of standard deviation, an extremely common statistical measurement, but not for much more niche terms such as approximated UCB. To be clear, I have no problem with giving the formula for STD, but not giving definitions for much less widely known terms is something that could be fixed.
Misc/Note: This paper seems like it’s trying to do two things at once: (1) provide a review of RL techniques (e.g. exploitation techniques), evaluate their impact and report on the key takeaways of this empirical analysis, and (2) introduce a novel RL algorithm and justify it’s empirical construction and performance. Both of these are papers that are perfectly fine, but by trying to do both it leaves something lacking from each of them. If this was a review of techniques, I would like to see more continuous control algorithms evaluated and more techniques experimented with. If this is just introducing a novel algorithm, I would like to see more theoretical explanations of the techniques (e.g. theoretical derivations and insights into the effect of K) and more extensive ablation studies (e.g. evaluating on more MuJoCo tasks or other continuous control environments that have different properties such as RLBench, Industrial control benchmark, assistive gym, DM Control suite, etc.). |
ICLR | Title
Continuous Control With Ensemble Deep Deterministic Policy Gradients
Abstract
The growth of deep reinforcement learning (RL) has brought multiple exciting tools and methods to the field. This rapid expansion makes it important to understand the interplay between individual elements of the RL toolbox. We approach this task from an empirical perspective by conducting a study in the continuous control setting. We present multiple insights of fundamental nature, including: a commonly used additive action noise is not required for effective exploration and can even hinder training; the performance of policies trained using existing methods varies significantly across training runs, epochs of training, and evaluation runs; the critics’ initialization plays the major role in ensemble-based actor-critic exploration, while the training is mostly invariant to the actors’ initialization; a strategy based on posterior sampling explores better than the approximated UCB combined with the weighted Bellman backup; the weighted Bellman backup alone cannot replace the clipped double Q-Learning. As a conclusion, we show how existing tools can be brought together in a novel way, giving rise to the Ensemble Deep Deterministic Policy Gradients (ED2) method, to yield state-of-the-art results on continuous control tasks from OpenAI Gym MuJoCo. From the practical side, ED2 is conceptually straightforward, easy to code, and does not require knowledge outside of the existing RL toolbox.
1 INTRODUCTION
Recently, deep reinforcement learning (RL) has achieved multiple breakthroughs in a range of challenging domains (e.g. Silver et al. (2016); Berner et al. (2019); Andrychowicz et al. (2020b); Vinyals et al. (2019)). A part of this success is related to an ever-growing toolbox of tricks and methods that were observed to boost the RL algorithms’ performance (e.g. Hessel et al. (2018); Haarnoja et al. (2018b); Fujimoto et al. (2018); Wang et al. (2020); Osband et al. (2019)). This state of affairs benefits the field but also brings challenges related to often unclear interactions between the individual improvements and the credit assignment related to the overall performance of the algorithm Andrychowicz et al. (2020a); Ilyas et al. (2020).
In this paper, we present a comprehensive empirical study of multiple tools from the RL toolbox applied to the continuous control in the OpenAI Gym MuJoCo setting. These are presented in Section 4 and Appendix B. Our insights include:
• The normally distributed action noise, commonly used for exploration, hinders training. • The current state-of-the-art methods are unstable under several stability criteria. • The critics’ initialization plays a major role in ensemble-based actor-critic exploration, while
the training is mostly invariant to the actors’ initialization. • The approximated posterior sampling exploration (Osband et al., 2013) outperforms approx-
imated UCB exploration combined with weighted Bellman backup (Lee et al., 2020). • The weighted Bellman backup (Lee et al., 2020) can not replace the clipped double Q-
Learning (Fujimoto et al., 2018). To address some of the issues listed above, we introduce the Ensemble Deep Deterministic Policy Gradient (ED2) algorithm1, see Section 3. ED2 brings together existing RL tools in a novel way: it is
1Our code is based on SpinningUp (Achiam, 2018). We open-source it at: https://github.com/ ed2-paper/ED2.
an off-policy algorithm for continuous control, which constructs an ensemble of streamlined versions of TD3 agents and achieves the state-of-the-art performance in OpenAI Gym MuJoCo, substantially improving the results on the two hardest tasks – Ant and Humanoid. Consequently, ED2 does not require knowledge outside of the existing RL toolbox, is conceptually straightforward, and easy to code.
2 BACKGROUND
We model the environment as a Markov Decision Process (MDP). It is defined by the tuple (S,A, R, P, γ, p0), where S is a continuous multi-dimensional state space, A denotes a continuous multi-dimensional action space, P is a transition kernel, γ ∈ [0, 1) stands for a discount factor, p0 refers to an initial state distribution, and R is a reward function. The agent learns a policy from sequences of transitions τ = [(st, at, rt, st+1, d)]Tt=0, called episodes or trajectories, where at ∼ π(·|st), st+1 ∼ P (·|st, at), rt = R(st, at, st+1), d is a terminal signal, and T is the terminal time-step. A stochastic policy π(a|s) maps each state to a distribution over actions. A deterministic policy µ : S −→ A assigns each state an action. All algorithms that we consider in this paper use a different policy for collecting data (exploration) and a different policy for evaluation (exploitation). In order to keep track of the progress, the evaluation runs are performed every ten thousand environment interactions. Because of the environments’ stochasticity, we run the evaluation policy multiple times. Let {Ri}Ni=1 be a set of (undiscounted) returns from N evaluation episodes {τi}Ni=1, i.e. Ri = ∑ rt∈τi rt. We evaluate the
policy using the average test return R̄ = 1N ∑N i=1Ri and the standard deviation of the test returns
σ = √
1 N−1 ∑N i=1(Ri − R̄)2.
We run experiments on four continuous control tasks and their variants, introduced in the appropriate sections, from the OpenAI Gym MuJoCo suite (Brockman et al., 2016) presented in Figure 1. The agent observes vectors that describe the kinematic properties of the robot and its actions specify torques to be applied on the robot joints. See Appendix D for the details on the experimental setup.
3 ENSEMBLE DEEP DETERMINISTIC POLICY GRADIENTS
For completeness of exposition, we present ED2 before the experimental section. The ED2 architecture is based on an ensemble of Streamlined Off-Policy (SOP) agents (Wang et al., 2020), meaning that our agent is an ensemble of TD3-like agents (Fujimoto et al., 2018) with the action normalization and the ERE replay buffer. The pseudo-code listing can be found in Algorithm 1, while the implementation details, including a more verbose version of pseudo-code (Algorithm 3), can be found in Appendix E. In the data collection phase (Lines 1-9), ED2 selects one actor from the ensemble uniformly at random (Lines 1 and 9) and run its deterministic policy for the course of one episode (Line 4). In the evaluation phase (not shown in Algorithm 1), the evaluation policy averages all the actors’ output actions. We train the ensemble every 50 environment steps with 50 stochastic gradient descent updates (Lines 10-13). ED2 concurrently learns K · 2 Q-functions, Qφk,1 and Qφk,2 where k ∈ K, by mean square Bellman error minimization, in almost the same way that SOP learns its two Q-functions. The only difference is that we have K critic pairs that are initialized with different random weights and then trained independently with the same batches of data. Because of the different initial weights, each Q-function has a different bias in its Q-values. The K actors, πθk , train maximizing their corresponding first critic, Qφk,1 , just like SOP.
Algorithm 1 ED2 - Ensemble Deep Deterministic Policy Gradients Input: init. params for policy θk and Q-functions φk,1, φk,2, k ∈ [1...K]; replay buffer D;
1: Sample the current policy index c ∼ U([1...K]). 2: Reset the environment and observe the state s. 3: repeat 4: Execute action a = µθc(s) . µ uses the action normalization 5: Observe and store (s, a, r, s′, d) in the replay buffer D. 6: Set s← s′ 7: if episode is finished then 8: Reset the environment and observe initial state s. 9: Sample the current policy index c ∼ U([1...K]).
10: if time to update then 11: for as many as steps done in the environment do 12: Sample a batch of transitions B = {(s, a, r, s′, d)} ⊂ D . uses ERE 13: Update the parameters θk, φk,1 and φk,2 by one gradient step. 14: until convergence
Utilizing the ensembles requires several design choices, which we summarize below. The ablation study of ED2 elements is provided in Appendix C.
Ensemble
Used: We train the ensemble of 5 actors and 5 critics; each actor learns from its own critic and the whole ensemble is trained on the same data.
Not used: We considered different actor-critic configurations, initialization schemes and relations, as well as the use of random prior networks (Osband et al., 2018), data bootstrap (Osband et al., 2016), and different ensemble sizes. We also change the SOP network sizes and training intensity instead of using the ensemble. Besides the prior networks in some special cases, these turn out to be inferior as shown in Section 4 and Appendix B.1.
Exploration
Used: We pick one actor uniformly at random to collect the data for the course of one episode. The actor is deterministic (no additive action noise is applied). These two choices ensure coherent and temporally-extended exploration similarly to Osband et al. (2016).
Not used: We tested several approaches to exploration: using the ensemble of actors, UCB (Lee et al., 2020), and adding the action noise in different proportions. These experiments are presented in Appendix B.2.
Exploitation
Used: The evaluation policy averages all the actors’ output actions to provide stable performance.
Not used: We tried picking an action with the biggest value estimate (average of the critics’ Qfunctions) in evaluation (Huang et al., 2017).
Interestingly, both policies had similar results, see Appendix B.3.
Action normalization
Used: We use the action normalization introduced by Wang et al. (2020).
Not used: We experimented with the observations and rewards normalization, which turned out to be unnecessary. The experiments are presented in Appendix B.4.
Q-function updates
Used: We do 50 SGD updates (ADAM optimizer (Kingma and Ba, 2015), MSE loss) to the actors and the critics every 50 environment interactions, use Clipped Double Q-Learning (Fujimoto et al., 2018).
Not used: We also examined doing the updates at the end of each episode (with the proportional number of updates), using the Hubert loss, and doing weighted Bellman backups (Lee et al., 2020). However, we found them to bring no improvement to our method, as presented in Appendix B.5.
4 EXPERIMENTS
In this section, we present our comprehensive study and the resulting insights. The rest of the experiments verifying that our design choices perform better than alternatives are in Appendix B. Unless stated otherwise, a solid line in the figures represents an average, while a shaded region shows a 95% bootstrap confidence interval. We used 30 seeds for ED2 and the baselines and 7 seeds for the ED2 variants.
4.1 THE NORMALLY DISTRIBUTED ACTION NOISE, COMMONLY USED FOR EXPLORATION, HINDERS TRAINING
In this experiment, we deprive SOP of its exploration mechanism, namely additive normal action noise, and call this variant deterministic SOP (det. SOP). It causes relatively minor deterioration in the Humanoid performance, has no significant influence on the Hopper or Walker performance, and substantially improves the Ant performance, see Figure 2. This result shows that no additional exploration mechanism, often in a form of an exploration noise (Lillicrap et al., 2016; Fujimoto et al., 2018; Wang et al., 2020), is required for the diverse data collection and it can even hinder training.
ED2 leverages this insight and constructs an ensemble of deterministic SOP agents presented in Section 3. Figure 3 shows that ED2 magnifies the beneficial effect coming from the deterministic exploration.
ED2 achieves state-of-the-art performance on the OpenAI Gym MuJoCo suite. Figure 4 shows the results of ED2 contrasted with three strong baselines: SUNRISE (Lee et al., 2020), SOP (Wang et al., 2020), and SAC(Haarnoja et al., 2018b).
For completeness, we plot the Humanoid velocities in Figure 5 which shows that our method accelerates to a much higher velocity than the baselines.
4.2 THE CURRENT STATE-OF-THE-ART METHODS ARE UNSTABLE UNDER SEVERAL STABILITY CRITERIA
We consider three notions of stability: inference stability, asymptotic performance stability, and training stability. ED2 outperforms baselines in each of these notions, as discussed below. Similar metrics were also studied in Chan et al. (2020).
Inference stability We say that an agent is inference stable if, when run multiple times, it achieves similar test performance every time. We measure inference stability using the standard deviation of test returns explained in Section 2. We found that that the existing methods train policies that are surprisingly sensitive to the randomness in the environment initial conditions2. Figure 4 and Figure 6 show that ED2 successfully mitigates this problem. By the end of the training, ED2 produces results within 1% of the average performance on Humanoid, while the performance of SUNRISE, SOP, and SAC may vary as much as 11%.
2The MuJoCo suite is overall deterministic, nevertheless, little stochasticity is injected at the beginning of each trajectory, see Appendix D for details.
Asymptotic performance stability We say that an agent achieves asymptotic performance stability if it achieves similar test performance across multiple training runs starting from different initial networks weights. Figure 7 shows that ED2 has a significantly smaller variance than the other methods while maintaining high performance.
Training stability We will consider training stable if performance does not severely deteriorate from one evaluation to the next. We define the root mean squared deterioration metric (RMSD) as follows:
RMSD = √√√√ 1 M M∑ i=1 ( max(R̄i−20 − R̄i, 0) )2 ,
where M is the number of the evaluation phases during training and R̄i is the average test return at the i-th evaluation phase (described in Section 2). We compare returns 20 evaluation phases apart to ensure that the deterioration in performance doesn’t stem from the evaluation variance. ED2 has the lowest RMSD across all tasks, see Figure 8.
4.3 THE CRITICS’ INITIALIZATION PLAYS A MAJOR ROLE IN ENSEMBLE-BASED ACTOR-CRITIC EXPLORATION, WHILE THE TRAINING IS MOSTLY INVARIANT TO THE
ACTORS’ INITIALIZATION
In this experiment, actors’ weights are initialized with the same random values (contrary to the standard case of different initialization). Moreover, we test a corresponding case with critics’ weights initialized with the same random values or simply training only a single critic.
Figure 9 indicates that the choice of actors initialization does not matter in all tasks but Humanoid. Although the average performance on Humanoid seems to be better, it is also less stable. This is quite interesting because the actors are deterministic. Therefore, the exploration must come from the fact that each actor is trained to optimize his own critic.
On the other, Figure 9 shows that the setup with the single critic severely impedes the agent performance. We suspect that using the single critic impairs the agent exploration capabilities as its actors’ policies, trained to maximize the same critic’s Q-function, become very similar.
4.4 THE APPROXIMATED POSTERIOR SAMPLING EXPLORATION OUTPERFORMS APPROXIMATED UCB EXPLORATION COMBINED WITH WEIGHTED BELLMAN BACKUP
ED2 uses posterior sampling based exploration method (Osband et al., 2016). SUNRISE, on the other hand, approximates the Upper Confidence Bound (UCB) exploration technique and does weighted Bellman backups (Lee et al., 2020). For the fair comparison between ED2 and SUNRISE, we substitute the SUNRISE base algorithm SAC for the SOP algorithm used by ED2. We call this variant SUNRISE-SOP.
We test both methods on the standard MuJoCo benchmarks as well as delayed (Zheng et al., 2018a) and sparse (Plappert et al., 2018) rewards variants. Both variations make the environments harder from the exploration standpoint. In the delayed version, the rewards are accumulated and returned to the agent only every 10 time-steps. In the sparse version, the reward for the forward motion is returned to the agent only after it crosses the threshold of one unit on the x-axis. For a better perspective, a fully trained Humanoid is able to move to around five units until the end of the episode. All the other reward components (living reward, control cost, and contact cost) remain unchanged. The results are presented in Table 1.
ED2 outperforms the non-ensemble method SOP, supporting the argument of coherent and temporallyextended exploration of ED2. Moreover, we observe that performance in MuJoCo environments benefits from the ED2 approximate Bayesian posterior sampling exploration (Osband et al., 2013) in contrast to the approximated UCB in SUNRISE, which follows the OFU principle. The posterior sampling is proved to be theoretically superior to the OFU strategy (Osband and Van Roy, 2017).
The experiment where ED2’s exploration mechanism is replaced with UCB is in Appendix B.2. This variant also achieves worse results than ED2. The additional exploration efficiency experiment in the custom Humanoid environment, where an agent has to find and reach a goal position, is in Appendix A.
4.5 THE WEIGHTED BELLMAN BACKUP CAN NOT REPLACE THE CLIPPED DOUBLE Q-LEARNING
We applied the weighted Bellman backups proposed by Lee et al. (2020) to our method. It is suggested that the method mitigates error propagation in Q-learning by re-weighting the Bellman backup based on uncertainty estimates from an ensemble of target Q-functions (i.e. variance of predictions). Interestingly, Figure 10 does not show this positive effect on ED2.
Our method uses clipped double Q-Learning to mitigate overestimation in Q-functions (Fujimoto et al., 2018). We wanted to check if it is required and if it can be exchanged for the weighted Bellman backups used by Lee et al. (2020). Figure 11 shows that clipped double Q-Learning is required and that the weighted Bellman backups can not replace it.
5 RELATED WORK
Off-policy RL Recently, multiple deep RL algorithms for continuous control have been proposed, e.g. DDPG (Lillicrap et al., 2016), TD3 (Fujimoto et al., 2018), SAC (Haarnoja et al., 2018b), SOP (Wang et al., 2020), SUNRISE (Lee et al., 2020). They provide a variety of methods for improving training quality, including double-Q bias reduction van Hasselt et al. (2016), target policy smoothing or different update frequencies for actor and critic Fujimoto et al. (2018), entropy regularization Haarnoja et al. (2018b), action normalization Wang et al. (2020), prioritized experience replay Wang et al. (2020), weighted Bellman backups Kumar et al. (2020); Lee et al. (2020), and use of ensembles Osband et al. (2019); Lee et al. (2020); Kurutach et al. (2018); Chua et al. (2018).
Ensembles Deep ensembles are a practical approximation of a Bayesian posterior, offering improved accuracy and uncertainty estimation Lakshminarayanan et al. (2017); Fort et al. (2019). They
inspired a variety of methods in deep RL. They are often used for temporally-extended exploration; see the next paragraph. Other than that, ensembles of different TD-learning algorithms were used to calculate better Q-learning targets (Chen et al., 2018). Others proposed to combine the actions and value functions of different RL algorithms Wiering and van Hasselt (2008) or the same algorithm with different hyper-parameters Huang et al. (2017). For mixing the ensemble components, complex self-adaptive confidence mechanisms were proposed in Zheng et al. (2018b). Our method is simpler: it uses the same algorithm with the same hyper-parameters without any complex or learnt mixing mechanism. Lee et al. (2020) proposed a unified framework for ensemble learning in deep RL (SUNRISE) which uses bootstrap with random initialization Osband et al. (2016) similarly to our work. We achieve better results than SUNRISE and show in Appendix B that their UCB exploration and weighted Bellman backups do not aid our algorithm performance.
Exploration Various frameworks have been developed to balance exploration and exploitation in RL. The optimism in the face of uncertainty principle Lai and Robbins (1985); Bellemare et al. (2016) assigns an overly optimistic value to each state-action pair, usually in the form of an exploration bonus reward, to promote visiting unseen areas of the environment. The maximum entropy method Haarnoja et al. (2018a) encourages the policy to be stochastic, hence boosting exploration. In the parameter space approach Plappert et al. (2018); Fortunato et al. (2018), noise is added to the network weights, which can lead to temporally-extended exploration and a richer set of behaviours. Posterior sampling Strens (2000); Osband et al. (2016; 2018) methods have similar motivations. They stem from the Bayesian perspective and rely on selecting the maximizing action among sampled and statistically plausible set of action values. The ensemble approach Lowrey et al. (2018); Miłoś et al. (2019); Lee et al. (2020) trains multiple versions of the agent, which yields a diverse set of behaviours and can be viewed as an instance of posterior sampling RL.
6 CONCLUSIONS
We conduct a comprehensive empirical analysis of multiple tools from the RL toolbox applied to the continuous control in the OpenAI Gym MuJoCo setting. We believe that the findings can be useful to RL researchers. Additionally, we propose Ensemble Deep Deterministic Policy Gradients (ED2), an ensemble-based off-policy RL algorithm, which achieves state-of-the-art performance and addresses several issues found during the aforementioned study.
7 REPRODUCIBILITY STATEMENT
We have made a significant effort to make our results reproducible. We use 30 random seeds, which is above the currently popular choice in the field (up to 5 seeds). Furthermore, we systematically explain our design choices in Section 3 and we provide a detailed pseudo-code of our method in Algorithm 3 in Appendix E. Additionally, we open-sourced the code for the project3 together with examples of how to reproduce the main experiments. The implementation details are explained in Appendix E and extensive information about the experimental setup is given in Appendix D.
3https://github.com/ed2-paper/ED2
A EXPLORATION EFFICIENCY IN THE CUSTOM HUMANOID ENVIRONMENT
To check the exploration capabilities of our method, we constructed two environments based on Humanoid where the goal is not only to move forward as fast as possible but to find and get to the specific region. The environments are described in Figure 12.
Because the Humanoid initial state is slightly perturbed every run, we compare solved rates over multiple runs, see details in Appendix D. Figure 13 compares the solved rates of our method and the three baselines. Our method outperforms the baselines. For this experiment, our method uses the prior networks (Osband et al., 2018).
B DESIGN CHOICES
In this section, we summarize the empirical evaluation of various design choices grouped by topics related to an ensemble of agents (B.1), exploration (B.2), exploitation (B.3), normalization (B.4), and Q-function updates (B.5). In the plots, a solid line and a shaded region represent an average and a 95% bootstrap confidence interval over 30 seeds in case of ED2 (ours) and 7 seeds otherwise. All of these experiments test ED2 presented in Section 3 with Algorithm 2 used for evaluation (the ensemble critic variant). We call Algorithm 2 a ’vote policy’.
Algorithm 2 Vote policy
1: Input: ensemble size K; policy θk and Q-function φk,1 parameters where k ∈ [1, . . . ,K]; max action scale M;
2: function VOTE_POLICY(s, c)
       ak = M tanh (µθk(s)) for k ∈ [1, . . . ,K]                              (1)
3:     if use arbitrary critic then
           qk = Qφc,1(s, ak) for k ∈ [1, . . . ,K]                             (2)
4:     else use ensemble critic
           qk = (1/K) ∑_{i∈[1...K]} Qφi,1(s, ak) for k ∈ [1, . . . ,K]         (3)
5:     return ak for k = argmaxk qk
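A minimal NumPy-style sketch of the vote policy follows; `actors` and `critics` stand in for the ensemble networks (both callables and all names are placeholders, not the actual implementation):

import numpy as np

def vote_policy(s, actors, critics, max_action, arbitrary_critic_idx=None):
    # actors[k](s) returns the k-th actor's pre-squash output mu_theta_k(s);
    # critics[k](s, a) returns Q_phi_k,1(s, a).
    actions = [max_action * np.tanh(actor(s)) for actor in actors]
    if arbitrary_critic_idx is not None:
        # A single critic, fixed for the whole episode, scores every actor's action.
        q_values = [critics[arbitrary_critic_idx](s, a) for a in actions]
    else:
        # The whole critic ensemble scores every actor's action.
        q_values = [np.mean([critic(s, a) for critic in critics]) for a in actions]
    # Return the action with the highest estimated Q-value.
    return actions[int(np.argmax(q_values))]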
B.1 ENSEMBLE
Prior networks We tested if our algorithm can benefit from prior networks (Osband et al., 2018). It turned out that the results are very similar on OpenAI Gym MuJoCo tasks, see Figure 14. However, the prior networks are useful on our crafted hard-exploration Humanoid environments, see Figure 15.
Ensemble size Figure 16 shows ED2 with different ensemble sizes. As can be seen, the ensemble of size 5 (which we use in ED2) achieves good results, striking a balance between performance and computational overhead.
Data bootstrap Osband et al. (2016) and Lee et al. (2020) remark that training an ensemble of agents using the same training data but with different initialization achieves, in most cases, better performance than applying different training samples to each agent. We confirm this observation in Figure 17. Data bootstrap assigned each transition to each agent in the ensemble with 50% probability.
SOP bigger networks and training intensity We checked if simply training SOP with bigger networks or with higher training intensity (a number of updates made for each collected transition) can get it close to the ED2 results. Figure 18 compares ED2 to SOP with different network sizes, while Figure 19 compares ED2 to SOP with one or five updates per environment step. It turns out that bigger networks or higher training intensity does not improve SOP performance.
B.2 EXPLORATION
Vote policy In this experiment, we used the so-called "vote policy" described in Algorithm 2. We use it for action selection in step 5 of Algorithm 3 in two variations: (1) where a random critic, chosen for the duration of one episode, evaluates each actor’s action, or (2) where the full ensemble of critics evaluates the actors’ actions. Figure 20 shows that the arbitrary critic is not much different from our method. However, in the case of the ensemble critic, we observe a significant performance drop suggesting deficient exploration.
UCB We tested the UCB exploration method from Lee et al. (2020). This method defines an upper-confidence bound (UCB) based on the mean and variance of Q-functions in an ensemble and selects actions with the highest UCB for efficient exploration. Figure 21 shows that the UCB exploration method makes the results of our algorithm worse.
Gaussian noise While our method uses ensemble-based temporally coherent exploration, the most popular choice of exploration is injecting i.i.d. noise (Fujimoto et al., 2018; Wang et al., 2020). We evaluate if these two approaches can be used together. We used Gaussian noise with a standard deviation of 0.29, the default value in Wang et al. (2020). We found that the effects are task-specific: barely visible for Hopper and Walker, positive in the case of Humanoid, and negative for Ant – see Figure 22. In a more refined experiment, we varied the noise level. With more noise the Humanoid results are better, whereas the Ant results are worse – see Figure 23.
B.3 EXPLOITATION
We used the vote policy, see Algorithm 2, as the evaluation policy in step 21 of Algorithm 3. Figure 24 shows that the vote policy does worse on the OpenAI Gym MuJoCo tasks. However, on our custom Humanoid tasks introduced in Section 4, it improves our agent performance – see Figure 25.
B.4 NORMALIZATION
We validated if rewards or observations normalization (Andrychowicz et al., 2020a) helps our method. In both cases, we keep the empirical mean and standard deviation of each reward/observation coordinate, based on all rewards/observations seen so far, and normalize rewards/observations by subtracting the empirical mean and dividing by the standard deviation. It turned out that only the observations normalization significantly helps the agent on Humanoid, see Figures 26 and 27. The action normalization influence is tested in Appendix C.
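A sketch of the kind of running normalizer we refer to (an incremental mean/variance tracker applied coordinate-wise to rewards or observations; the class is illustrative, not our exact implementation):

import numpy as np

class RunningNormalizer:
    # Keeps the empirical mean/variance of everything seen so far and standardizes new values.
    def __init__(self, shape=(), eps=1e-8):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = eps

    def update(self, x):
        # Incremental (Welford-style) update with a single sample.
        x = np.asarray(x, dtype=float)
        self.count += 1
        delta = x - self.mean
        self.mean = self.mean + delta / self.count
        self.var = self.var + (delta * (x - self.mean) - self.var) / self.count

    def normalize(self, x):
        return (np.asarray(x, dtype=float) - self.mean) / (np.sqrt(self.var) + 1e-8)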
B.5 Q-FUNCTION UPDATES
Huber loss We tried using the Huber loss for the Q-function training. It makes the results on all tasks worse, see Figure 28.
C ABLATION STUDY
In this section, we ablate the ED2 components to see their impact on performance and stability. We start with the ensemble exploration and exploitation and then move on to the action normalization and the ERE replay buffer. In all plots, a solid line and a shaded region represent an average and a 95% bootstrap confidence interval over 30 seeds in all but action normalization and ERE replay buffer experiments, where we run 7 seeds.
Exploration & Exploitation In the first experiment we wanted to isolate the effect of ensemble-based temporally coherent exploration on the performance and stability of ED2. Figures 29-32 compare the performance and stability of ED2 and one baseline, SOP, to ED2 with the single actor (the first one) used for evaluation in step 21 of Algorithm 3. It is worth noting that the action selection during the data collection, step 5 in Algorithm 3, is left unchanged – the ensemble of actors is used for exploration and each actor is trained on all the data. This should isolate the effect of exploration on
the test performance of every actor. The results show that the performance improvement and stability of ED2 do not come solely from the efficient exploration. The ED2 ablation performs comparably to the baseline and is even less stable.
In the next experiment, we wanted to check whether the ensemble evaluation alone is sufficient. Figure 33 compares the performance of ED2 and one baseline, SOP, to ED2 with the single actor (the first one) used for the data collection in step 5 of Algorithm 3. The action selection during the evaluation, step 21 in Algorithm 3, is left unchanged – the ensemble of actors is trained on the data collected only by one of the actors. We add Gaussian noise to the single actor’s actions for exploration as described in Appendix B.2. The results show that the ensemble actor test performance collapses, possibly because of training on out-of-distribution data. This implies that the ensemble of actors, used for evaluation, improves the test performance and stability. However, it is required that the same ensemble of actors is also used for exploration, during the data collection.
Action normalization The implementation details of the action normalization are described in Appendix E. Figure 34 shows that the action normalization is especially required on the Ant and Humanoid environments, while not disrupting the training on the other tasks.
ERE replay buffer The implementation details of the ERE replay buffer are described in Appendix E. In Figure 35 we observe that it improves the final performance of ED2 on all tasks, especially on Walker2d and Humanoid.
D EXPERIMENTAL SETUP
Plots In all evaluations, we used 30 evaluation episodes to better assess the average performance of each policy, as described in Section 2. For a more pleasant look and easier visual assessment, we smoothed the lines using an exponential moving average with a smoothing factor equal to 0.4.
OpenAI Gym MuJoCo In the MuJoCo environments that we used, a state is defined by the (x, y, z) position and velocity of the robot’s root, and the angular position and velocity of each of its joints. The observation holds almost all information from the state except the x and y position of the robot’s root. The action is a torque that should be applied to each joint of the robot. Sizes of those spaces for each environment are summarised in Table 2.
MuJoCo is a deterministic physics engine, thus all simulations conducted inside it are deterministic. This includes simulations of our environments. However, to simplify the process of data gathering and to counteract over-fitting, the authors of OpenAI Gym decided to introduce some stochasticity. Each episode starts from a slightly different state: initial positions and velocities are perturbed with random noise (uniform or normal, depending on the particular environment).
E IMPLEMENTATION DETAILS
Architecture and hyper-parameters In our experiments, we use deep neural networks with two hidden layers, each of them with 256 units. All of the networks use ReLU as an activation, except on the final output layer, where the activation used varies depending on the model: critic networks use no activation, while actor networks use tanh() multiplied by the max action scale. Table 3 shows the hyper-parameters used for the tested algorithms.
Action normalization Our algorithm employs action normalization proposed by Wang et al. (2020). It means that before applying the squashing function (e.g. tanh()), the outputs of each actor network are normalized in the following way: let µ = (µ1, . . . , µA) be the output of the actor’s network and let G = (1/A) ∑_{i=1}^{A} |µi| be the average magnitude of this output, where A is the action’s dimensionality. If G > 1 then we normalize the output by setting µi to µi/G for all i = 1, . . . , A. Otherwise, we leave the output unchanged. Each actor’s outputs are normalized independently from other actors in the ensemble.
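A short NumPy sketch of this normalization (the function name is ours):

import numpy as np

def normalize_action_output(mu):
    # Rescale the actor's pre-tanh output if its average magnitude G exceeds 1.
    mu = np.asarray(mu, dtype=float)
    g = np.mean(np.abs(mu))
    return mu / g if g > 1.0 else mu

# The squashed action is then: a = max_action * np.tanh(normalize_action_output(mu))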
Algorithm 3 ED2 - Ensemble Deep Deterministic Policy Gradients
Input: ensemble size K; init. policy θk and Q-functions φk,1, φk,2 param. where k ∈ [1, . . . ,K]; replay buffer D; max action scale M; target smoothing std. dev. σ; interpolation factor ρ;
1: Set the target parameters φ̄k,1 ← φk,1, φ̄k,2 ← φk,2
2: Sample the current policy index c ∼ U([1, . . . ,K]).
3: Reset the environment and observe the state s.
4: repeat
5:     Execute action a = M tanh (µθc(s))    . µ uses the action normalization
6:     Observe and store (s, a, r, s′, d) in the replay buffer D.
7:     Set s ← s′
8:     if episode is finished then
9:         Reset the environment and observe initial state s.
10:        Sample the current policy index c ∼ U([1, . . . ,K]).
11:    if time to update then
12:        for as many steps as done in the environment do
13:            Sample a batch of transitions B = {(s, a, r, s′, d)} ⊂ D    . uses ERE
14:            Compute targets
                   yk(r, s′, d) = r + γ(1 − d) min_{i=1,2} Qφ̄k,i(s′, a′k),
                   a′k = M tanh (µθk(s′) + ε), ε ∼ N (0, σ)
15:            Update the Q-functions by one step of gradient descent using
                   ∇φk,i (1/(|B| · K)) ∑_{(s,a,r,s′,d)∈B} ( Qφk,i(s, a) − yk(r, s′, d) )^2 for i ∈ {1, 2}, k ∈ [1, . . . ,K]
16:            Update the policies by one step of gradient ascent using
                   ∇θk (1/(|B| · K)) ∑_{s∈B} Qφk,1(s, µθk(s)) for k ∈ [1, . . . ,K]
17:            Update target parameters with
                   φ̄k,i ← ρ φ̄k,i + (1 − ρ) φk,i for i ∈ {1, 2}, k ∈ [1, . . . ,K]
18:    if time to evaluate then
19:        for specified number of evaluation runs do
20:            Reset the environment and observe the state s.
21:            Execute policy a = (1/K) ∑_{i=1}^{K} M tanh (µθi(s)) until the terminal state.
22:            Record and log the return.
23: until convergence
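For clarity, the target computation in line 14 corresponds to the following NumPy-style sketch for a single transition and one ensemble member k (the callables, argument names, and default values are illustrative):

import numpy as np

def clipped_double_q_target(r, s_next, d, target_actor_k, target_critics_k,
                            max_action, sigma, gamma=0.99):
    # target_actor_k(s') returns the pre-squash output mu_theta_k(s');
    # target_critics_k is the pair (Q1, Q2) of target Q-functions of member k.
    mu = target_actor_k(s_next)
    noise = np.random.normal(0.0, sigma, size=np.shape(mu))
    a_next = max_action * np.tanh(mu + noise)  # smoothed target action a'_k
    q1, q2 = target_critics_k
    # Clipped double Q-learning: take the minimum of the two target critics.
    q_next = min(q1(s_next, a_next), q2(s_next, a_next))
    return r + gamma * (1.0 - d) * q_next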
Emphasizing Recent Experience We implement the Emphasizing Recent Experience (ERE) mechanism from Wang et al. (2020). ERE samples non-uniformly from the most recent experiences stored in the replay buffer. Let B be the number of mini-batch updates and |D| be the size of the replay buffer. When performing the gradient updates, we sample from the most recent c_b data points stored in the replay buffer, where c_b = |D| · η^(b·1000/B) for b = 1, . . . , B.
The hyper-parameter η starts off with a set value of η0 and is later adapted based on the improvements in the agent training performance. Let Irecent be the improvement in terms of training episode returns made over the last |D|/2 time-steps and Imax be the maximum of such improvements over the course of the training. We adapt η according to the formula:
η = η0 · (Irecent / Imax) + 1 − (Irecent / Imax)
Our implementation uses the exponentially weighted moving average to store the value of Irecent. More concretely, we define Irecent based on two additional parameters Rrecent and Rprev so that Irecent = Rrecent −Rprev . Those parameters are then updated whenever we receive a new training episode return ep_ret:
Rrecent = λrecent · ep_ret + (1 − λrecent) · Rrecent
Rprev = λprev · ep_ret + (1 − λprev) · Rprev
where λprev = T / ⌊|D|/2⌋, λrecent = 10 · λprev, and T is the maximum length of an episode.
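A compact sketch of the ERE sampling range and the η adaptation (the replay buffer is assumed to be an ordered sequence of transitions, newest last; all names are ours):

import random

def ere_batch(buffer, batch_size, b, num_updates, eta):
    # Sample a mini-batch from the c_b most recent transitions, c_b = |D| * eta^(b*1000/B).
    c_b = int(len(buffer) * eta ** (b * 1000 / num_updates))
    c_b = max(c_b, batch_size)  # never shrink the range below one batch
    return random.sample(list(buffer)[-c_b:], batch_size)

def adapt_eta(eta0, improvement_recent, improvement_max):
    # Interpolate between eta0 (fast recent improvement) and 1 (no recent improvement).
    ratio = improvement_recent / improvement_max if improvement_max > 0 else 0.0
    return eta0 * ratio + 1.0 - ratio

def update_improvement(ep_ret, r_recent, r_prev, lam_recent, lam_prev):
    # Exponentially weighted moving averages tracking recent vs. older training returns.
    r_recent = lam_recent * ep_ret + (1.0 - lam_recent) * r_recent
    r_prev = lam_prev * ep_ret + (1.0 - lam_prev) * r_prev
    return r_recent, r_prev, r_recent - r_prev  # the last value is I_recent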
Hardware During the training of our models, we employ only CPUs using a cluster where each node has 28 available cores of 2.6 GHz, alongside at least 64 GB of memory. The running time of a typical experiment did not exceed 24 hours. | 1. What is the main contribution of the paper regarding ensemble-based actor-critic methods?
2. What are the strengths and weaknesses of the proposed algorithm ED2 compared to existing methods?
3. How does the reviewer assess the fairness and expensiveness of the experiments conducted in the paper?
4. What are the reviewer's concerns regarding the ablative studies and hyperparameter search?
5. Can the authors provide more details about the computational expense of ED2 and how it compares to other algorithms?
6. Why was det. SOP not included in some of the figures, and how does it compare to ED2 in terms of performance and computational expense?
7. The reviewer raises several questions regarding the exploration and diversity of actors in ED2. Can the authors clarify or provide insights into these aspects?
8. What is the motivation behind having a separate evaluation phase for performance measurement, and how does it relate to online learning scenarios? | Summary Of The Paper
Review | Summary Of The Paper
This paper has two main contributions: it introduces an ensemble-based actor-critic method, and it answers some pertinent questions in policy optimization by focusing on its different components. The ensemble is different from multi-actor learners that interact with multiple environments simultaneously, violating the standard RL setup. Instead, the learner of this paper maintains multiple actors and critics but uses only a single actor at a time to interact with the environment. All actors and critics are trained on a common replay buffer. The base method is the streamlined off-policy (SOP) method, which unlike soft actor-critic (SAC) doesn’t use an entropy bonus. Additionally, no exploration noise is added, resulting in their Ensemble Deep Deterministic (ED2) method.
The proposed algorithm ED2 is shown to be superior and more stable in performance according to different measures compared to existing methods. It is also revealed that actor initialization affects performance less than critic initialization. ED2 uses deterministic actors, and its exploration comes from sampling among the actors. Such a form of exploration is also shown to be superior to UCB-style exploration.
Review
Strength:
The main strength of the work is introducing a straightforward extension of an existing base actor-critic method that substantially outperforms existing algorithms in standard benchmark tasks. The new algorithm also has some desirable properties for policy optimization such as not having random additive noise and providing stable performance.
Moreover, some key insights on deep policy gradient methods are presented such as the contribution of actor and critic initialization.
Another strength of the paper is its focus on various details such as ideas that didn’t work as well as ablative studies in various manners.
Weakness:
The main weakness of the work is the fairness of the experiments. This deficiency is common in many papers, including those that get published in top conferences, but acceptance doesn’t justify wrong choices. I would like to hear the authors’ thoughts on it.
The first issue with such experiments is that only a single hyper-parameter configuration is used for each method in the comparison. Yet the claims drawn from such results are that one algorithm outperforms the others. To make such a claim, different hyper-parameter values should be tried for all algorithms. Otherwise, the claim should be humbler, for example that the proposed method outperforms the competing methods with default choices of hyper-parameters. Such a hyper-parameter search is also necessary for ablative studies. When we are removing one component at a time, we cannot assume that the default hyper-parameter configuration of the original method will still be effective for the subsequent variants.
I understand that it will make deep RL experiments much more expensive. However, it isn’t necessary to perform a grid search over hyper-parameters. It has been shown before that random search can give close to the best performance within a handful of trials of configurations, which will considerably reduce the search cost.
In a similar vein, it has been common to compare algorithms with different computational profiles. However, when a new algorithm is computationally way more expensive than the competitors, is it fair to compare them with such computational disparity and claim one algorithm is better than the other?
Details on the computational expense of ED2 are not given. How many actors are used for ED2? How much more expensive ED2 is compared to its competitors such as SAC or SOP?
Other comments:
To understand more clearly, det. SOP uses no exploration whatsoever and still performs well on these tasks?
Both the action averaging and the greedy choice among actors yielded similar results. This leads me to suspect that the actors either converged to similar performant behavior or to stationary behavior with zero torque, which upon averaging gives a behavior similar to the greedy one.
Considering the performance and the computational expense compared to ED2, det. SOP seems a strong contender. Why is it not added to Figures 6, 7, or 8?
Figure 9 somewhat makes sense except that there is a puzzle. When ED2 is reduced to a single critic, the diversity is reduced considerably, which possibly hurts the exploration and reduces performance substantially. But if we reduce the ED2 single-critic variant further by also having a single actor, then don’t we get det. SOP, which wasn’t doing as badly as ED2 with a single critic? How can that be explained?
What's really the motivation behind having a separate evaluation phase just to measure performance for plotting when learning online? If these algorithms are deployed to learn online, say on a robot, their online performance is the actual evaluation. Creating an additional evaluation phase to measure performance will only delay learning in real time. In what case is such a separate evaluation useful, other than because many other works repeat it? Even if there is a case, isn't it quite restrictive? Wouldn't it be important to see the online performance of ED2 as it randomly draws actors to interact? If that performance is also good, it would be a more interesting and stronger result.
ICLR | Title
Continuous Control With Ensemble Deep Deterministic Policy Gradients
Abstract
The growth of deep reinforcement learning (RL) has brought multiple exciting tools and methods to the field. This rapid expansion makes it important to understand the interplay between individual elements of the RL toolbox. We approach this task from an empirical perspective by conducting a study in the continuous control setting. We present multiple insights of fundamental nature, including: a commonly used additive action noise is not required for effective exploration and can even hinder training; the performance of policies trained using existing methods varies significantly across training runs, epochs of training, and evaluation runs; the critics’ initialization plays the major role in ensemble-based actor-critic exploration, while the training is mostly invariant to the actors’ initialization; a strategy based on posterior sampling explores better than the approximated UCB combined with the weighted Bellman backup; the weighted Bellman backup alone cannot replace the clipped double Q-Learning. As a conclusion, we show how existing tools can be brought together in a novel way, giving rise to the Ensemble Deep Deterministic Policy Gradients (ED2) method, to yield state-of-the-art results on continuous control tasks from OpenAI Gym MuJoCo. From the practical side, ED2 is conceptually straightforward, easy to code, and does not require knowledge outside of the existing RL toolbox.
1 INTRODUCTION
Recently, deep reinforcement learning (RL) has achieved multiple breakthroughs in a range of challenging domains (e.g. Silver et al. (2016); Berner et al. (2019); Andrychowicz et al. (2020b); Vinyals et al. (2019)). A part of this success is related to an ever-growing toolbox of tricks and methods that were observed to boost the RL algorithms’ performance (e.g. Hessel et al. (2018); Haarnoja et al. (2018b); Fujimoto et al. (2018); Wang et al. (2020); Osband et al. (2019)). This state of affairs benefits the field but also brings challenges related to often unclear interactions between the individual improvements and the credit assignment related to the overall performance of the algorithm Andrychowicz et al. (2020a); Ilyas et al. (2020).
In this paper, we present a comprehensive empirical study of multiple tools from the RL toolbox applied to the continuous control in the OpenAI Gym MuJoCo setting. These are presented in Section 4 and Appendix B. Our insights include:
• The normally distributed action noise, commonly used for exploration, hinders training.
• The current state-of-the-art methods are unstable under several stability criteria.
• The critics’ initialization plays a major role in ensemble-based actor-critic exploration, while the training is mostly invariant to the actors’ initialization.
• The approximated posterior sampling exploration (Osband et al., 2013) outperforms approximated UCB exploration combined with weighted Bellman backup (Lee et al., 2020).
• The weighted Bellman backup (Lee et al., 2020) can not replace the clipped double Q-Learning (Fujimoto et al., 2018).
To address some of the issues listed above, we introduce the Ensemble Deep Deterministic Policy Gradient (ED2) algorithm1, see Section 3. ED2 brings together existing RL tools in a novel way: it is an off-policy algorithm for continuous control, which constructs an ensemble of streamlined versions of TD3 agents and achieves the state-of-the-art performance in OpenAI Gym MuJoCo, substantially improving the results on the two hardest tasks – Ant and Humanoid. Consequently, ED2 does not require knowledge outside of the existing RL toolbox, is conceptually straightforward, and easy to code.
1Our code is based on SpinningUp (Achiam, 2018). We open-source it at: https://github.com/ed2-paper/ED2.
2 BACKGROUND
We model the environment as a Markov Decision Process (MDP). It is defined by the tuple (S, A, R, P, γ, p0), where S is a continuous multi-dimensional state space, A denotes a continuous multi-dimensional action space, P is a transition kernel, γ ∈ [0, 1) stands for a discount factor, p0 refers to an initial state distribution, and R is a reward function. The agent learns a policy from sequences of transitions τ = [(s_t, a_t, r_t, s_{t+1}, d)]_{t=0}^{T}, called episodes or trajectories, where a_t ∼ π(·|s_t), s_{t+1} ∼ P(·|s_t, a_t), r_t = R(s_t, a_t, s_{t+1}), d is a terminal signal, and T is the terminal time-step. A stochastic policy π(a|s) maps each state to a distribution over actions. A deterministic policy µ : S → A assigns each state an action. All algorithms that we consider in this paper use a different policy for collecting data (exploration) and a different policy for evaluation (exploitation). In order to keep track of the progress, the evaluation runs are performed every ten thousand environment interactions. Because of the environments’ stochasticity, we run the evaluation policy multiple times. Let {R_i}_{i=1}^{N} be a set of (undiscounted) returns from N evaluation episodes {τ_i}_{i=1}^{N}, i.e. R_i = ∑_{r_t ∈ τ_i} r_t. We evaluate the policy using the average test return R̄ = (1/N) ∑_{i=1}^{N} R_i and the standard deviation of the test returns σ = √( (1/(N−1)) ∑_{i=1}^{N} (R_i − R̄)^2 ).
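In code, the two evaluation statistics amount to the following NumPy sketch (the function name is ours):

import numpy as np

def evaluation_stats(returns):
    # Average test return and sample standard deviation over N evaluation episodes.
    returns = np.asarray(returns, dtype=float)
    r_bar = returns.mean()
    sigma = returns.std(ddof=1)  # 1/(N-1) normalization, as in the definition above
    return r_bar, sigma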
We run experiments on four continuous control tasks and their variants, introduced in the appropriate sections, from the OpenAI Gym MuJoCo suite (Brockman et al., 2016) presented in Figure 1. The agent observes vectors that describe the kinematic properties of the robot and its actions specify torques to be applied on the robot joints. See Appendix D for the details on the experimental setup.
3 ENSEMBLE DEEP DETERMINISTIC POLICY GRADIENTS
For completeness of exposition, we present ED2 before the experimental section. The ED2 architecture is based on an ensemble of Streamlined Off-Policy (SOP) agents (Wang et al., 2020), meaning that our agent is an ensemble of TD3-like agents (Fujimoto et al., 2018) with the action normalization and the ERE replay buffer. The pseudo-code listing can be found in Algorithm 1, while the implementation details, including a more verbose version of the pseudo-code (Algorithm 3), can be found in Appendix E. In the data collection phase (Lines 1-9), ED2 selects one actor from the ensemble uniformly at random (Lines 1 and 9) and runs its deterministic policy for the course of one episode (Line 4). In the evaluation phase (not shown in Algorithm 1), the evaluation policy averages all the actors’ output actions. We train the ensemble every 50 environment steps with 50 stochastic gradient descent updates (Lines 10-13). ED2 concurrently learns K · 2 Q-functions, Qφk,1 and Qφk,2 where k ∈ [1, . . . ,K], by mean square Bellman error minimization, in almost the same way that SOP learns its two Q-functions. The only difference is that we have K critic pairs that are initialized with different random weights and then trained independently with the same batches of data. Because of the different initial weights, each Q-function has a different bias in its Q-values. The K actors, πθk, are trained to maximize their corresponding first critics, Qφk,1, just like in SOP.
Algorithm 1 ED2 - Ensemble Deep Deterministic Policy Gradients
Input: init. params for policy θk and Q-functions φk,1, φk,2, k ∈ [1...K]; replay buffer D;
1: Sample the current policy index c ∼ U([1...K]).
2: Reset the environment and observe the state s.
3: repeat
4:     Execute action a = µθc(s)    . µ uses the action normalization
5:     Observe and store (s, a, r, s′, d) in the replay buffer D.
6:     Set s ← s′
7:     if episode is finished then
8:         Reset the environment and observe initial state s.
9:         Sample the current policy index c ∼ U([1...K]).
10:    if time to update then
11:        for as many steps as done in the environment do
12:            Sample a batch of transitions B = {(s, a, r, s′, d)} ⊂ D    . uses ERE
13:            Update the parameters θk, φk,1 and φk,2 by one gradient step.
14: until convergence
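The control flow of Algorithm 1 can be summarized by the Python skeleton below; `env`, `actors`, `replay_buffer`, and `update_ensemble` are placeholders for the MuJoCo environment (classic Gym API), the K deterministic policies, the buffer, and the SOP-style gradient step, so this is a sketch of the loop structure rather than the actual implementation:

import random

def ed2_interaction_loop(env, actors, replay_buffer, update_ensemble,
                         total_steps, update_every=50):
    # One actor, picked uniformly at random, acts deterministically for a whole episode.
    current = random.randrange(len(actors))
    obs = env.reset()
    for step in range(total_steps):
        action = actors[current](obs)                # deterministic, no action noise
        next_obs, reward, done, _ = env.step(action)
        replay_buffer.append((obs, action, reward, next_obs, done))
        obs = next_obs
        if done:
            obs = env.reset()
            current = random.randrange(len(actors))  # resample the acting policy
        if (step + 1) % update_every == 0:
            for _ in range(update_every):            # one gradient step per env step
                update_ensemble(replay_buffer)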
Utilizing the ensembles requires several design choices, which we summarize below. The ablation study of ED2 elements is provided in Appendix C.
Ensemble
Used: We train the ensemble of 5 actors and 5 critics; each actor learns from its own critic and the whole ensemble is trained on the same data.
Not used: We considered different actor-critic configurations, initialization schemes and relations, as well as the use of random prior networks (Osband et al., 2018), data bootstrap (Osband et al., 2016), and different ensemble sizes. We also change the SOP network sizes and training intensity instead of using the ensemble. Besides the prior networks in some special cases, these turn out to be inferior as shown in Section 4 and Appendix B.1.
Exploration
Used: We pick one actor uniformly at random to collect the data for the course of one episode. The actor is deterministic (no additive action noise is applied). These two choices ensure coherent and temporally-extended exploration similarly to Osband et al. (2016).
Not used: We tested several approaches to exploration: using the ensemble of actors, UCB (Lee et al., 2020), and adding the action noise in different proportions. These experiments are presented in Appendix B.2.
Exploitation
Used: The evaluation policy averages all the actors’ output actions to provide stable performance.
Not used: We tried picking an action with the biggest value estimate (average of the critics’ Qfunctions) in evaluation (Huang et al., 2017).
Interestingly, both policies had similar results, see Appendix B.3.
Action normalization
Used: We use the action normalization introduced by Wang et al. (2020).
Not used: We experimented with the observations and rewards normalization, which turned out to be unnecessary. The experiments are presented in Appendix B.4.
Q-function updates
Used: We do 50 SGD updates (ADAM optimizer (Kingma and Ba, 2015), MSE loss) to the actors and the critics every 50 environment interactions, use Clipped Double Q-Learning (Fujimoto et al., 2018).
Not used: We also examined doing the updates at the end of each episode (with the proportional number of updates), using the Huber loss, and doing weighted Bellman backups (Lee et al., 2020). However, we found them to bring no improvement to our method, as presented in Appendix B.5.
4 EXPERIMENTS
In this section, we present our comprehensive study and the resulting insights. The rest of the experiments verifying that our design choices perform better than alternatives are in Appendix B. Unless stated otherwise, a solid line in the figures represents an average, while a shaded region shows a 95% bootstrap confidence interval. We used 30 seeds for ED2 and the baselines and 7 seeds for the ED2 variants.
4.1 THE NORMALLY DISTRIBUTED ACTION NOISE, COMMONLY USED FOR EXPLORATION, HINDERS TRAINING
In this experiment, we deprive SOP of its exploration mechanism, namely additive normal action noise, and call this variant deterministic SOP (det. SOP). It causes relatively minor deterioration in the Humanoid performance, has no significant influence on the Hopper or Walker performance, and substantially improves the Ant performance, see Figure 2. This result shows that no additional exploration mechanism, often in a form of an exploration noise (Lillicrap et al., 2016; Fujimoto et al., 2018; Wang et al., 2020), is required for the diverse data collection and it can even hinder training.
ED2 leverages this insight and constructs an ensemble of deterministic SOP agents presented in Section 3. Figure 3 shows that ED2 magnifies the beneficial effect coming from the deterministic exploration.
ED2 achieves state-of-the-art performance on the OpenAI Gym MuJoCo suite. Figure 4 shows the results of ED2 contrasted with three strong baselines: SUNRISE (Lee et al., 2020), SOP (Wang et al., 2020), and SAC (Haarnoja et al., 2018b).
For completeness, we plot the Humanoid velocities in Figure 5 which shows that our method accelerates to a much higher velocity than the baselines.
4.2 THE CURRENT STATE-OF-THE-ART METHODS ARE UNSTABLE UNDER SEVERAL STABILITY CRITERIA
We consider three notions of stability: inference stability, asymptotic performance stability, and training stability. ED2 outperforms baselines in each of these notions, as discussed below. Similar metrics were also studied in Chan et al. (2020).
Inference stability We say that an agent is inference stable if, when run multiple times, it achieves similar test performance every time. We measure inference stability using the standard deviation of test returns explained in Section 2. We found that the existing methods train policies that are surprisingly sensitive to the randomness in the environment initial conditions2. Figure 4 and Figure 6 show that ED2 successfully mitigates this problem. By the end of the training, ED2 produces results within 1% of the average performance on Humanoid, while the performance of SUNRISE, SOP, and SAC may vary as much as 11%.
2The MuJoCo suite is overall deterministic, nevertheless, little stochasticity is injected at the beginning of each trajectory, see Appendix D for details.
Asymptotic performance stability We say that an agent achieves asymptotic performance stability if it achieves similar test performance across multiple training runs starting from different initial network weights. Figure 7 shows that ED2 has a significantly smaller variance than the other methods while maintaining high performance.
Training stability We will consider training stable if performance does not severely deteriorate from one evaluation to the next. We define the root mean squared deterioration metric (RMSD) as follows:
RMSD = √( (1/M) ∑_{i=1}^{M} max(R̄_{i−20} − R̄_i, 0)^2 ),
where M is the number of the evaluation phases during training and R̄i is the average test return at the i-th evaluation phase (described in Section 2). We compare returns 20 evaluation phases apart to ensure that the deterioration in performance doesn’t stem from the evaluation variance. ED2 has the lowest RMSD across all tasks, see Figure 8.
4.3 THE CRITICS’ INITIALIZATION PLAYS A MAJOR ROLE IN ENSEMBLE-BASED ACTOR-CRITIC EXPLORATION, WHILE THE TRAINING IS MOSTLY INVARIANT TO THE ACTORS’ INITIALIZATION
In this experiment, actors’ weights are initialized with the same random values (contrary to the standard case of different initialization). Moreover, we test a corresponding case with critics’ weights initialized with the same random values or simply training only a single critic.
Figure 9 indicates that the choice of actors’ initialization does not matter on any task but Humanoid. Although the average performance on Humanoid seems to be better, it is also less stable. This is quite interesting because the actors are deterministic. Therefore, the exploration must come from the fact that each actor is trained to optimize its own critic.
On the other hand, Figure 9 shows that the setup with the single critic severely impedes the agent’s performance. We suspect that using the single critic impairs the agent’s exploration capabilities as its actors’ policies, trained to maximize the same critic’s Q-function, become very similar.
4.4 THE APPROXIMATED POSTERIOR SAMPLING EXPLORATION OUTPERFORMS APPROXIMATED UCB EXPLORATION COMBINED WITH WEIGHTED BELLMAN BACKUP
ED2 uses a posterior sampling based exploration method (Osband et al., 2016). SUNRISE, on the other hand, approximates the Upper Confidence Bound (UCB) exploration technique and does weighted Bellman backups (Lee et al., 2020). For a fair comparison between ED2 and SUNRISE, we replace the SUNRISE base algorithm, SAC, with the SOP algorithm used by ED2. We call this variant SUNRISE-SOP.
We test both methods on the standard MuJoCo benchmarks as well as on their delayed (Zheng et al., 2018a) and sparse (Plappert et al., 2018) rewards variants. Both variations make the environments harder from the exploration standpoint. In the delayed version, the rewards are accumulated and returned to the agent only every 10 time-steps. In the sparse version, the reward for the forward motion is returned to the agent only after it crosses the threshold of one unit on the x-axis. For perspective, a fully trained Humanoid moves around five units by the end of an episode. All the other reward components (living reward, control cost, and contact cost) remain unchanged. The results are presented in Table 1.
ED2 outperforms the non-ensemble method SOP, supporting the argument of coherent and temporally-extended exploration of ED2. Moreover, we observe that performance in MuJoCo environments benefits from the ED2 approximate Bayesian posterior sampling exploration (Osband et al., 2013) in contrast to the approximated UCB in SUNRISE, which follows the OFU principle. Posterior sampling has been proven to be theoretically superior to the OFU strategy (Osband and Van Roy, 2017).
The experiment where ED2’s exploration mechanism is replaced with UCB is in Appendix B.2. This variant also achieves worse results than ED2. The additional exploration efficiency experiment in the custom Humanoid environment, where an agent has to find and reach a goal position, is in Appendix A.
4.5 THE WEIGHTED BELLMAN BACKUP CAN NOT REPLACE THE CLIPPED DOUBLE Q-LEARNING
We applied the weighted Bellman backups proposed by Lee et al. (2020) to our method. It is suggested that the method mitigates error propagation in Q-learning by re-weighting the Bellman backup based on uncertainty estimates from an ensemble of target Q-functions (i.e. variance of predictions). Interestingly, Figure 10 does not show this positive effect on ED2.
Our method uses clipped double Q-Learning to mitigate overestimation in Q-functions (Fujimoto et al., 2018). We wanted to check if it is required and if it can be exchanged for the weighted Bellman backups used by Lee et al. (2020). Figure 11 shows that clipped double Q-Learning is required and that the weighted Bellman backups can not replace it.
5 RELATED WORK
Off-policy RL Recently, multiple deep RL algorithms for continuous control have been proposed, e.g. DDPG (Lillicrap et al., 2016), TD3 (Fujimoto et al., 2018), SAC (Haarnoja et al., 2018b), SOP (Wang et al., 2020), SUNRISE (Lee et al., 2020). They provide a variety of methods for improving training quality, including double-Q bias reduction van Hasselt et al. (2016), target policy smoothing or different update frequencies for actor and critic Fujimoto et al. (2018), entropy regularization Haarnoja et al. (2018b), action normalization Wang et al. (2020), prioritized experience replay Wang et al. (2020), weighted Bellman backups Kumar et al. (2020); Lee et al. (2020), and use of ensembles Osband et al. (2019); Lee et al. (2020); Kurutach et al. (2018); Chua et al. (2018).
Ensembles Deep ensembles are a practical approximation of a Bayesian posterior, offering improved accuracy and uncertainty estimation Lakshminarayanan et al. (2017); Fort et al. (2019). They
inspired a variety of methods in deep RL. They are often used for temporally-extended exploration; see the next paragraph. Other than that, ensembles of different TD-learning algorithms were used to calculate better Q-learning targets (Chen et al., 2018). Others proposed to combine the actions and value functions of different RL algorithms Wiering and van Hasselt (2008) or the same algorithm with different hyper-parameters Huang et al. (2017). For mixing the ensemble components, complex self-adaptive confidence mechanisms were proposed in Zheng et al. (2018b). Our method is simpler: it uses the same algorithm with the same hyper-parameters without any complex or learnt mixing mechanism. Lee et al. (2020) proposed a unified framework for ensemble learning in deep RL (SUNRISE) which uses bootstrap with random initialization Osband et al. (2016) similarly to our work. We achieve better results than SUNRISE and show in Appendix B that their UCB exploration and weighted Bellman backups do not aid our algorithm performance.
Exploration Various frameworks have been developed to balance exploration and exploitation in RL. The optimism in the face of uncertainty principle Lai and Robbins (1985); Bellemare et al. (2016) assigns an overly optimistic value to each state-action pair, usually in the form of an exploration bonus reward, to promote visiting unseen areas of the environment. The maximum entropy method Haarnoja et al. (2018a) encourages the policy to be stochastic, hence boosting exploration. In the parameter space approach Plappert et al. (2018); Fortunato et al. (2018), noise is added to the network weights, which can lead to temporally-extended exploration and a richer set of behaviours. Posterior sampling Strens (2000); Osband et al. (2016; 2018) methods have similar motivations. They stem from the Bayesian perspective and rely on selecting the maximizing action among sampled and statistically plausible set of action values. The ensemble approach Lowrey et al. (2018); Miłoś et al. (2019); Lee et al. (2020) trains multiple versions of the agent, which yields a diverse set of behaviours and can be viewed as an instance of posterior sampling RL.
6 CONCLUSIONS
We conduct a comprehensive empirical analysis of multiple tools from the RL toolbox applied to the continuous control in the OpenAI Gym MuJoCo setting. We believe that the findings can be useful to RL researchers. Additionally, we propose Ensemble Deep Deterministic Policy Gradients (ED2), an ensemble-based off-policy RL algorithm, which achieves state-of-the-art performance and addresses several issues found during the aforementioned study.
7 REPRODUCIBILITY STATEMENT
We have made a significant effort to make our results reproducible. We use 30 random seeds, which is above the currently popular choice in the field (up to 5 seeds). Furthermore, we systematically explain our design choices in Section 3 and we provide a detailed pseudo-code of our method in Algorithm 3 in Appendix E. Additionally, we open-sourced the code for the project3 together with examples of how to reproduce the main experiments. The implementation details are explained in Appendix E and extensive information about the experimental setup is given in Appendix D.
3https://github.com/ed2-paper/ED2
A EXPLORATION EFFICIENCY IN THE CUSTOM HUMANOID ENVIRONMENT
To check the exploration capabilities of our method, we constructed two environments based on Humanoid where the goal is not only to move forward as fast as possible but to find and get to the specific region. The environments are described in Figure 12.
Because the Humanoid initial state is slightly perturbed every run, we compare solved rates over multiple runs, see details in Appendix D. Figure 13 compares the solved rates of our method and the three baselines. Our method outperforms the baselines. For this experiment, our method uses the prior networks (Osband et al., 2018).
B DESIGN CHOICES
In this section, we summarize the empirical evaluation of various design choices grouped by topics related to an ensemble of agents (B.1), exploration (B.2), exploitation (B.3), normalization (B.4), and Q-function updates (B.5). In the plots, a solid line and a shaded region represent an average and a 95% bootstrap confidence interval over 30 seeds in case of ED2 (ours) and 7 seeds otherwise. All of these experiments test ED2 presented in Section 3 with Algorithm 2 used for evaluation (the ensemble critic variant). We call Algorithm 2 a ’vote policy’.
Algorithm 2 Vote policy
1: Input: ensemble size K; policy θk and Q-function φk,1 parameters where k ∈ [1, . . . ,K]; max action scale M;
2: function VOTE_POLICY(s, c)
       ak = M tanh (µθk(s)) for k ∈ [1, . . . ,K]                              (1)
3:     if use arbitrary critic then
           qk = Qφc,1(s, ak) for k ∈ [1, . . . ,K]                             (2)
4:     else use ensemble critic
           qk = (1/K) ∑_{i∈[1...K]} Qφi,1(s, ak) for k ∈ [1, . . . ,K]         (3)
5:     return ak for k = argmaxk qk
B.1 ENSEMBLE
Prior networks We tested if our algorithm can benefit from prior networks (Osband et al., 2018). It turned out that the results are very similar on OpenAI Gym MuJoCo tasks, see Figure 14. However, the prior networks are useful on our crafted hard-exploration Humanoid environments, see Figure 15.
Ensemble size Figure 16 shows ED2 with different ensemble sizes. As can be seen, the ensemble of size 5 (which we use in ED2) achieves good results, striking a balance between performance and computational overhead.
Data bootstrap Osband et al. (2016) and Lee et al. (2020) remark that training an ensemble of agents using the same training data but with different initialization achieves, in most cases, better performance than applying different training samples to each agent. We confirm this observation in Figure 17. Data bootstrap assigned each transition to each agent in the ensemble with 50% probability.
SOP bigger networks and training intensity We checked if simply training SOP with bigger networks or with higher training intensity (a number of updates made for each collected transition) can get it close to the ED2 results. Figure 18 compares ED2 to SOP with different network sizes, while Figure 19 compares ED2 to SOP with one or five updates per environment step. It turns out that bigger networks or higher training intensity does not improve SOP performance.
B.2 EXPLORATION
Vote policy In this experiment, we used the so-called "vote policy" described in Algorithm 2. We use it for action selection in step 5 of Algorithm 3 in two variations: (1) where a random critic, chosen for the duration of one episode, evaluates each actor’s action, or (2) where the full ensemble of critics evaluates the actors’ actions. Figure 20 shows that the arbitrary critic is not much different from our method. However, in the case of the ensemble critic, we observe a significant performance drop suggesting deficient exploration.
UCB We tested the UCB exploration method from Lee et al. (2020). This method defines an upper-confidence bound (UCB) based on the mean and variance of Q-functions in an ensemble and selects actions with the highest UCB for efficient exploration. Figure 21 shows that the UCB exploration method makes the results of our algorithm worse.
Gaussian noise While our method uses ensemble-based temporally coherent exploration, the most popular choice of exploration is injecting i.i.d. noise (Fujimoto et al., 2018; Wang et al., 2020). We evaluate if these two approaches can be used together. We used Gaussian noise with a standard deviation of 0.29, the default value in Wang et al. (2020). We found that the effects are task-specific: barely visible for Hopper and Walker, positive in the case of Humanoid, and negative for Ant – see Figure 22. In a more refined experiment, we varied the noise level. With more noise the Humanoid results are better, whereas the Ant results are worse – see Figure 23.
B.3 EXPLOITATION
We used the vote policy, see Algorithm 2, as the evaluation policy in step 21 of Algorithm 3. Figure 24 shows that the vote policy does worse on the OpenAI Gym MuJoCo tasks. However, on our custom Humanoid tasks introduced in Section 4, it improves our agent performance – see Figure 25.
B.4 NORMALIZATION
We validated if rewards or observations normalization (Andrychowicz et al., 2020a) helps our method. In both cases, we keep the empirical mean and standard deviation of each reward/observation coordinate, based on all rewards/observations seen so far, and normalize rewards/observations by subtracting the empirical mean and dividing by the standard deviation. It turned out that only the observations normalization significantly helps the agent on Humanoid, see Figures 26 and 27. The action normalization influence is tested in Appendix C.
B.5 Q-FUNCTION UPDATES
Huber loss We tried using the Huber loss for the Q-function training. It makes the results on all tasks worse, see Figure 28.
C ABLATION STUDY
In this section, we ablate the ED2 components to see their impact on performance and stability. We start with the ensemble exploration and exploitation and then move on to the action normalization and the ERE replay buffer. In all plots, a solid line and a shaded region represent an average and a 95% bootstrap confidence interval over 30 seeds in all but action normalization and ERE replay buffer experiments, where we run 7 seeds.
Exploration & Exploitation In the first experiment we wanted to isolate the effect of ensemble-based temporally coherent exploration on the performance and stability of ED2. Figures 29-32 compare the performance and stability of ED2 and one baseline, SOP, to ED2 with the single actor (the first one) used for evaluation in step 21 of Algorithm 3. It is worth noting that the action selection during the data collection, step 5 in Algorithm 3, is left unchanged – the ensemble of actors is used for exploration and each actor is trained on all the data. This should isolate the effect of exploration on
the test performance of every actor. The results show that the performance improvement and stability of ED2 do not come solely from the efficient exploration. The ED2 ablation performs comparably to the baseline and is even less stable.
In the next experiment, we wanted to check whether the ensemble evaluation alone is sufficient. Figure 33 compares the performance of ED2 and one baseline, SOP, to ED2 with the single actor (the first one) used for the data collection in step 5 of Algorithm 3. The action selection during the evaluation, step 21 in Algorithm 3, is left unchanged – the ensemble of actors is trained on the data collected only by one of the actors. We add Gaussian noise to the single actor’s actions for exploration as described in Appendix B.2. The results show that the ensemble actor test performance collapses, possibly because of training on out-of-distribution data. This implies that the ensemble of actors, used for evaluation, improves the test performance and stability. However, it is required that the same ensemble of actors is also used for exploration, during the data collection.
Action normalization The implementation details of the action normalization are described in Appendix E. Figure 34 shows that the action normalization is especially required on the Ant and Humanoid environments, while not disrupting the training on the other tasks.
ERE replay buffer The implementation details of the ERE replay buffer are described in Appendix E. In Figure 35 we observe that it improves the final performance of ED2 on all tasks, especially on Walker2d and Humanoid.
D EXPERIMENTAL SETUP
Plots In all evaluations, we used 30 evaluation episodes to better assess the average performance of each policy, as described in Section 2. For a more pleasant look and easier visual assessment, we smoothed the lines using an exponential moving average with a smoothing factor equal to 0.4.
OpenAI Gym MuJoCo In the MuJoCo environments that we used, a state is defined by the (x, y, z) position and velocity of the robot’s root, and the angular position and velocity of each of its joints. The observation holds almost all information from the state except the x and y position of the robot’s root. The action is a torque that should be applied to each joint of the robot. Sizes of those spaces for each environment are summarised in Table 2.
MuJoCo is a deterministic physics engine, thus all simulations conducted inside it are deterministic. This includes simulations of our environments. However, to simplify the process of data gathering and to counteract over-fitting, the authors of OpenAI Gym decided to introduce some stochasticity. Each episode starts from a slightly different state: initial positions and velocities are perturbed with random noise (uniform or normal, depending on the particular environment).
E IMPLEMENTATION DETAILS
Architecture and hyper-parameters In our experiments, we use deep neural networks with two hidden layers, each of them with 256 units. All of the networks use ReLU as an activation, except on the final output layer, where the activation used varies depending on the model: critic networks use no activation, while actor networks use tanh() multiplied by the max action scale. Table 3 shows the hyper-parameters used for the tested algorithms.
Action normalization Our algorithm employs action normalization proposed by Wang et al. (2020). It means that before applying the squashing function (e.g. tanh()), the outputs of each actor network are normalized in the following way: let µ = (µ1, . . . , µA) be the output of the actor’s network and let G = (∑_{i=1}^{A} |µi|) / A be the average magnitude of this output, where A is the action’s dimensionality. If G > 1 then we normalize the output by setting µi to µi/G for all i = 1, . . . , A. Otherwise, we leave the output unchanged. Each actor’s outputs are normalized independently from other actors in the ensemble.
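To make the normalization above concrete, a minimal sketch follows; the function name and the example values are illustrative assumptions, not part of the original implementation.

```python
import numpy as np

def normalize_action_output(mu):
    """Rescale one actor's pre-squashing output as described above.

    If the average magnitude G of the output exceeds 1, divide every
    component by G; otherwise leave the output unchanged.
    """
    mu = np.asarray(mu, dtype=np.float64)
    G = np.abs(mu).mean()          # G = (sum_i |mu_i|) / A
    return mu / G if G > 1.0 else mu

# Example: a 4-dimensional output with large average magnitude gets rescaled.
print(normalize_action_output([2.0, -3.0, 0.5, 1.5]))
```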
Algorithm 3 ED2 – Ensemble Deep Deterministic Policy Gradients
Input: ensemble size K; initial policy θk and Q-function φk,1, φk,2 parameters, where k ∈ [1, . . . , K]; replay buffer D; max action scale M; target smoothing std. dev. σ; interpolation factor ρ.
1: Set the target parameters φ̄k,1 ← φk,1, φ̄k,2 ← φk,2
2: Sample the current policy index c ∼ U([1, . . . , K]).
3: Reset the environment and observe the state s.
4: repeat
5:   Execute action a = M tanh(µθc(s))        ▷ µ uses the action normalization
6:   Observe and store (s, a, r, s′, d) in the replay buffer D.
7:   Set s ← s′.
8:   if episode is finished then
9:     Reset the environment and observe the initial state s.
10:    Sample the current policy index c ∼ U([1, . . . , K]).
11:  if time to update then
12:    for as many steps as done in the environment do
13:      Sample a batch of transitions B = {(s, a, r, s′, d)} ⊂ D        ▷ uses ERE
14:      Compute targets
           yk(r, s′, d) = r + γ(1 − d) min_{i=1,2} Qφ̄k,i(s′, a′k),
           a′k = M tanh(µθk(s′) + ε),  ε ∼ N(0, σ)
15:      Update the Q-functions by one step of gradient descent using
           ∇φk,i (1 / (|B| · K)) ∑_{(s,a,r,s′,d)∈B} (Qφk,i(s, a) − yk(r, s′, d))²   for i ∈ {1, 2}, k ∈ [1, . . . , K]
16:      Update the policies by one step of gradient ascent using
           ∇θk (1 / (|B| · K)) ∑_{s∈B} Qφk,1(s, µθk(s))   for k ∈ [1, . . . , K]
17:      Update target parameters with
           φ̄k,i ← ρ φ̄k,i + (1 − ρ) φk,i   for i ∈ {1, 2}, k ∈ [1, . . . , K]
18:  if time to evaluate then
19:    for the specified number of evaluation runs do
20:      Reset the environment and observe the state s.
21:      Execute policy a = (1/K) ∑_{i=1}^{K} M tanh(µθi(s)) until the terminal state.
22:      Record and log the return.
23: until convergence
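The two action-selection rules in Algorithm 3 (a single sampled actor for data collection in step 5, the ensemble average for evaluation in step 21) can be sketched as below. The linear placeholder actors, shapes, and constants are assumptions used only to illustrate the control flow; they are not the original code.

```python
import numpy as np

rng = np.random.default_rng(0)
K, obs_dim, act_dim, M = 5, 8, 2, 1.0

# Placeholder linear "actors"; in ED2 these are MLPs as described above.
actors = [rng.normal(size=(obs_dim, act_dim)) * 0.1 for _ in range(K)]

def actor_mu(k, s):
    return s @ actors[k]  # stands in for mu_theta_k(s)

def exploration_action(s, c):
    # Step 5: the currently sampled actor c acts alone (no additive noise).
    return M * np.tanh(actor_mu(c, s))

def evaluation_action(s):
    # Step 21: average the squashed outputs of all K actors.
    return np.mean([M * np.tanh(actor_mu(k, s)) for k in range(K)], axis=0)

s = rng.normal(size=obs_dim)
c = rng.integers(K)          # steps 2 and 10: sample the current policy index
print(exploration_action(s, c), evaluation_action(s))
```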
Emphasizing Recent Experience We implement the Emphasizing Recent Experience (ERE) mechanism from Wang et al. (2020). ERE samples non-uniformly from the most recent experiences stored in the replay buffer. Let B be the number of mini-batch updates and |D| be the size of the replay buffer. When performing the gradient updates, we sample from the most recent cb data points stored in the replay buffer, where cb = |D| · η^(b·1000/B) for b = 1, . . . , B.
The hyper-parameter η starts off with a set value of η0 and is later adapted based on the improvements in the agent training performance. Let Irecent be the improvement in terms of training episode returns made over the last |D|/2 time-steps and Imax be the maximum of such improvements over the course of the training. We adapt η according to the formula:
η = η0 · (Irecent / Imax) + (1 − Irecent / Imax)
Our implementation uses the exponentially weighted moving average to store the value of Irecent. More concretely, we define Irecent based on two additional parameters Rrecent and Rprev so that Irecent = Rrecent −Rprev . Those parameters are then updated whenever we receive a new training episode return ep_ret:
Rrecent = λrecent · ep_ret + (1 − λrecent) · Rrecent
Rprev = λprev · ep_ret + (1 − λprev) · Rprev
where λprev = T / ⌊|D|/2⌋, λrecent = 10 · λprev, and T is the maximum length of an episode.
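A small sketch of the ERE sampling range and the η adaptation described above follows; the buffer layout (index 0 oldest), constants, and improvement values are illustrative assumptions.

```python
import numpy as np

def ere_range(buffer_size, b, B, eta):
    # c_b = |D| * eta^(b*1000/B): number of most recent transitions
    # eligible for the b-th of B mini-batch updates.
    c_b = int(buffer_size * eta ** (b * 1000.0 / B))
    return max(c_b, 1)

def adapt_eta(eta0, improvement_recent, improvement_max):
    # eta = eta0 * (I_recent / I_max) + (1 - I_recent / I_max)
    ratio = improvement_recent / max(improvement_max, 1e-8)
    return eta0 * ratio + (1.0 - ratio)

rng = np.random.default_rng(0)
buffer_size, B = 1_000_000, 1000
eta = adapt_eta(0.995, improvement_recent=50.0, improvement_max=200.0)
for b in (1, B // 2, B):
    c_b = ere_range(buffer_size, b, B, eta)
    # Sample only from the most recent c_b entries (the tail of the buffer).
    idx = rng.integers(buffer_size - c_b, buffer_size, size=4)
    print(b, c_b, idx)
```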
Hardware During the training of our models, we employ only CPUs using a cluster where each node has 28 available cores of 2.6 GHz, alongside at least 64 GB of memory. The running time of a typical experiment did not exceed 24 hours.
1. What are the interesting findings shown in the paper regarding ensemble deep reinforcement learning?
2. What are the issues with the paper regarding its technical innovation and novelty?
3. How does the reviewer assess the theoretical depth of the paper and its ability to provide explanations for the experimental observations?
4. Are there any inconsistencies or known results in the paper's experiment findings and claims?
5. How does the reviewer evaluate the overall quality and impact of the paper's content?
Summary Of The Paper
This paper conducted an experimental study over a range of tricks that are often exploited to facilitate ensemble deep reinforcement learning. The experiment results show several interesting findings. For example, it was found that commonly used additive action noise may not be necessary for effective exploration. Meanwhile, experiments show that the initialization of critics perhaps has a higher impact on learning performance than the initialization methods adopted for actors. These findings can be quite important to guide the future design of more effective ensemble reinforcement learning algorithms.
Review
While this paper seems to show some interesting new results related to ensemble reinforcement learning, there are several issues with this paper in this current shape:
The technical innovation of this paper remains largely unclear. It was claimed by the authors in this paper that ED2 brings together existing RL tools in a novel way. However, it is unclear which part of the design of ED2 is truly novel. As far as I am aware, ED2 mainly used existing training techniques and ensemble tricks. A series of experiments were carried out to justify the use of several different tricks in ED2. While the combined use of these tricks might be new in ED2, it is not clear why such a combination is potentially more superior than other possible combinations. Furthermore, since the experiments focus mainly on four benchmark problems, it is questionable whether ED2 can achieve clearly better performance over other ensemble baseline algorithms on a much wider range of reinforcement learning problems. Hence the novelty and technical contribution of this paper may need to be improved.
This paper lacks theoretical depth. The experiment results in the paper only revealed some insights. However, no further theoretical analysis was conducted to verify (or at least partially explain) the experimental observations. For example, on page 6, the authors conjectured that good exploration may come more from the critic. While this sounds interesting, it is unclear why the critic will play such a critical role to induce effective exploration and what the corresponding conditions are for this to happen.
Some experiment findings and the corresponding claims do not appear to be consistent. For example, on page 4, the authors found experimentally that additive normal action noise can substantially improve the Ant performance. They subsequently concluded that additive noise is not required for effective learning. These two claims do not sound consistent. Accordingly, the main findings discovered in the paper may need to be further verified.
Some experiment findings appear to be well-known a priori in the literature. For example, as acknowledged by the authors, posterior sampling techniques can be more effective than the OFU strategy for action selection. Consequently, the technical contribution of the corresponding experiment results does not seem to be sufficiently strong. |
ICLR | Title
On the Efficiency of Deep Neural Networks
Abstract
The efficiency of neural networks is essential in large-scale deployment scenarios such as mobile applications, internet of things, and edge computing. For a given performance requirement, an efficient neural network should use the simplest network architecture with a minimal number of parameters and connections. In this paper, we introduce a framework to analyze and obtain efficient neural networks. In summary, our main contributions are three-fold. Our first contribution is the subnetwork hypothesis to address overfitting issues and help explain the effectiveness of several key techniques in training efficient networks: 1) softmax normalization in output layers may be one major cause of overparameterization; 2) using log likelihood ratio representation in output layers can reduce overfitting; 3) weight decaying and structural regularization can also effectively reduce overfitting. The second contribution is a simple and effective snapshot-based procedure to prune a well-trained network that minimizes overfitting – pruning unimportant weights and connections first, and simply adjust remaining non-weight parameters using the backpropagation algorithm. Besides, the snapshot-based pruning method can also be used to evaluate the efficiency of trained networks. Finally, we hypothesize that there exist lower bounds of the total number of bits for representing parameters and connections regarding performance metrics for a given optimization problem. Rather than focusing on improving the sole accuracy metric with more complex network architectures, it is also important to explore the trade-offs between accuracy and the total number of representation bits, when comparing different network architectures and implementations.
1 INTRODUCTION
Deep learning has achieved tremendous success in large-scale machine learning systems, such as big-data analytics (Najafabadi et al., 2015), billion-parameter generative models for natural language processing (Brown et al., 2020; Radford et al., 2019), and computer vision for self-driving cars (Grigorescu et al., 2020). A general trend for recent success is the use of neural networks of ever-increasing model sizes and their exponentially increasing computation power requirement. Training these gigantic neural network models require tens of thousands parallel computing units inside dedicated computer clusters with extremely high transient storage capacity and data synchronization bandwidth. Consequently, some of the state-of-the-art models are only accessible to very few researchers in the machine learning community.
On the other hand, large-scale deployment of machine learning applications in low-power scenarios, such as mobile applications, internet-of-things (IoT), and edge computing, has put more stringent requirements on the efficiency of neural network models. For a given problem and performance metrics, efficient neural network models should have a minimal number of weights and connections, simple network topology and architecture suitable for low-power computing devices, and low data bandwidth and transient storage requirements. It is important to investigate the model efficiency problem to bridge the performance gap between petascale high-end models and low-power neural architectures for large-scale deployment. However, methods and principles for obtaining efficient deep neural networks have not yet been thoroughly studied.
In this paper, we introduce a framework to analyze and obtain efficient deep neural networks. Especially, we identify several key issues in training efficient deep neural networks and propose a new model compression procedure to prune redundant weights and connections. One important in-
sight of our study is the high correlation between overfitting and model efficiency. Overfitting may improve training accuracy, but it can cause overparameterization. In Section 2, we show that softmax output layers can introduce non-deterministic effects to the backpropagation algorithm, yielding redundant subnetworks with exploding numbers of parameters. To solve this problem, we propose the log likelihood ratio (LLR) representation for output layers. We also investigate potential mechanisms for weight decaying and structural regularization to reduce overfitting. Furthermore, we propose a simple and effective snapshot-based pruning procedure to obtain efficient deep neural networks. We empirically validate this novel approach in Section 3 using various deep learning architectures including LeNet, ResNet, and DenseNet, on the MNIST, CIFAR-10, and CIFAR-100 datasets. Based on the empirical results, we further discuss the model efficiency regarding information cost of model representation. Section 4 reviews prior work in regularization, overfitting, and model compression, followed by Section 5 that concludes the paper.
2 EFFICIENT NEURAL NETWORKS
A fundamental assumption of our analysis is that a complex neural network can be decomposed into subnetworks that are responsible for different operation modes. In other words, the complex nonlinear function of a neural network can be decomposed into groups of sub-functions. Each group of sub-functions represents one mode of operation. In this way, the efficiency of a neural network highly depends on the composition and correlation between these groups of sub-functions. Thus, overfitting may be viewed as forming redundant subnetworks that reduce the efficiency of trained networks.
In this section, we shows that a critical step for obtaining efficient neural networks is to eliminate redundant subnetworks by minimizing overfitting. We first analyze the overfitting issues caused by redundant subnetworks and describe potential mitigating mechanisms. Several hypotheses presented in this section will also be empirically validated using experiments in Section 3. Finally, we introduce a novel snapshot-based procedure to obtain efficient deep neural networks by pruning their unimportant weights and connections. This procedure is also used to analyze the efficiency of trained networks.
2.1 SOFTMAX NORMALIZATION
The softmax function, a.k.a. softargmax, is a normalization function often used as the last activation function of a neural network (Bishop, 2006). Let Z = {z0, z1, · · · , zi, · · · } represent the input vector; the softmax output vector Q = {q0, q1, · · · , qi, · · · } is then defined as
qi = exp(zi) / ∑_k exp(zk)    (1)
where qi ∈ [0, 1] and ∑_i qi = 1. Thus, the normalized output vector can be interpreted as marginal probabilities. The softmax output can be naturally combined with the cross entropy function J = −∑_i pi log qi, where pi is the target probability. The derivative of J with respect to zi takes the simple form qi − pi (Goodfellow et al., 2016). The simple probabilistic interpretation and derivative computation make the combination of softmax normalization and cross entropy loss a pervasive choice for multinomial classification problems. However, potential issues in using softmax normalization with the backpropagation (BP) algorithm have not been fully investigated.
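For reference, a numerically stable version of Eq. (1) can be written as below; this is a standard sketch, not taken from the paper's code.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=np.float64)
    z = z - z.max()                 # subtracting the max does not change Eq. (1)
    e = np.exp(z)
    return e / e.sum()

q = softmax([2.0, 1.0, 0.1])
print(q, q.sum())                   # components in [0, 1], summing to 1
```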
Suppose a neural network G can be decomposed into two or more smaller subnetworks G = {G0,G1, · · · ,Gm, · · · } with the same feature input X. The final activation Z is the superposition of the subnetwork activation before the softmax normalization in the output layer
Z = ∑_{m=0}^{M} Ym = ∑_{m=0}^{M} fm(X)    (2)
where fm is the non-linear function representing subnetwork Gm. The decomposition is done according to the final activation without considering intermediate hidden layers. The softmax normalization operation has the following properties regarding the relationship between subnetwork activations (see Appendix A).
1. If the subnetwork activations are linear offset versions of each other, such that Y0 = Y1 − β1 · · · = Ym − βm · · · , the normalization result of the whole network is equivalent to applying the softmax function to the activation of any subnetwork scaled by M : Q = softmax(MYm). Note that the offset between subnetwork activation Ym has no impact on the softmax output. If the activations Ym are linearly semi-correlated, the generalized softmax property is applicable, i.e., that Q ≈ softmax(MYm).
2. If the subnetwork activations are scaled versions of each other, such that Y0 = α1Y1 · · · = αkYk · · · and 1 ≥ α1 ≥ α2 ≥ · · · ≥ αk · · · , the normalization operation is equivalent to applying the softmax function to the scaled principal subnetwork: Q = softmax(SY0), where S = 1+ α1 + α2 + · · · . The softmax normalization allows proportional integration of information. A single subnetwork that has very strong activation (higher prediction probabilities) can dominate over other subnetworks with weak activations. If there are no dominant subnetworks, the total number of contributing subnetworks may be large and the whole network tends to be overparameterized.
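The two properties above are easy to check numerically. The sketch below is an illustration under assumed random activations, reading the scaled case as Y1 = αY0; it is not part of the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

rng = np.random.default_rng(0)
Y0 = rng.normal(size=10)

# Property 1: offset copies, Y1 = Y0 + beta.
Y1 = Y0 + 3.7
print(np.allclose(softmax(Y0 + Y1), softmax(2 * Y0)))                     # True

# Property 2 (scaled copies, here taken as Y1 = alpha * Y0 with alpha <= 1):
alpha = 0.4
print(np.allclose(softmax(Y0 + alpha * Y0), softmax((1 + alpha) * Y0)))   # True
```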
In short, the softmax function can act as a super combinator for different modes of the neural network, summing and amplifying weak subnetwork activations. This could partially explain why deep neural networks are so expressive that they are suitable for diverse types of problems. However, when there are redundant subnetworks that produce linearly correlated activations, the softmax normalization function make them indistinguishable from each other. The linearly correlated subnetworks potentially lead to overfitting and overparameterization. We have the following hypothesis regarding the effects of such redundant subnetworks: Hypothesis 1: For deep neural networks, the existence of redundant subnetworks combining with softmax normalization can lead to overfitting and overparameterization when training with the backpropagation algorithm.
Since the derivative of the cross entropy loss is linear with regard to the softmax output Q and target P, and softmax normalization makes it impossible to differentiate between the effects of different subnetworks, the BP algorithm will fine-tune all the parameters without penalizing any individual subnetwork. Therefore, the initialization of weights may create redundant subnetworks that have non-deterministic effects on the training process. For example, Mishkin & Matas (2016) demonstrated that initialization of weights can affect test accuracy. Such behaviors and the existence of redundant subnetworks will be validated from empirical results in Section 3.
2.2 LLR REPRESENTATION
The softmax normalization is mainly used to convert neural network outputs to probabilities. However, the softmax normalization allows linearly correlated subnetwork activations and potentially introduces overfitting. Therefore, it is desirable to avoid softmax normalization in output layers. It turns out that using the log likelihood ratio (LLR) representation in output layers can avoid normalization and overfitting issues. Given a binary random variable X and P1(X) = {probability X is true}, the LLR for X can be defined as
LLR(X) = log [ P1(X) / (1 − P1(X)) ]    (3)
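The mapping between class probabilities and LLR values in Eq. (3) is a simple logit/sigmoid pair; the sketch below is a generic illustration, not the paper's implementation, and the clipping constant is an assumption for numerical safety.

```python
import numpy as np

def prob_to_llr(p, eps=1e-12):
    p = np.clip(np.asarray(p, dtype=np.float64), eps, 1.0 - eps)
    return np.log(p / (1.0 - p))                    # Eq. (3)

def llr_to_prob(llr):
    return 1.0 / (1.0 + np.exp(-np.asarray(llr)))   # inverse mapping (sigmoid)

p = np.array([0.1, 0.5, 0.9])
print(prob_to_llr(p))                 # negative, zero, positive LLR values
print(llr_to_prob(prob_to_llr(p)))    # recovers the original probabilities
```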
Since neural networks can model arbitrary non-linear functions, we can adopt LLR representation for each component of the outputs Y and target labels T . For both multi-class and multi-label classification, the problem can be regarded as multiple binary regression problems adopting the LLR representation for each class. Therefore, output normalization across different classes is not needed, but loss functions need to be changed accordingly – we introduce the bipolar softplus (BSP) loss function as defined in Appendix B.1. We demonstrate that the LLR representation combined with the BSP loss function does not need normalization and avoids the introduction of redundant subnetworks. The choice of loss functions is not mandatory and there may be better alternative loss functions. The optimization of loss functions will be addressed in future study. In this paper, we use empirical results to demonstrate the effectiveness and behavior of this novel scheme. We introduce the following hypothesis regarding the LLR representation: Hypothesis 2: For classification problems with deep neural networks, using the LLR representation for the output layer and the target labels can reduce overfitting and avoid overparameterization compared with softmax normalization.
It is worth emphasizing that the LLR representation has clear physical meanings, which could help the explainability of neural networks. LLR values are symmetrical and centered around zero, which can be regarded as a natural normalization point. Note that LLR values have range (−∞,+∞) and a large magnitude means higher confidence in prediction. Thus, by controlling the LLR magnitude of the target labels, we can introduce regularization to network outputs.
2.3 WEIGHT DECAYING
In previous discussion, potential issues in normalization and representation are analyzed by decomposing the activation in the output layer. In a similar fashion, we can also decompose the weights and activation in each hidden layer as follows.
Suppose feature inputs of a layer can be represented as X = ∑_{m=0}^{M−1} Xm, and its weight matrix W can be decomposed as W = ∑_{n=0}^{N−1} Wn, where the Wn are non-zero weight components; then the activation Z can be decomposed as
Z = ∑_{m=0}^{M−1} ∑_{n=0}^{N−1} Am,n = ∑_{m=0}^{M−1} ∑_{n=0}^{N−1} (Xm Wn + B)    (4)
where B is the bias vector. When the rectified linear unit (ReLU) non-linear function is adopted, only those activations larger than zero in Eq. 4 are effective. For a given feature input Xk, if there are multiple Ak,n components that have all positive elements, the weight components Wn can be effectively combined to reduce the total number of parameters. The redundant weights components may be different for different input features. The existence of such redundant weight components may become the source of overfitting and overparameterization. The large ones of these redundant Ak,n components also tend to be working in the linear regime of the ReLU function, which effectively reduces the non-linear behavior of the network. To reduce overfitting and redundancy, the weights should have relatively small magnitudes working in the non-linear regime of the ReLU function, hence we have the following hypothesis regarding weight decaying (L2 regularization). Hypothesis 3: Limiting the magnitude of weights using weight decaying can reduce overfitting and overparameterization in deep neural networks when the ReLU activation function is used.
This hypothesis could explain why regularizing weights is an effective technique to improve training performance. Weight decaying should also be separated from loss regularization, which was first discussed in Loshchilov & Hutter (2018). In our experiments, however, their AdamW algorithm turns out to improve the training accuracy by increasing overfitting and overparameterization as shown in Section 3.2.
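As a concrete illustration of Hypothesis 3, the sketch below contrasts a plain gradient step with one that applies weight decaying (L2 regularization); the learning rate, decay constant, and random gradient are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
grad = rng.normal(size=(4, 4))     # stand-in for a backpropagated gradient
lr, wd = 0.1, 0.05

W_plain = W - lr * grad                      # no regularization
W_decayed = W - lr * (grad + wd * W)         # gradient of loss + (wd/2)*||W||^2

# The decayed update pulls each weight toward zero by an extra lr*wd*W term.
print(np.abs(W_plain).mean(), np.abs(W_decayed).mean())
```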
2.4 STRUCTURAL REGULARIZATION
The underlying assumption in the analysis of weight decaying is that the outputs of fully-connected subnetworks can be freely superimposed with each other. Thus, if the combination of subnetworks are restricted, overfitting issues could be mitigated. A common technique for this purpose is structural restriction of the subnetworks. Some examples are listed in the following.
1. Structural pruning - Various techniques to selectively remove connections from the whole network in training have been proposed and shown to reduce overfitting, such as Wan et al. (2013). In a sense, stochastic gradient descent (SGD) can also be regarded as adopting random structural pruning.
2. Weight sharing - By sharing weights and forcing regular network structures, neural networks become more effective and easier to train. Convolutional neural network (CNN) can be regarded as a prominent type which is often used as feature extraction layers.
3. Micro-architectural design - By adopting certain topology patterns between or within neural network layers, the resulting networks are confined to subsets of fully-connected networks, hence their overfitting issues are mitigated. Skip connections, for example, have been show to improve training speed and performance (He et al., 2016; Huang et al., 2017).
Many existing optimization techniques for training neural networks could be partially explained and further analyzed using the subnetwork analysis. The underlying principle is that by reducing
the initial functional space, the optimization problem becomes less difficult and easier to converge, which explains why micro-architecture design can have significant impact on the performance of neural networks. In Section 3, the effects of structural regularization are partially demonstrated by comparing the efficiency of different network architectures on the same dataset.
2.5 SNAPSHOT-BASED PRUNING
It is well known that neural networks can be made more efficient in terms of computation and storage requirements by pruning some of the unimportant weights. For deep neural networks, the iterative pruning and retraining procedure in Han et al. (2015) has been used for generating efficient neural networks for low-power applications. However, the iterative procedure requires extra computing power and processing time. Furthermore, the iterative procedure often requires manually finetuning pruning thresholds. We discuss two important aspects of pruning neural networks in the following.
1. Important weights - Deciding which weights are important is the first key issue. In general, weights with smaller magnitudes are considered unimportant and can be pruned, but this may not always be the case for different types of components in various network architectures. For example, shared weights in convolutional layers may be more important than weights in fully connected layers. Even the importance of weights in the same layer may not be correlated with their magnitudes.
2. Retrain requirement - After pruning its weights and connections, a pruned neural network usually needs to be adjusted. It is not clear which aspects of the network need to be modified. For the iterative pruning and retrain process, the weights and biases between initial and final iterations may be completely different so that it is hard to analyze the iterative retraining mechanism.
By analyzing experimental data from extensive empirical studies with different datasets and various architectures, we have the following observation regarding these two key aspects for pruning neural networks.
1. Weight distribution - If the neural network is well trained such that overfitting is minimized, the weight magnitude distribution correlates better with the weights’ importance. In other words, weights with smaller magnitudes around zero can be effectively pruned. Different layers and types of components may need different pruning thresholds but they can be easily adjusted using macro network attributes.
2. Essential network - The important weights together with their corresponding connections define the essence of a trained network; therefore, they should be kept unchanged as a snapshot. Only the biases need to be adjusted. Other non-weight parameters, such as batch normalization parameters, may also need to be adjusted as well.
In short, iterative pruning and retraining may not be necessary when a neural network is well-trained. The first step in obtaining efficient neural networks is to adopt efficient training techniques, such as LLR representation, weight decaying, and structural regularization. After pruning unimportant weights and connections from trained snapshots, the next step is to simply adjust remaining nonweight parameters using the BP algorithm. For most architectures, because most of the parameters are connection weights, adjusting the non-weight parameters requires much fewer iterations than the initial training process – usually only a few epochs are enough.
Conversely, the efficiency and quality of a trained network can be evaluated with the effectiveness of parameter pruning. Refinement of optimization algorithms may also be further examined using this pruning procedure. If a network is overparameterized, the performance of its pruned versions deteriorates dramatically as the total number of parameters is reduced. The key discovery here is the high correlation between overfitting and the efficiency of neural networks.
If trained neural networks are not efficient enough initially, combining iterative techniques with the proposed snapshot-based pruning method could be beneficial. For very large networks, it should be noted that using all the methods analyzed in this section may not be enough to yield efficient deep neural networks using a single-shot training procedure.
3 EXPERIMENTS
We empirically analyze the model efficiency trade-offs in deep neural networks as well as the overfitting issues in training neural networks to validate the subnetwork assumption and the analysis of various mitigating methods in previous section. We also demonstrate the effectiveness of the proposed snapshot-based pruning procedure in obtaining and evaluating efficient neural networks.
3.1 METHODOLOGY
The trade-offs between accuracy and total number of parameters are analyzed with various architectures and datasets using the following procedure: 1) the neural network is first trained using different hyper-parameter settings and output representation; 2) the trained networks are pruned using different threshold settings, and non-weight parameters are retrained using the BP algorithm; 3) test accuracy and total number of parameters of the pruned networks are averaged over at least 10 different experiment runs using the same pruning settings.
The pruning thresholds are set according to the standard deviation of weights’ magnitudes. Two different thresholds are used for convolutional layers and linear layers, respectively. For example, the threshold value for convolutional layers is calculated by multiplying the pruning setting with the standard deviation of weights in all convolutional layers. Weights with magnitudes lower than the threshold value are pruned. In all cases, simple linearly-spaced pruning settings are used without further fine-tuning. However, optimization of the pruning settings is possible by taking into account the structural attributes of given network architectures.
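A hedged PyTorch-style sketch of this pruning step follows. The multipliers, the module traversal, and the choice of freezing weights while leaving biases trainable are assumptions about one reasonable realization of the procedure described above, not the authors' code.

```python
import torch
import torch.nn as nn

def snapshot_prune(model, conv_mult=0.5, linear_mult=1.0):
    """Zero out small-magnitude weights and freeze the surviving ones.

    The threshold for each layer type is the pruning setting multiplied by
    the standard deviation of the weights of all layers of that type.
    Weights are then frozen; biases stay trainable for the short
    non-weight adjustment phase.
    """
    for cls, mult in ((nn.Conv2d, conv_mult), (nn.Linear, linear_mult)):
        modules = [m for m in model.modules() if isinstance(m, cls)]
        if not modules:
            continue
        all_w = torch.cat([m.weight.data.flatten() for m in modules])
        threshold = mult * all_w.abs().std()
        for m in modules:
            mask = (m.weight.data.abs() >= threshold).float()
            m.weight.data.mul_(mask)             # prune small weights
            m.weight.requires_grad_(False)        # keep the snapshot fixed
            if m.bias is not None:
                m.bias.requires_grad_(True)       # biases are re-adjusted

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(), nn.Linear(8 * 30 * 30, 10))
snapshot_prune(model)
print(sum(int((m.weight != 0).sum()) for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))))
```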
Other hyper-parameter settings and detailed analysis of the results are included in Appendix C. In the following, we focus on the efficiency trade-offs using different architectures on several datasets.
3.2 MNIST CLASSIFICATION
We first conduct experiments on the MNIST dataset (LeCun et al., 1998) using the LeNet-300-100 and LeNet-5 architectures (LeCun et al., 2015). In Figure 1 (left), the distribution of weights when training with LLR representation is compared with the case of softmax normalization. Using LLR representation yields a better distribution of weights – the probability of small weights around zero is higher and these weights can be pruned with less impact on the performance. Furthermore, weight decaying can push the weights aggressively towards zero as shown in Figure 1 (right). While softmax normalization primarily affects output layers, the ReLU function may cause overfitting in all layers. Therefore, the effect of weight decaying is more prominent and effective as shown in Figure 1. This observation is consistent with the analysis and hypotheses from Section 2.
Figure 2 shows the trade-off curves for test errors vs. total number of effective parameters for all experiment results on the MNIST dataset. Each curve represents 20 trained networks with the same training settings, each point represents the average total number of weights and average top-1 errors of the pruned networks for each of the 10 different pruning settings. We can see that using LLR representation instead of softmax normalization can reduce the total number
of parameters for the same accuracy requirement. Using weight decaying also significantly improve the efficiency of the trained networks. Using both methods yields the most efficient neural networks with better performance than the ones using the iterative pruning approach from Han et al. (2015), as shown in Table 1-2 in Appendix C.1. Compared with fully-connected networks, convolutional neural networks show better performance partially due to inherent structural regularization.
We found that the AdamW optimizer with weight decaying may increase training accuracy by increasing overfitting and yield less efficient networks, as demonstrated in Figure 3. Compared with previous results, the optimal pruned model sizes are dramatically increased and using weight decaying does not improve the efficiency of trained networks.
3.3 CIFAR-10 CLASSIFICATION
Several ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) architectures are used for experiments on the CIFAR-10 dataset. We compare ResNet with 20/32/56 layers to DenseNet with 40/60/100 layers and a growth rate k = 12. To reduce the total number of parameters, bottleneck layers are enabled for DenseNet. Further comparison and analysis of the overfitting issues are provided in Appendix C.2.
Figure 4 summarizes the experiment results on the CIFAR-10 dataset. A weight-decaying setting of 1e−4 is used with softmax normalization, which is the default setting to obtain the best accuracy results without pruning. For comparison purposes, experiments for LLR representation use a weight-decaying setting of 5e−4. Although using softmax yields the best training accuracy, the trained networks are overparameterized compared with those using LLR representation.
Therefore, the trade-off curves can be used to judge the efficiency of trained networks. Curves closer to the bottom-left region on the figure represent more efficient networks. Both ResNet and DenseNet architectures show similar trends in terms of efficiency. The trade-off curves for the same architecture with different initial model sizes seem to be bounded by a single theoretical curve. In the energy efficient region with small number of parameters, the error rate goes down rapidly with a small increase in the number of parameters; while in the high accuracy region, small
increases in accuracy require exponentially increasing numbers of parameters. In this regard, the ResNet-32 and the DenseNet-60 architectures may offer better alternative trade-offs in efficiency.
3.4 CIFAR-100 CLASSIFICATION
Figure 5 summarizes the experiment results using different ResNet and DenseNet architectures on the CIFAR-100 dataset. For all experiments using either softmax normalization or LLR representation, the same weight-decaying settings are used. In terms of efficiency trade-offs, we see a similar trend as before: linear increase in accuracy tends to require exponential increase in network capacity. We notice that the difference between softmax normalization and LLR representation is less prominent for the DenseNet architecture. One possible reason is that the BSP loss function is not yet fully optimized for the DenseNet architecture. Another possible reason is that the effects of weight decaying are more prominent than softmax normalization for the DenseNet architecture with larger initial model sizes.
4 RELATED WORK
Historically, deep neural networks using sigmoid or hyperbolic tangent activation functions were difficult to train using backpropagation (Rumerlhar, 1986) due to the vanishing gradient problem (Glorot & Bengio, 2010). The introduction of ReLU activation function (Nair & Hinton, 2010) greatly improves training speed for deep learning, yielding improved prediction accuracy in many new applications. However, using the ReLU activation function also tends to introduce overfitting issues as shown in this paper.
Regularization using modified loss functions can alleviate overfitting but with limited effects. Data augmentation is another method to reduce overfitting and improve generalization performance. Dropout, i.e., randomly selected units are dropped during training, was introduced in Hinton et al. (2012) and Srivastava et al. (2014) as an effective method to prevent overfitting. This idea was
extended to randomly dropping connections in Wan et al. (2013). Batch normalization is another method to reduce overfitting and improve training speed. Nevertheless, it is not completely clear how in principle these methods work, and they still can not fully eliminate overfitting issues in deep neural networks.
Overfitting is also related to the size of a neural network. Excessively large networks tend to introduce overfitting, and vice versa. It is also desirable to minimize model sizes for processing speed and systematic scaling purposes. A straightforward way for compressing over-parameterized neural networks is to prune trivial weights and retain only important connections, which is similar to the development of mammalian brain (Rauschecker, 1984). Pruning unimportant weights and connections after training is a common way to obtain efficient neural networks. Early work in Hassibi & Stork (1993); Hassibi et al. (1994); LeCun et al. (1989) uses the statistics from backpropagation to trim trained networks.
Recently, Han et al. (2015) proposed an iterative pruning and re-training procedure for efficient model compression. Similarly, pruning filters were proposed for convolutional networks in Li et al. (2016). However, iterative pruning and re-training is generally difficult, requiring extra processing time and resources. Furthermore, the iterative pruning process is opaque and requires try-and-error in selecting pruning thresholds for parameters in different layers. The lottery ticket hypothesis from Frankle & Carbin (2018) tries to explain why the iterative pruning procedure can work, but the empirical results therein are not conclusive enough.
Alternatively, one-shot pruning techniques try to train sparse neural networks directly without iterative operations (Lee et al., 2019; Zhang & Stadie, 2019; Wang et al., 2020). However, Liu et al. (2019) observe that previous state-of-the-art pruning techniques may not provide better performance compared with randomly initialized networks. Their observations could be partially explained using our subnetwork analysis on structural regularization effects.
5 DISCUSSION AND FUTURE WORK
In this paper, we identify several important issues affecting overfitting in training deep neural networks. The key finding is that reducing overfitting is critical for obtaining efficient neural networks. It is demonstrated with several datasets and network architectures that a simple snapshot-based pruning procedure can generate efficient deep neural networks. However, more empirical validation results using other neural network architectures and larger datasets are required to further validate the proposed approach. Quantizing the parameters will further compress neural network models, which is not considered here for brevity but could be a natural extension in future work.
The snapshot-based retrain method can also be useful in real-world applications, where we only need to store pruned weights and connections, while biases and other optimization parameters can be restored using new datasets. This could be a very important optimization in cloud and edge computing applications. For transfer learning, neural networks trained with old datasets may be effectively retrained using new datasets, given that the underlying neural models are similar in nature.
We further analyze the efficiency trade-offs in training deep neural networks. For a given optimization problem with given objective and dataset, we should consider structural information, in additional to weights, as representation cost of trained networks. For the small-scale network architectures used in this study, few extra parameters are needed to specify the network topology and connections. However, for large-scale networks, the parameters for describing the network topology and connections should also be included in the representation cost of models. When we compare neural network performance, domain-specific knowledge for designing network architectures should be considered as additional information. We hypothesize that there exist lower-bounds of total number of bits for representing parameters and connections with regard to given performance metrics for an optimization problem. Rather than focusing on improving the sole accuracy metric with more complex network architectures, we should also explore the trade-offs between accuracy and total number of representation bits when comparing different network architectures and implementations.
Several hypotheses regarding training efficient deep neural networks are put forward and empirically validated with experiments. Although rigorous proofs are not provided, we hope that they will encourage further discussion and research efforts on the trade-offs between model performance and complexity.
6 REPRODUCIBILITY STATEMENT
The authors of this paper regard it critical to ensure all empirical results in this paper can be consistently reproduced. For each experiment case with different parameters and optimization settings, the results are generated with at least 10 runs with different random seed initialization. We also crosscheck our results with different references. Furthermore, for some of the experiments, we have verified the results using several machine learning frameworks including PyTorch, TensorFlow, and Matlab Deep Learning Toolbox. Finally, we will publish the source codes for this work on GitHub and provide bug fixes and updates.
A THE SOFTMAX PROPERTIES
A.1 PROOF OF THE SOFTMAX PROPERTY
Given two linearly correlated vectors Y0 = (y0,0, y0,1, · · · , y0,K−1), Y1 = (y1,0, y1,1, · · · , y1,K−1) of length K such that Y1 = Y0 + β1, where β1 is a scalar, each component of the softmax normalization vector Q = softmax(Y0 + Y1) can be calculated as
qn = exp(y0,n + y1,n) / ∑_k exp(y0,k + y1,k)
   = exp(2y0,n + β1) / ∑_k exp(2y0,k + β1)
   = [exp(β1) exp(2y0,n)] / [exp(β1) ∑_k exp(2y0,k)]
   = exp(2y0,n) / ∑_k exp(2y0,k)
If we add a third vector Y2 = (y2,0, y2,1, · · · , y2,K−1) such that Y2 = Y0 + β2, then each component of the softmax normalization vector Q = softmax(Y0 + Y1 + Y2) can be calculated as
qn = exp(2y0,n + β1 + y2,n) / ∑_k exp(2y0,k + β1 + y2,k)
   = exp(3y0,n + β1 + β2) / ∑_k exp(3y0,k + β1 + β2)
   = [exp(β1 + β2) exp(3y0,n)] / [exp(β1 + β2) ∑_k exp(3y0,k)]
   = exp(3y0,n) / ∑_k exp(3y0,k)
Using generalization for Z = ∑_{m=0}^{M−1} Ym, where the Ym are linear offset versions of each other, such that Y0 = Y1 − β1 = Y2 − β2 = · · · = YM−1 − βM−1, we have
softmax(Z) = softmax(MY0) = softmax(MYm)
A.2 PROOF OF THE GENERALIZED SOFTMAX PROPERTY
Given two vectors Y0 = (y0,0, y0,1, · · · , y0,K−1), Y1 = (y1,0, y1,1, · · · , y1,K−1) of length K such that Y1 = Y0 + β1, where β1 = (β1,0, β1,1, · · · , β1,K−1). Without any loss of generality, we assume that Y0,0 ≤ Y0,1 ≤ · · · ≤ Y0,K−1 = Ymax. Define the maximal variation of β1 as δ1 such that |β1,k − β1,n| ≤ δ1 for any k, n ∈ Z and 0 ≤ k, n ≤ K − 1. If δ1 is insignificant relative to Y0, i.e., that exp(δ1) = o(exp(Ymax)), (5) where o(·) is the little-o notation, we define Y0, Y1 as linearly semi-correlated vectors. Then, each component of the softmax normalization vectorQ = softmax(Y0 + Y1) can be calculated as
qn = exp(y0,n + y1,n) / ∑_k exp(y0,k + y1,k)
   = exp(2y0,n + β1,n) / ∑_k exp(2y0,k + β1,k)
   = [exp(β1,n) exp(2y0,n)] / [exp(β1,n) ∑_k exp(2y0,k + β1,k − β1,n)]
   = exp(2y0,n) / ∑_k exp(2y0,k + β1,k − β1,n).
Note that the denominator of qn is mainly determined by the largest components of Y0, and thus we have the following approximation
qn ≈ exp(2y0,n) / ∑_k exp(2y0,k + δ1) ≈ exp(2y0,n) / ∑_k exp(2y0,k)
If we add a third linearly semi-correlated vector Y2 = (y2,0, y2,1, · · · , y2,K−1) such that Y2 = Y0 + β2, where β2 = (β2,0, β2,1, · · · , β2,K−1) and |β2,k − β2,n| ≤ δ2 for any k, n ∈ Z and 0 ≤ k, n ≤ K − 1, then each component of the softmax normalization vector Q = softmax(Y0 + Y1 + Y2) can be calculated as
qn = exp(2y0,n + β1,n + y2,n) / ∑_k exp(2y0,k + β1,k + y2,k)
   = exp(3y0,n + β1,n + β2,n) / ∑_k exp(3y0,k + β1,k + β2,k)
   = [exp(β1,n + β2,n) exp(3y0,n)] / [exp(β1,n + β2,n) ∑_k exp(3y0,k + β1,k − β1,n + β2,k − β2,n)]
   ≈ exp(3y0,n) / ∑_k exp(3y0,k + δ1 + δ2) ≈ exp(3y0,n) / ∑_k exp(3y0,k)
Using generalization for Z = ∑_{m=0}^{M−1} Ym, where the Ym are linearly semi-correlated with each other and Y0 = Y1 − β1 = Y2 − β2 = · · · = YM−1 − βM−1, we have
softmax(Z) ≈ softmax(MY0) ≈ softmax(MYm)
Note that the above relation holds as long as the variations in βm is insignificant according to (5), while the magnitudes of βm do not matter.
B LOSS FUNCTIONS FOR LLR REPRESENTATION
B.1 BIPOLAR SOFTPLUS LOSS
Given a set of N neural network outputs {Yn} and corresponding targets {Tn}, the bipolar softplus (BSP) loss is defined as
BSP(Y, T) = (1 / (βN)) ∑_{n=0}^{N−1} log(1 + e^{−β sgn(Tn) Yn})    (6)
where β is a constant and sgn(x) returns the sign of x as
sgn(x) = 1 if x > 0, 0 if x = 0, −1 if x < 0    (7)
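A small numpy sketch of Eqs. (6)-(7) is given below; it is a direct transcription of the definition, with β and the example values chosen arbitrarily.

```python
import numpy as np

def bsp_loss(y, t, beta=1.0):
    """Bipolar softplus loss of Eq. (6) for LLR-valued outputs y and targets t."""
    y, t = np.asarray(y, dtype=np.float64), np.asarray(t, dtype=np.float64)
    s = np.sign(t)                        # Eq. (7): sgn(t) in {-1, 0, 1}
    return np.mean(np.log1p(np.exp(-beta * s * y))) / beta

# Confident, correct LLR outputs give a small loss; wrong signs are penalized.
print(bsp_loss(y=[ 4.0, -4.0], t=[ 1.0, -1.0]))   # small
print(bsp_loss(y=[-4.0,  4.0], t=[ 1.0, -1.0]))   # large
```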
C EXPERIMENT SETTINGS AND DETAILED RESULTS
C.1 MNIST CLASSIFICATION
The MNIST dataset of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. Each image contains 28× 28 monochrome pixels for one digit. The pixel values are converted to range (0, 1) with dataset normalization.
Two architectures are used in the experiments: 1) the LeNet-300-100 is a three-layer fully connected network with 300 and 100 hidden nodes, 2) the LeNet-5 architecture has two convolutional layers with 20 and 50 filters and two fully connected layers with 800 and 500 hidden nodes.
Data augmentation is used to randomly shift each image horizontally and vertically by 0 or 1 pixel. The batch size for training is set to 128, and the Adam optimizer (Kingma & Ba, 2014) is used with default parameters: α = 0.001, β1 = 0.9, β2 = 0.999,and ε = 10−8. A weight decaying setting of 4e-4 is used for both the LeNet-300-100 and LeNet-5 architectures in corresponding cases. At least 20 runs with random seeds are carried out for each experiment case.
Table 1 summarizes the top-1 error rates, total number of parameters after pruning, and compression rate for different methods with the LetNet-300-100 architecture. The output layer uses a fixed pruning setting of 0.75, and the hidden layers use pruning settings θk = 1.0 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 1 are obtained using the largest pruning threshold.
Compared with LLR representation, results using softmax normalization have higher training errors and test errors. This indicates that LLR representation can mitigate the overfitting issues and improve accuracy in both training and testing. Figure 6 (left) also compares the effects of overfitting between softmax normalization and LLR representation with the LetNet-300-100 architecture. Using both LLR representation and weight decaying can yield more efficient networks than the iterative method from Han et al. (2015).
Table 2 summarizes the top-1 error rates, total number of parameters after pruning, and compression rate for different methods with the LetNet-5 architecture. Fully-connected layers use a fixed pruning setting of 1.25. For convolutional layers, the pruning settings are set as θk = 0.5+0.1×k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 2 are obtained using the largest pruning threshold. Weight sharing and inherent structural regularization of CNN further mitigate the overfitting issues in training. Using LLR representation and weight decaying, the accuracy of the pruned network is even better than the accuracy in training and testing. The snapshot-based method generates efficient networks with 31K parameters and the state-of-the-art
performance, better than the ones using the iterative method from Han et al. (2015). Figure 6 (right) also compares the effects of overfitting between softmax normalization and LLR representation.
For comparison purpose, the results using the AdamW algorithm from Loshchilov & Hutter (2018) are summarized in Table 3 and 4 for the LeNet-300-100 and LetNet-5 architectures, respectively. The accuracy differences between training and testing are always larger than previous results using weight decaying and the original ADAM algorithm. The results show that using the AdamW algorithm may generate overparameterized networks. Thus, the snapshot-based pruning method can be a valuable tool for evaluating optimization algorithms.
C.2 CIFAR-10 CLASSIFICATION
The CIFAR-10 datasets (Krizhevsky et al., 2009) consists of 50,000 images in the training set and 10,000 images in the test set. Each sample contains 32× 32 color images drawn from 10 classes. The data batch size of 128 is used for training. For data augmentation, the standard mirroring and shifting scheme is used. The SGD optimizer is used with initial learning rate of 0.05 and momentum of 0.9, the learning rate is multiplied by 0.1 after 100 and 150 epochs. Weight decaying settings of 6e-4 and 5e-4 are used for the ResNet and DenseNet architectures, respectively. Initial training runs use 200 epochs, while for non-weight parameter adjustment after pruning only 20 epochs are needed. At least 10 runs with random seeds are carried out for each experiment case.
The top-1 error rates, total number of parameters after pruning, and compression ratio for CIFAR-10 dataset with the ResNet architectures are summarized in Table 5. Fully-connected layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5+0.05×k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 5 are
obtained using the largest pruning threshold. For all cases, using LLR representation yields better performance and less parameters after pruning. For the ResNet-56 case, using LLR representation with weight decaying reduces the total number of parameters to about 200K without significant loss of performance.
The results in Table 6 show better performance for DenseNet architectures as compared with the ResNet architectures. Fully-connected layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 6 are obtained using the largest pruning threshold.
For DenseNet-60 with less than 90K parameters, the performance is comparable to ResNet-56 with 200K parameters. Therefore, overfitting issues with the DenseNet architecture are less prominent than the ResNet architecture. Figure 7 summarize the efficiency trade-offs for both ResNet and DenseNet architectures. Compared with ResNet architecture, the initial DenseNet model sizes are larger, the effects of weight decaying are more prominent than the softmax normalization, and the difference between softmax normalization and LLR representation for DenseNet is smaller.
C.3 CIFAR-100 CLASSIFICATION
The CIFAR-100 datasets (Krizhevsky et al., 2009) consists of 50,000 images in the training set and 10,000 images in the test set. Each sample contains 32 × 32 color images drawn from 100 classes. The 100 classes are grouped into 20 superclasses. The data batch size of 128 is used for training.
For data augmentation, the standard mirroring and shifting scheme is used. The SGD optimizer is used with initial learning rate of 0.05 and momentum of 0.9, the learning rate is multiplied by 0.1 after 100 and 150 epochs. A weight decaying setting of 5e-4 is used for both the ResNet and DenseNet architectures. Initial training runs use 200 epochs, while for non-weight parameter adjustment after pruning only 20 epochs are needed. At least 20 runs with random seeds are carried out for each experiment case.
The top-1 error rates for trained and pruned networks, total number of parameters after pruning, and compression ratio for CIFAR-100 dataset are summarized in Table 7 and 8. Fully-connected
layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 7 and 8 are obtained using the median pruning threshold.
1. What is the main contribution of the paper, and what are the proposed solutions to address the problem of overparameterization and overfitting in neural networks?
2. What are the strengths and weaknesses of the paper, particularly regarding its motivations, ideas, and results?
3. Do you have any concerns or questions about the mathematical breakdown and empirical results presented in the paper?
4. How does the paper relate to prior works in the field, and how does it differentiate itself from them?
5. Are there any limitations or areas for improvement in the paper that the authors could explore further?
Summary Of The Paper
The authors present hypotheses on the causes of overparameterization and overfitting in neural networks in support of smaller, more efficient networks. A discussion of how subnetworks within larger networks lead to overfitting and how softmax normalization can contribute to these problems is presented. A log likelihood ratio activation and corresponding loss are proposed to use in place of softmax outputs and a proposal to use weight decay to promote smaller weight magnitudes is also made. Snapshot based adjustment of non-weight parameters in the networks is further recommended after pruning to smaller network sizes. These proposals are explored through a brief mathematical breakdown and empirical results on the MNIST and CIFAR datasets for fully connected and convolutional networks.
Review
The paper provides strong motivations for composing networks that perform more generally and with fewer resources. The ideas proposed are generally sound and warrant further investigation. The results presented on MNIST and CIFAR10/100 highlight the benefits of the proposed log-likelihood ratio as a target over softmax due to a reduction in parameter count with improved accuracy.
While small datasets are necessary to get an idea of how well the proposed techniques work, I'd prefer to see results beyond MNIST / CIFAR to provide additional support of the methods. I believe the results presented in this paper are further too limited to draw demonstrable conclusions as made by the authors. For example, the authors claim that relu activations lead to overfitting and that this is exposed in the paper -- I struggle to see any direct evidence of this from the results presented. The authors note that "overfitting caused by the ReLU function is more prominent than Softmax normalization" but the context around this claim (Figure 1) does not support that -- which networks used relu vs. softmax, where was that change made, and ultimately how did performance change on those? I'm left wondering if the distributions presented in Figure 1 left are substantially different enough to cause notable performance differences / parameter counts.
There is a large issue with the proof presented in A.1 that calls into question the authors' support for softmax normalization being a problem causing overparameterization. The normalization summation in the divisor ∑_k exp(y0,k + y1,k) = ∑_k exp(2y0,k + β1,k) is indexed on the same k as qk, which I believe has led to confusion on pulling out the βk,1 term for removal. If we instead use j as the index it is clear that β1,j cannot be pulled from ∑_j exp(2y0,j) exp(β1,j) to cancel β1,k in the numerator without modifying the other terms in the summation. This mistake has led to oversimplification and removal of constants which I do not think is correct and is repeated further into the second section of the proof.
It is not immediately clear to me how the work of Mishkin and Matas (2016) as referenced by the authors supports the idea that subnetworks are born of non-deterministic effects caused by weight initialization.
Evidence of Loshchilov & Hutter (2018)'s AdamW algorithm increasing overparameterization would strengthen the anecdote at the end of S2.3.
The paper claims A.3 contains parameters (maybe thresholds?) for pruning but I cannot find these thresholds in the appendix. I understand they may be proportional to the variance of trained weights but what is that proportion and how does it need to change for differing types of networks? Was this empirically derived?
Figure 3 right is a repeat of left as mentioned by the authors on the forum but unfortunately I could not see the updated figure.
Weight sharing and reduction as a structural element such as convolutional networks clearly has benefits in reducing overfitting and providing trainable networks (hence deep learning), but there isn't any additional insight provided here as to what types of problems the sharing is necessary for.
Minor grammatical tweaks are necessary in this paper and a proofread needs to be performed. Noted issues: S2.2: paragraph “linear correlated” -> “linearly correlated” “representation combining” -> “representation combined” |
ICLR | Title
On the Efficiency of Deep Neural Networks
Abstract
The efficiency of neural networks is essential in large-scale deployment scenarios such as mobile applications, internet of things, and edge computing. For a given performance requirement, an efficient neural network should use the simplest network architecture with a minimal number of parameters and connections. In this paper, we introduce a framework to analyze and obtain efficient neural networks. In summary, our main contributions are three-fold. Our first contribution is the subnetwork hypothesis to address overfitting issues and help explain the effectiveness of several key techniques in training efficient networks: 1) softmax normalization in output layers may be one major cause of overparameterization; 2) using log likelihood ratio representation in output layers can reduce overfitting; 3) weight decaying and structural regularization can also effectively reduce overfitting. The second contribution is a simple and effective snapshot-based procedure to prune a well-trained network that minimizes overfitting – pruning unimportant weights and connections first, and simply adjust remaining non-weight parameters using the backpropagation algorithm. Besides, the snapshot-based pruning method can also be used to evaluate the efficiency of trained networks. Finally, we hypothesize that there exist lower bounds of the total number of bits for representing parameters and connections regarding performance metrics for a given optimization problem. Rather than focusing on improving the sole accuracy metric with more complex network architectures, it is also important to explore the trade-offs between accuracy and the total number of representation bits, when comparing different network architectures and implementations.
1 INTRODUCTION
Deep learning has achieved tremendous success in large-scale machine learning systems, such as big-data analytics (Najafabadi et al., 2015), billion-parameter generative models for natural language processing (Brown et al., 2020; Radford et al., 2019), and computer vision for self-driving cars (Grigorescu et al., 2020). A general trend for recent success is the use of neural networks of ever-increasing model sizes and their exponentially increasing computation power requirement. Training these gigantic neural network models require tens of thousands parallel computing units inside dedicated computer clusters with extremely high transient storage capacity and data synchronization bandwidth. Consequently, some of the state-of-the-art models are only accessible to very few researchers in the machine learning community.
On the other hand, large-scale deployment of machine learning applications in low-power scenarios, such as mobile applications, internet-of-things (IoT), and edge computing, has put more stringent requirements on the efficiency of neural network models. For a given problem and performance metrics, efficient neural network models should have a minimal number of weights and connections, simple network topology and architecture suitable for low-power computing devices, and low data bandwidth and transient storage requirements. It is important to investigate the model efficiency problem to bridge the performance gap between petascale high-end models and low-power neural architectures for large-scale deployment. However, methods and principles for obtaining efficient deep neural networks have not yet been thoroughly studied.
In this paper, we introduce a framework to analyze and obtain efficient deep neural networks. Especially, we identify several key issues in training efficient deep neural networks and propose a new model compression procedure to prune redundant weights and connections. One important in-
sight of our study is the high correlation between overfitting and model efficiency. Overfitting may improve training accuracy, but it can cause overparameterization. In Section 2, we show that softmax output layers can introduce non-deterministic effects to the backpropagation algorithm, yielding redundant subnetworks with exploding numbers of parameters. To solve this problem, we propose the log likelihood ratio (LLR) representation for output layers. We also investigate potential mechanisms for weight decaying and structural regularization to reduce overfitting. Furthermore, we propose a simple and effective snapshot-based pruning procedure to obtain efficient deep neural networks. We empirically validate this novel approach in Section 3 using various deep learning architectures including LeNet, ResNet, and DenseNet, on the MNIST, CIFAR-10, and CIFAR-100 datasets. Based on the empirical results, we further discuss the model efficiency regarding information cost of model representation. Section 4 reviews prior work in regularization, overfitting, and model compression, followed by Section 5 that concludes the paper.
2 EFFICIENT NEURAL NETWORKS
A fundamental assumption of our analysis is that a complex neural network can be decomposed into subnetworks that are responsible for different operation modes. In other words, the complex nonlinear function of a neural network can be decomposed into groups of sub-functions. Each group of sub-functions represents one mode of operation. In this way, the efficiency of a neural network highly depends on the composition and correlation between these groups of sub-functions. Thus, overfitting may be viewed as forming redundant subnetworks that reduce the efficiency of trained networks.
In this section, we shows that a critical step for obtaining efficient neural networks is to eliminate redundant subnetworks by minimizing overfitting. We first analyze the overfitting issues caused by redundant subnetworks and describe potential mitigating mechanisms. Several hypotheses presented in this section will also be empirically validated using experiments in Section 3. Finally, we introduce a novel snapshot-based procedure to obtain efficient deep neural networks by pruning their unimportant weights and connections. This procedure is also used to analyze the efficiency of trained networks.
2.1 SOFTMAX NORMALIZATION
The softmax function, a.k.a. softargmax, is a normalization function often used as the last activation function of a neural network (Bishop, 2006). Let Z = {z0, z1, · · · , zi, · · · } represents the input vector, the softmax output vectorQ = {q0, q1, · · · , qi, · · · } is defined as
qi = exp(zi)∑ k exp(zk)
(1)
where qi ∈ [0, 1] and ∑ i qi = 1. Thus, the normalized output vector can be interpreted as marginal probabilities. The softmax output can be naturally combined with the cross entropy function J = − ∑ i pi log qi, where pi is the target probability. The derivative of J with respect to zi takes a simple form of qi − pi (Goodfellow et al., 2016). The simple probabilistic interpretation and derivative computation make the combination of softmax normalization and cross entropy loss a pervasive choice for multinomial classification problems. However, potential issues using softmax normalization with the backpropagation (BP) algorithm has not been fully investigated.
Suppose a neural network G can be decomposed into two or more smaller subnetworks G = {G0,G1, · · · ,Gm, · · · } with the same feature input X. The final activation Z is the superposition of the subnetwork activation before the softmax normalization in the output layer
Z = M∑ m=0 Ym = M∑ m=0 fm(X) (2)
where fm is the non-linear function representing subnetwork Gm. The decomposition is done according to the final activation without considering intermediate hidden layers. The softmax normalization operation has the following properties regarding the relationship between subnetwork activations (see Appendix A).
1. If the subnetwork activations are linear offset versions of each other, such that Y0 = Y1 − β1 · · · = Ym − βm · · · , the normalization result of the whole network is equivalent to applying the softmax function to the activation of any subnetwork scaled by M : Q = softmax(MYm). Note that the offset between subnetwork activation Ym has no impact on the softmax output. If the activations Ym are linearly semi-correlated, the generalized softmax property is applicable, i.e., that Q ≈ softmax(MYm).
2. If the subnetwork activations are scaled versions of each other, such that Y0 = α1Y1 · · · = αkYk · · · and 1 ≥ α1 ≥ α2 ≥ · · · ≥ αk · · · , the normalization operation is equivalent to applying the softmax function to the scaled principal subnetwork: Q = softmax(SY0), where S = 1+ α1 + α2 + · · · . The softmax normalization allows proportional integration of information. A single subnetwork that has very strong activation (higher prediction probabilities) can dominate over other subnetworks with weak activations. If there are no dominant subnetworks, the total number of contributing subnetworks may be large and the whole network tends to be overparameterized.
In short, the softmax function can act as a super combinator for different modes of the neural network, summing and amplifying weak subnetwork activations. This could partially explain why deep neural networks are so expressive that they are suitable for diverse types of problems. However, when there are redundant subnetworks that produce linearly correlated activations, the softmax normalization function make them indistinguishable from each other. The linearly correlated subnetworks potentially lead to overfitting and overparameterization. We have the following hypothesis regarding the effects of such redundant subnetworks: Hypothesis 1: For deep neural networks, the existence of redundant subnetworks combining with softmax normalization can lead to overfitting and overparameterization when training with the backpropagation algorithm.
The derivative of the cross entropy loss is linear with regard to the softmax output Q and target P , and softmax normalization makes it impossible to differentiate between the effects of different subnetworks, the BP algorithm thus will fine-tune all the parameters without penalizing any individual subnetwork. Therefore, the initialization of weights may create redundant subnetworks that have non-deterministic effects on the training process. For example, Mishkin & Matas (2016) demonstrated that initialization of weights can affect test accuracy. Such behaviors and the existence of redundant subnetworks will be validated from empirical results in Section 3.
2.2 LLR REPRESENTATION
The softmax normalization is mainly used to convert neural network outputs to probabilities. However, the softmax normalization allows linearly correlated subnetwork activations and potentially introduces overfitting. Therefore, it is desirable to avoid softmax normalization in output layers. It turns out that using the log likelihood ratio (LLR) representation in output layers can avoid normalization and overfitting issues. Given a binary random variable X and P1(X) = {probability X is true}, the LLR for X can be defined as
LLR(X) = log P1(X)
1− P1(X) (3)
Since neural networks can model arbitrary non-linear functions, we can adopt LLR representation for each component of the outputs Y and target labels T . For both multi-class and multi-label classification, the problem can be regarded as multiple binary regression problems adopting the LLR representation for each class. Therefore, output normalization across different classes is not needed, but loss functions need to be changed accordingly – we introduce the bipolar softplus (BSP) loss function as defined in Appendix B.1. We demonstrate that the LLR representation combined with the BSP loss function does not need normalization and avoids the introduction of redundant subnetworks. The choice of loss functions is not mandatory and there may be better alternative loss functions. The optimization of loss functions will be addressed in future study. In this paper, we use empirical results to demonstrate the effectiveness and behavior of this novel scheme. We introduce the following hypothesis regarding the LLR representation: Hypothesis 2: For classification problems with deep neural networks, using the LLR representation for the output layer and the target labels can reduce overfitting and avoid overparameterization compared with softmax normalization.
It is worth emphasizing that the LLR representation has clear physical meanings, which could help the explainability of neural networks. LLR values are symmetrical and centered around zero, which can be regarded as a natural normalization point. Note that LLR values have range (−∞,+∞) and a large magnitude means higher confidence in prediction. Thus, by controlling the LLR magnitude of the target labels, we can introduce regularization to network outputs.
2.3 WEIGHT DECAYING
In previous discussion, potential issues in normalization and representation are analyzed by decomposing the activation in the output layer. In a similar fashion, we can also decompose the weights and activation in each hidden layer as follows.
Suppose feature inputs of a layer can be represented as X = ∑M−1 m=0 Xm, and its weight matrix
W can be decomposed as W = ∑N n=0Wn, where Wn is non-zero weight components, then the activation Z can be decomposed as
Z = M−1∑ m=0 N−1∑ n=0 Am,n = M−1∑ m=0 N−1∑ n=0 (XmWn +B) (4)
where B is the bias vector. When the rectified linear unit (ReLU) non-linear function is adopted, only those activations larger than zero in Eq. 4 are effective. For a given feature input Xk, if there are multiple Ak,n components that have all positive elements, the weight components Wn can be effectively combined to reduce the total number of parameters. The redundant weights components may be different for different input features. The existence of such redundant weight components may become the source of overfitting and overparameterization. The large ones of these redundant Ak,n components also tend to be working in the linear regime of the ReLU function, which effectively reduces the non-linear behavior of the network. To reduce overfitting and redundancy, the weights should have relatively small magnitudes working in the non-linear regime of the ReLU function, hence we have the following hypothesis regarding weight decaying (L2 regularization). Hypothesis 3: Limiting the magnitude of weights using weight decaying can reduce overfitting and overparameterization in deep neural networks when the ReLU activation function is used.
This hypothesis could explain why regularizing weights is an effective technique to improve training performance. Weight decaying should also be separated from loss regularization, which was first discussed in Loshchilov & Hutter (2018). In our experiments, however, their AdamW algorithm turns out to improve the training accuracy by increasing overfitting and overparameterization as shown in Section 3.2.
2.4 STRUCTURAL REGULARIZATION
The underlying assumption in the analysis of weight decaying is that the outputs of fully-connected subnetworks can be freely superimposed with each other. Thus, if the combination of subnetworks are restricted, overfitting issues could be mitigated. A common technique for this purpose is structural restriction of the subnetworks. Some examples are listed in the following.
1. Structural pruning - Various techniques to selectively remove connections from the whole network in training have been proposed and shown to reduce overfitting, such as Wan et al. (2013). In a sense, stochastic gradient descent (SGD) can also be regarded as adopting random structural pruning.
2. Weight sharing - By sharing weights and forcing regular network structures, neural networks become more effective and easier to train. Convolutional neural network (CNN) can be regarded as a prominent type which is often used as feature extraction layers.
3. Micro-architectural design - By adopting certain topology patterns between or within neural network layers, the resulting networks are confined to subsets of fully-connected networks, hence their overfitting issues are mitigated. Skip connections, for example, have been show to improve training speed and performance (He et al., 2016; Huang et al., 2017).
Many existing optimization techniques for training neural networks could be partially explained and further analyzed using the subnetwork analysis. The underlying principle is that by reducing
the initial functional space, the optimization problem becomes less difficult and easier to converge, which explains why micro-architecture design can have significant impact on the performance of neural networks. In Section 3, the effects of structural regularization are partially demonstrated by comparing the efficiency of different network architectures on the same dataset.
2.5 SNAPSHOT-BASED PRUNING
It is well known that neural networks can be made more efficient in terms of computation and storage requirements by pruning some of the unimportant weights. For deep neural networks, the iterative pruning and retraining procedure in Han et al. (2015) has been used for generating efficient neural networks for low-power applications. However, the iterative procedure requires extra computing power and processing time. Furthermore, the iterative procedure often requires manually finetuning pruning thresholds. We discuss two important aspects of pruning neural networks in the following.
1. Important weights - Deciding which weights are important is the first key issue. In general, weights with smaller magnitudes are considered unimportant and can be pruned, but this may not always be the case for different types of components in various network architectures. For example, shared weights in convolutional layers may be more important than weights in fully connected layers. Even the importance of weights in the same layer may not be correlated with their magnitudes.
2. Retrain requirement - After pruning its weights and connections, a pruned neural network usually needs to be adjusted. It is not clear which aspects of the network need to be modified. For the iterative pruning and retrain process, the weights and biases between initial and final iterations may be completely different so that it is hard to analyze the iterative retraining mechanism.
By analyzing experimental data from extensive empirical studies with different datasets and various architectures, we have the following observation regarding these two key aspects for pruning neural networks.
1. Weight distribution - If the neural network is well trained such that overfitting is minimized, the weight magnitude distribution correlates better with the weights’ importance. In other words, weights with smaller magnitudes around zero can be effectively pruned. Different layers and types of components may need different pruning thresholds but they can be easily adjusted using macro network attributes.
2. Essential network - The important weights together with their corresponding connections define the essence of a trained network; therefore, they should be kept unchanged as a snapshot. Only the biases need to be adjusted. Other non-weight parameters, such as batch normalization parameters, may also need to be adjusted as well.
In short, iterative pruning and retraining may not be necessary when a neural network is well-trained. The first step in obtaining efficient neural networks is to adopt efficient training techniques, such as LLR representation, weight decaying, and structural regularization. After pruning unimportant weights and connections from trained snapshots, the next step is to simply adjust remaining nonweight parameters using the BP algorithm. For most architectures, because most of the parameters are connection weights, adjusting the non-weight parameters requires much fewer iterations than the initial training process – usually only a few epochs are enough.
Conversely, the efficiency and quality of a trained network can be evaluated with the effectiveness of parameter pruning. Refinement of optimization algorithms may also be further examined using this pruning procedure. If a network is overparameterized, the performance of its pruned versions deteriorates dramatically as the total number of parameters is reduced. The key discovery here is the high correlation between overfitting and the efficiency of neural networks.
If trained neural networks are not efficient enough initially, combining iterative techniques with the proposed snapshot-based pruning method could be beneficial. For very large networks, it should be noted that using all the methods analyzed in this section may not be enough to yield efficient deep neural networks using a single-shot training procedure.
3 EXPERIMENTS
We empirically analyze the model efficiency trade-offs in deep neural networks as well as the overfitting issues in training neural networks to validate the subnetwork assumption and the analysis of various mitigating methods in previous section. We also demonstrate the effectiveness of the proposed snapshot-based pruning procedure in obtaining and evaluating efficient neural networks.
3.1 METHODOLOGY
The trade-offs between accuracy and total number of parameters are analyzed with various architectures and datasets using the following procedure: 1) the neural network is first trained using different hyper-parameter settings and output representation; 2) the trained networks are pruned using different threshold settings, and non-weight parameters are retrained using the BP algorithm; 3) test accuracy and total number of parameters of the pruned networks are averaged over at least 10 different experiment runs using the same pruning settings.
The pruning thresholds are set according to the standard deviation of weights’ magnitudes. Two different thresholds are used for convolutional layers and linear layers, respectively. For example, the threshold value for convolutional layers is calculated by multiplying the pruning setting with the standard deviation of weights in all convolutional layers. Weights with magnitudes lower than the threshold value are pruned. In all cases, simple linearly-spaced pruning settings are used without further fine-tuning. However, optimization of the pruning settings is possible by taking into account the structural attributes of given network architectures.
Other hyper-parameter settings and detailed analysis of the results are included in Appendix C. In the following, we focus on the efficiency trade-offs using different architectures on several datasets.
3.2 MNIST CLASSIFICATION
We first conduct experiments on the MNIST dataset (LeCun et al., 1998) using the LeNet-300-100 and LeNet-5 architectures (LeCun et al., 2015). In Figure 1 (left), the distribution of weights when training with LLR representation is compared with the case of softmax normalization. Using LLR representation yields a better distribution of weights – the probability of small weights around zero is higher and these weights can be pruned with less impact on the performance. Furthermore, weight decaying can push the weights aggressively towards zero as shown in Figure 1 (right). While softmax normalization primarily affects output layers, the ReLU function may cause overfitting in all layers. Therefore, the effect of weight decaying is more prominent and effective as shown in Figure 1. This observation is consistent with the analysis and hypotheses from Section 2.
Figure 2 shows the trade-off curves for test errors vs. total number of effective parameters for all experiment results on the MNIST dataset. Each curve represents 20 trained networks with the same training settings, each point represents the average total number of weights and average top-1 errors of the pruned networks for each of the 10 different pruning settings. We can see that using LLR representation instead of softmax normalization can reduce the total number
of parameters for the same accuracy requirement. Using weight decaying also significantly improve the efficiency of the trained networks. Using both methods yields the most efficient neural networks with better performance than the ones using the iterative pruning approach from Han et al. (2015), as shown in Table 1-2 in Appendix C.1. Compared with fully-connected networks, convolutional neural networks show better performance partially due to inherent structural regularization.
We found that the AdamW optimizer with weight decaying may increase training accuracy by increasing overfitting and yield less efficient networks, as demonstrated in Figure 3. Compared with previous results, the optimal pruned model sizes are dramatically increased and using weight decaying does not improve the efficiency of trained networks.
3.3 CIFAR-10 CLASSIFICATION
Several ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) architectures are used for experiments on the CIFAR-10 dataset. We compare ResNet with 20/32/56 layers to DenseNet with 40/60/100 layers and a growth rate k = 12. To reduce the total number of parameters, bottleneck layers are enabled for DenseNet. Further comparison and analysis of the overfitting issues are provided in Appendix C.2.
Figure 4 summarizes the experiment results on the CIFAR-10 dataset. A weight-decaying setting of 1e−4 is used with softmax normalization, which is the default setting to obtain the best accuracy results without pruning. For comparison purpose, experiments for LLR representation use a weight-decaying setting of 5e−4. Although using softmax yield the best training accuracy, the trained networks are overparameterized compared with those using LLR representation.
Therefore, the trade-off curves can be used to judge the efficiency of trained networks. Curves closer to the bottom-left region on the figure represent more efficient networks. Both ResNet and DenseNet architectures show similar trends in terms of efficiency. The trade-off curves for the same architecture with different initial model sizes seem to be bounded by a single theoretical curve. In the energy efficient region with small number of parameters, the error rate goes down rapidly with a small increase in the number of parameters; while in the high accuracy region, small
increases in accuracy require exponentially increasing numbers of parameters. In this regard, the ResNet-32 and the DenseNet-60 architectures may offer better alternative trade-offs in efficiency.
3.4 CIFAR-100 CLASSIFICATION
Figure 5 summarizes the experiment results using different ResNet and DenseNet architectures on the CIFAR-100 dataset. For all experiments using either softmax normalization or LLR representation, the same weight-decaying settings are used. In terms of efficiency trade-offs, we see a similar trend as before: linear increase in accuracy tends to require exponential increase in network capacity. We notice that the difference between softmax normalization and LLR representation is less prominent for the DenseNet architecture. One possible reason is that the BSP loss function is not yet fully optimized for the DenseNet architecture. Another possible reason is that the effects of weight decaying are more prominent than softmax normalization for the DenseNet architecture with larger initial model sizes.
4 RELATED WORK
Historically, deep neural networks using sigmoid or hyperbolic tangent activation functions were difficult to train using backpropagation (Rumerlhar, 1986) due to the vanishing gradient problem (Glorot & Bengio, 2010). The introduction of ReLU activation function (Nair & Hinton, 2010) greatly improves training speed for deep learning, yielding improved prediction accuracy in many new applications. However, using the ReLU activation function also tends to introduce overfitting issues as shown in this paper.
Regularization using modified loss functions can alleviate overfitting but with limited effects. Data augmentation is another method to reduce overfitting and improve generalization performance. Dropout, i.e., randomly selected units are dropped during training, was introduced in Hinton et al. (2012) and Srivastava et al. (2014) as an effective method to prevent overfitting. This idea was
extended to randomly dropping connections in Wan et al. (2013). Batch normalization is another method to reduce overfitting and improve training speed. Nevertheless, it is not completely clear how in principle these methods work, and they still can not fully eliminate overfitting issues in deep neural networks.
Overfitting is also related to the size of a neural network. Excessively large networks tend to introduce overfitting, and vice versa. It is also desirable to minimize model sizes for processing speed and systematic scaling purposes. A straightforward way for compressing over-parameterized neural networks is to prune trivial weights and retain only important connections, which is similar to the development of mammalian brain (Rauschecker, 1984). Pruning unimportant weights and connections after training is a common way to obtain efficient neural networks. Early work in Hassibi & Stork (1993); Hassibi et al. (1994); LeCun et al. (1989) uses the statistics from backpropagation to trim trained networks.
Recently, Han et al. (2015) proposed an iterative pruning and re-training procedure for efficient model compression. Similarly, pruning filters were proposed for convolutional networks in Li et al. (2016). However, iterative pruning and re-training is generally difficult, requiring extra processing time and resources. Furthermore, the iterative pruning process is opaque and requires try-and-error in selecting pruning thresholds for parameters in different layers. The lottery ticket hypothesis from Frankle & Carbin (2018) tries to explain why the iterative pruning procedure can work, but the empirical results therein are not conclusive enough.
Alternatively, one-shot pruning techniques try to train sparse neural networks directly without iterative operations (Lee et al., 2019; Zhang & Stadie, 2019; Wang et al., 2020). However, Liu et al. (2019) observe that previous state-of-the-art pruning techniques may not provide better performance compared with randomly initialized networks. Their observations could be partially explained using our subnetwork analysis on structural regularization effects.
5 DISCUSSION AND FUTURE WORK
In this paper, we identify several important issues affecting overfitting in training deep neural networks. The key finding is that reducing overfitting is critical for obtaining efficient neural networks. It is demonstrated with several datasets and network architectures that a simple snapshot-based pruning procedure can generate efficient deep neural networks. However, more empirical validation results using other neural network architectures and larger datasets are required to further validate the proposed approach. Quantizing the parameters will further compress neural network models, which is not considered here for brevity but could be a natural extension in future work.
The snapshot-based retrain method can also be useful in real-world applications, where we only need to store pruned weights and connections, while biases and other optimization parameters can be restored using new datasets. This could be a very important optimization in cloud and edge computing applications. For transfer learning, neural networks trained with old datasets may be effectively retrained using new datasets, given that the underlying neural models are similar in nature.
We further analyze the efficiency trade-offs in training deep neural networks. For a given optimization problem with given objective and dataset, we should consider structural information, in additional to weights, as representation cost of trained networks. For the small-scale network architectures used in this study, few extra parameters are needed to specify the network topology and connections. However, for large-scale networks, the parameters for describing the network topology and connections should also be included in the representation cost of models. When we compare neural network performance, domain-specific knowledge for designing network architectures should be considered as additional information. We hypothesize that there exist lower-bounds of total number of bits for representing parameters and connections with regard to given performance metrics for an optimization problem. Rather than focusing on improving the sole accuracy metric with more complex network architectures, we should also explore the trade-offs between accuracy and total number of representation bits when comparing different network architectures and implementations.
Several hypotheses regarding training efficient deep neural networks are put forward and empirically validated with experiments. Although rigorous proofs are not provided, we hope that they will encourage further discussion and research efforts on the trade-offs between model performance and complexity.
6 REPRODUCIBILITY STATEMENT
The authors of this paper regard it critical to ensure all empirical results in this paper can be consistently reproduced. For each experiment case with different parameters and optimization settings, the results are generated with at least 10 runs with different random seed initialization. We also crosscheck our results with different references. Furthermore, for some of the experiments, we have verified the results using several machine learning frameworks including PyTorch, TensorFlow, and Matlab Deep Learning Toolbox. Finally, we will publish the source codes for this work on GitHub and provide bug fixes and updates.
A THE SOFTMAX PROPERTIES
A.1 PROOF OF THE SOFTMAX PROPERTY
Given two linearly correlated vectors Y0 = (y0,0, y0,1, · · · , y0,K−1), Y1 = (y1,0, y1,1, · · · , y1,K−1) of length K such that Y1 = Y0 + β1, where β1 is a scalar, then each component of the softmax normalization vectorQ = softmax(Y0 + Y1) can be calculated as
qn = exp(y0,n + y1,n)∑ k exp(y0,k + y1,k)
= exp(2y0,n + β1)∑ k exp(2y0,k + β1)
= exp(β1) exp(2y0,n) exp(β1) ∑ k exp(2y0,k) = exp(2y0,n)∑ k exp(2y0,k)
If we add a third vectors Y2 = (y2,0, y2,1, · · · , y2,K−1) such that Y2 = Y0 + β2, then each component of the softmax normalization vectorQ = softmax(Y0 + Y1 + Y2) can be calculated as
qn = exp(2y0,n + β1 + y2,n)∑ k exp(2y0,k + β1 + y2,k)
= exp(3y0,n + β1 + β2)∑ k exp(3y0,k + β1 + β2)
= exp(β1 + β2) exp(3y0,n) exp(β1 + β2) ∑ k exp(3y0,k) = exp(3y0,n)∑ k exp(3y0,k)
Using generalization for Z = ∑M−1
0 Ym, where Ym are linear offset versions of each other, such that Y0 = Y1 − β1 = Y2 − β2 = · · · = YM−1 − βM−1, we have
softmax(Z) = softmax(MY0) = softmax(MYm)
A.2 PROOF OF THE GENERALIZED SOFTMAX PROPERTY
Given two vectors Y0 = (y0,0, y0,1, · · · , y0,K−1), Y1 = (y1,0, y1,1, · · · , y1,K−1) of length K such that Y1 = Y0 + β1, where β1 = (β1,0, β1,1, · · · , β1,K−1). Without any loss of generality, we assume that Y0,0 ≤ Y0,1 ≤ · · · ≤ Y0,K−1 = Ymax. Define the maximal variation of β1 as δ1 such that |β1,k − β1,n| ≤ δ1 for any k, n ∈ Z and 0 ≤ k, n ≤ K − 1. If δ1 is insignificant relative to Y0, i.e., that exp(δ1) = o(exp(Ymax)), (5) where o(·) is the little-o notation, we define Y0, Y1 as linearly semi-correlated vectors. Then, each component of the softmax normalization vectorQ = softmax(Y0 + Y1) can be calculated as
qn = exp(y0,n + y1,n)∑ k exp(y0,k + y1,k)
= exp(2y0,n + β1,n)∑ k exp(2y0,k + β1,k)
= exp(β1,n) exp(2y0,n) exp(β1,n) ∑ k exp(2y0,k + β1,k − β1,n) = exp(2y0,n)∑
k exp(2y0,k + β1,k − β1,n) .
Note that the denominator of qn is mainly determined by the largest components of Y0, and thus we have the following approximation
qn ≈ exp(2y0,n)∑
k exp(2y0,k + δ1)
≈ exp(2y0,n)∑ k exp(2y0,k)
If we add a third linearly semi-correlated vectors Y2 = (y2,0, y2,1, · · · , y2,K−1) such that Y2 = Y0 + β2, where β2 = (β2,0, β2,1, · · · , β2,K−1) and |β2,k − β2,n| ≤ δ2 for any k, n ∈ Z and 0 ≤ k, n ≤ K − 1. Then each component of the softmax normalization vectorQ = softmax(Y0 + Y1 + Y2) can be calculated as
qn = exp(2y0,n + β1,n + y2,n)∑ k exp(2y0,k + β1,k + y2,k)
= exp(3y0,n + β1,n + β2,n)∑ k exp(3y0,k + β1,k + β2,k)
= exp(β1,n + β2,n) exp(3y0,n) exp(β1,n + β2,n) ∑ k exp(3y0,k + β1,k − β1,n + β2,k − β2,n) ≈ exp(3y0,n)∑ k exp(3y0,k + δ1 + δ2) ≈ exp(3y0,n)∑ k exp(3y0,k)
Using generalization for Z = ∑M−1
0 Ym, where Ym are linearly semi-correlated of each other and Y0 = Y1 − β1 = Y2 − β2 = · · · = YM−1 − βM−1, we have
softmax(Z) ≈ softmax(MY0) ≈ softmax(MYm)
Note that the above relation holds as long as the variations in βm is insignificant according to (5), while the magnitudes of βm do not matter.
B LOSS FUNCTIONS FOR LLR REPRESENTATION
B.1 BIPOLAR SOFTPLUS LOSS
Given a set of N neural network outputs {Yn} and corresponding targets {Tn}, the bipolar softplus (BSP) loss is defined as
BSP(Y, T ) = 1
βN N−1∑ n=0 log(1 + e−βsgn(Tn)Yn) (6)
where β is a constant and sgn(x) returns the sign of x as
sgn(x) = 1, x > 0 0, x = 0
−1, x < 0 (7)
C EXPERIMENT SETTINGS AND DETAILED RESULTS
C.1 MNIST CLASSIFICATION
The MNIST dataset of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. Each image contains 28× 28 monochrome pixels for one digit. The pixel values are converted to range (0, 1) with dataset normalization.
Two architectures are used in the experiments: 1) the LeNet-300-100 is a three-layer fully connected network with 300 and 100 hidden nodes, 2) the LeNet-5 architecture has two convolutional layers with 20 and 50 filters and two fully connected layers with 800 and 500 hidden nodes.
Data augmentation is used to randomly shift each image horizontally and vertically by 0 or 1 pixel. The batch size for training is set to 128, and the Adam optimizer (Kingma & Ba, 2014) is used with default parameters: α = 0.001, β1 = 0.9, β2 = 0.999,and ε = 10−8. A weight decaying setting of 4e-4 is used for both the LeNet-300-100 and LeNet-5 architectures in corresponding cases. At least 20 runs with random seeds are carried out for each experiment case.
Table 1 summarizes the top-1 error rates, total number of parameters after pruning, and compression rate for different methods with the LetNet-300-100 architecture. The output layer uses a fixed pruning setting of 0.75, and the hidden layers use pruning settings θk = 1.0 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 1 are obtained using the largest pruning threshold.
Compared with LLR representation, results using softmax normalization have higher training errors and test errors. This indicates that LLR representation can mitigate the overfitting issues and improve accuracy in both training and testing. Figure 6 (left) also compares the effects of overfitting between softmax normalization and LLR representation with the LetNet-300-100 architecture. Using both LLR representation and weight decaying can yield more efficient networks than the iterative method from Han et al. (2015).
Table 2 summarizes the top-1 error rates, total number of parameters after pruning, and compression rate for different methods with the LetNet-5 architecture. Fully-connected layers use a fixed pruning setting of 1.25. For convolutional layers, the pruning settings are set as θk = 0.5+0.1×k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 2 are obtained using the largest pruning threshold. Weight sharing and inherent structural regularization of CNN further mitigate the overfitting issues in training. Using LLR representation and weight decaying, the accuracy of the pruned network is even better than the accuracy in training and testing. The snapshot-based method generates efficient networks with 31K parameters and the state-of-the-art
performance, better than the ones using the iterative method from Han et al. (2015). Figure 6 (right) also compares the effects of overfitting between softmax normalization and LLR representation.
For comparison purpose, the results using the AdamW algorithm from Loshchilov & Hutter (2018) are summarized in Table 3 and 4 for the LeNet-300-100 and LetNet-5 architectures, respectively. The accuracy differences between training and testing are always larger than previous results using weight decaying and the original ADAM algorithm. The results show that using the AdamW algorithm may generate overparameterized networks. Thus, the snapshot-based pruning method can be a valuable tool for evaluating optimization algorithms.
C.2 CIFAR-10 CLASSIFICATION
The CIFAR-10 datasets (Krizhevsky et al., 2009) consists of 50,000 images in the training set and 10,000 images in the test set. Each sample contains 32× 32 color images drawn from 10 classes. The data batch size of 128 is used for training. For data augmentation, the standard mirroring and shifting scheme is used. The SGD optimizer is used with initial learning rate of 0.05 and momentum of 0.9, the learning rate is multiplied by 0.1 after 100 and 150 epochs. Weight decaying settings of 6e-4 and 5e-4 are used for the ResNet and DenseNet architectures, respectively. Initial training runs use 200 epochs, while for non-weight parameter adjustment after pruning only 20 epochs are needed. At least 10 runs with random seeds are carried out for each experiment case.
The top-1 error rates, total number of parameters after pruning, and compression ratio for CIFAR-10 dataset with the ResNet architectures are summarized in Table 5. Fully-connected layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5+0.05×k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 5 are
obtained using the largest pruning threshold. For all cases, using LLR representation yields better performance and less parameters after pruning. For the ResNet-56 case, using LLR representation with weight decaying reduces the total number of parameters to about 200K without significant loss of performance.
The results in Table 6 show better performance for DenseNet architectures as compared with the ResNet architectures. Fully-connected layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 6 are obtained using the largest pruning threshold.
For DenseNet-60 with less than 90K parameters, the performance is comparable to ResNet-56 with 200K parameters. Therefore, overfitting issues with the DenseNet architecture are less prominent than the ResNet architecture. Figure 7 summarize the efficiency trade-offs for both ResNet and DenseNet architectures. Compared with ResNet architecture, the initial DenseNet model sizes are larger, the effects of weight decaying are more prominent than the softmax normalization, and the difference between softmax normalization and LLR representation for DenseNet is smaller.
C.3 CIFAR-100 CLASSIFICATION
The CIFAR-100 datasets (Krizhevsky et al., 2009) consists of 50,000 images in the training set and 10,000 images in the test set. Each sample contains 32 × 32 color images drawn from 100 classes. The 100 classes are grouped into 20 superclasses. The data batch size of 128 is used for training.
For data augmentation, the standard mirroring and shifting scheme is used. The SGD optimizer is used with initial learning rate of 0.05 and momentum of 0.9, the learning rate is multiplied by 0.1 after 100 and 150 epochs. A weight decaying setting of 5e-4 is used for both the ResNet and DenseNet architectures. Initial training runs use 200 epochs, while for non-weight parameter adjustment after pruning only 20 epochs are needed. At least 20 runs with random seeds are carried out for each experiment case.
The top-1 error rates for trained and pruned networks, total number of parameters after pruning, and compression ratio for CIFAR-100 dataset are summarized in Table 7 and 8. Fully-connected
layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 7 and 8 are obtained using the median pruning threshold. | 1. What is the focus of the paper, and what are the proposed approaches for reducing the model size of image classification models?
2. What are the strengths of the paper regarding its open-source code and experimental setup?
3. What are the weaknesses of the paper, particularly in terms of experiment setup, graphical similarities, and lack of proper supports for claims?
4. How does the reviewer assess the connection between the techniques in Section 2 and the experiments in the paper?
5. What are the questions raised by the reviewer regarding the novelty of the work and its contributions? | Summary Of The Paper
Review | Summary Of The Paper
The paper examined a few known techniques for reducing the model size of image classification models. The experiments suggest that models with the log likelihood ratio representation in the output layers perform the best comparing to the softmax baselines.
Review
Strength
According the authors, the code is open sourced and the experiment results were averaged over at least 10 different runs.
Weakness
This paper will become much readable if authors provide more details in experiment setups. It's unclear to me how the authors various the model size in figure 2-4. It's better the detailed steps to reproduce is described clearly in the paper.
The two graphs in Figure 3 (for ResNets and DenseNets) are strikingly similar to each other. For example, DenseNet100-bc should perform much better than ResNet56 (4.5% top-1 error vs 6.6%) in cifar10 top-1 error rate from the literature. But error rates shown in Figure 2 from these two models are both around 7%.
The experiment section is only loosely connected to the rest of the paper. It's unclear how techniques in Section 2 are used to generate models in the experiment section. And the discussion and conclusion section is a bit vague and not quite based on the experimental results. For example, authors' claimed key contribution is that reducing overfitting is critical for obtaining efficient neural networks. However, it's well known that reducing overfitting is critical to every aspect of machine learning. Moreover, to study overfitting, one has at least shows the correlations between training set metrics and evaluation set metrics, which is not present in the current experiments.
Additionally, the authors claimed the snapshot-based pruning is preferred over the iterative pruning and retraining. But such claims are made without proper experiment supports. There is no iterative pruning baseline shown in this paper.
There are a lot of known facts and techniques that are listed in section 1 and 2. It's unclear to me what are the novel contributions from this work. |
ICLR | Title
On the Efficiency of Deep Neural Networks
Abstract
The efficiency of neural networks is essential in large-scale deployment scenarios such as mobile applications, internet of things, and edge computing. For a given performance requirement, an efficient neural network should use the simplest network architecture with a minimal number of parameters and connections. In this paper, we introduce a framework to analyze and obtain efficient neural networks. In summary, our main contributions are three-fold. Our first contribution is the subnetwork hypothesis to address overfitting issues and help explain the effectiveness of several key techniques in training efficient networks: 1) softmax normalization in output layers may be one major cause of overparameterization; 2) using log likelihood ratio representation in output layers can reduce overfitting; 3) weight decaying and structural regularization can also effectively reduce overfitting. The second contribution is a simple and effective snapshot-based procedure to prune a well-trained network that minimizes overfitting – pruning unimportant weights and connections first, and simply adjust remaining non-weight parameters using the backpropagation algorithm. Besides, the snapshot-based pruning method can also be used to evaluate the efficiency of trained networks. Finally, we hypothesize that there exist lower bounds of the total number of bits for representing parameters and connections regarding performance metrics for a given optimization problem. Rather than focusing on improving the sole accuracy metric with more complex network architectures, it is also important to explore the trade-offs between accuracy and the total number of representation bits, when comparing different network architectures and implementations.
1 INTRODUCTION
Deep learning has achieved tremendous success in large-scale machine learning systems, such as big-data analytics (Najafabadi et al., 2015), billion-parameter generative models for natural language processing (Brown et al., 2020; Radford et al., 2019), and computer vision for self-driving cars (Grigorescu et al., 2020). A general trend for recent success is the use of neural networks of ever-increasing model sizes and their exponentially increasing computation power requirement. Training these gigantic neural network models require tens of thousands parallel computing units inside dedicated computer clusters with extremely high transient storage capacity and data synchronization bandwidth. Consequently, some of the state-of-the-art models are only accessible to very few researchers in the machine learning community.
On the other hand, large-scale deployment of machine learning applications in low-power scenarios, such as mobile applications, internet-of-things (IoT), and edge computing, has put more stringent requirements on the efficiency of neural network models. For a given problem and performance metrics, efficient neural network models should have a minimal number of weights and connections, simple network topology and architecture suitable for low-power computing devices, and low data bandwidth and transient storage requirements. It is important to investigate the model efficiency problem to bridge the performance gap between petascale high-end models and low-power neural architectures for large-scale deployment. However, methods and principles for obtaining efficient deep neural networks have not yet been thoroughly studied.
In this paper, we introduce a framework to analyze and obtain efficient deep neural networks. Especially, we identify several key issues in training efficient deep neural networks and propose a new model compression procedure to prune redundant weights and connections. One important in-
sight of our study is the high correlation between overfitting and model efficiency. Overfitting may improve training accuracy, but it can cause overparameterization. In Section 2, we show that softmax output layers can introduce non-deterministic effects to the backpropagation algorithm, yielding redundant subnetworks with exploding numbers of parameters. To solve this problem, we propose the log likelihood ratio (LLR) representation for output layers. We also investigate potential mechanisms for weight decaying and structural regularization to reduce overfitting. Furthermore, we propose a simple and effective snapshot-based pruning procedure to obtain efficient deep neural networks. We empirically validate this novel approach in Section 3 using various deep learning architectures including LeNet, ResNet, and DenseNet, on the MNIST, CIFAR-10, and CIFAR-100 datasets. Based on the empirical results, we further discuss the model efficiency regarding information cost of model representation. Section 4 reviews prior work in regularization, overfitting, and model compression, followed by Section 5 that concludes the paper.
2 EFFICIENT NEURAL NETWORKS
A fundamental assumption of our analysis is that a complex neural network can be decomposed into subnetworks that are responsible for different operation modes. In other words, the complex nonlinear function of a neural network can be decomposed into groups of sub-functions. Each group of sub-functions represents one mode of operation. In this way, the efficiency of a neural network highly depends on the composition and correlation between these groups of sub-functions. Thus, overfitting may be viewed as forming redundant subnetworks that reduce the efficiency of trained networks.
In this section, we shows that a critical step for obtaining efficient neural networks is to eliminate redundant subnetworks by minimizing overfitting. We first analyze the overfitting issues caused by redundant subnetworks and describe potential mitigating mechanisms. Several hypotheses presented in this section will also be empirically validated using experiments in Section 3. Finally, we introduce a novel snapshot-based procedure to obtain efficient deep neural networks by pruning their unimportant weights and connections. This procedure is also used to analyze the efficiency of trained networks.
2.1 SOFTMAX NORMALIZATION
The softmax function, a.k.a. softargmax, is a normalization function often used as the last activation function of a neural network (Bishop, 2006). Let Z = {z0, z1, · · · , zi, · · · } represents the input vector, the softmax output vectorQ = {q0, q1, · · · , qi, · · · } is defined as
qi = exp(zi)∑ k exp(zk)
(1)
where qi ∈ [0, 1] and ∑ i qi = 1. Thus, the normalized output vector can be interpreted as marginal probabilities. The softmax output can be naturally combined with the cross entropy function J = − ∑ i pi log qi, where pi is the target probability. The derivative of J with respect to zi takes a simple form of qi − pi (Goodfellow et al., 2016). The simple probabilistic interpretation and derivative computation make the combination of softmax normalization and cross entropy loss a pervasive choice for multinomial classification problems. However, potential issues using softmax normalization with the backpropagation (BP) algorithm has not been fully investigated.
Suppose a neural network G can be decomposed into two or more smaller subnetworks G = {G0,G1, · · · ,Gm, · · · } with the same feature input X. The final activation Z is the superposition of the subnetwork activation before the softmax normalization in the output layer
Z = M∑ m=0 Ym = M∑ m=0 fm(X) (2)
where fm is the non-linear function representing subnetwork Gm. The decomposition is done according to the final activation without considering intermediate hidden layers. The softmax normalization operation has the following properties regarding the relationship between subnetwork activations (see Appendix A).
1. If the subnetwork activations are linear offset versions of each other, such that Y0 = Y1 − β1 · · · = Ym − βm · · · , the normalization result of the whole network is equivalent to applying the softmax function to the activation of any subnetwork scaled by M : Q = softmax(MYm). Note that the offset between subnetwork activation Ym has no impact on the softmax output. If the activations Ym are linearly semi-correlated, the generalized softmax property is applicable, i.e., that Q ≈ softmax(MYm).
2. If the subnetwork activations are scaled versions of each other, such that Y0 = α1Y1 · · · = αkYk · · · and 1 ≥ α1 ≥ α2 ≥ · · · ≥ αk · · · , the normalization operation is equivalent to applying the softmax function to the scaled principal subnetwork: Q = softmax(SY0), where S = 1+ α1 + α2 + · · · . The softmax normalization allows proportional integration of information. A single subnetwork that has very strong activation (higher prediction probabilities) can dominate over other subnetworks with weak activations. If there are no dominant subnetworks, the total number of contributing subnetworks may be large and the whole network tends to be overparameterized.
In short, the softmax function can act as a super combinator for different modes of the neural network, summing and amplifying weak subnetwork activations. This could partially explain why deep neural networks are so expressive that they are suitable for diverse types of problems. However, when there are redundant subnetworks that produce linearly correlated activations, the softmax normalization function make them indistinguishable from each other. The linearly correlated subnetworks potentially lead to overfitting and overparameterization. We have the following hypothesis regarding the effects of such redundant subnetworks: Hypothesis 1: For deep neural networks, the existence of redundant subnetworks combining with softmax normalization can lead to overfitting and overparameterization when training with the backpropagation algorithm.
The derivative of the cross entropy loss is linear with regard to the softmax output Q and target P , and softmax normalization makes it impossible to differentiate between the effects of different subnetworks, the BP algorithm thus will fine-tune all the parameters without penalizing any individual subnetwork. Therefore, the initialization of weights may create redundant subnetworks that have non-deterministic effects on the training process. For example, Mishkin & Matas (2016) demonstrated that initialization of weights can affect test accuracy. Such behaviors and the existence of redundant subnetworks will be validated from empirical results in Section 3.
2.2 LLR REPRESENTATION
The softmax normalization is mainly used to convert neural network outputs to probabilities. However, the softmax normalization allows linearly correlated subnetwork activations and potentially introduces overfitting. Therefore, it is desirable to avoid softmax normalization in output layers. It turns out that using the log likelihood ratio (LLR) representation in output layers can avoid normalization and overfitting issues. Given a binary random variable X and P1(X) = {probability X is true}, the LLR for X can be defined as
\mathrm{LLR}(X) = \log \frac{P_1(X)}{1 - P_1(X)}    (3)
Since neural networks can model arbitrary non-linear functions, we can adopt the LLR representation for each component of the outputs Y and the target labels T. For both multi-class and multi-label classification, the problem can then be regarded as multiple binary regression problems adopting the LLR representation for each class. Therefore, output normalization across different classes is not needed, but loss functions need to be changed accordingly – we introduce the bipolar softplus (BSP) loss function as defined in Appendix B.1. We demonstrate that the LLR representation combined with the BSP loss function does not need normalization and avoids the introduction of redundant subnetworks. This particular loss function is not mandatory; better alternatives may exist, and the optimization of loss functions will be addressed in future work. In this paper, we use empirical results to demonstrate the effectiveness and behavior of this novel scheme. We introduce the following hypothesis regarding the LLR representation: Hypothesis 2: For classification problems with deep neural networks, using the LLR representation for the output layer and the target labels can reduce overfitting and avoid overparameterization compared with softmax normalization.
It is worth emphasizing that the LLR representation has a clear physical meaning, which could improve the explainability of neural networks. LLR values are symmetrical and centered around zero, which can be regarded as a natural normalization point. Note that LLR values have range (−∞,+∞), and a larger magnitude means higher confidence in the prediction. Thus, by controlling the LLR magnitude of the target labels, we can introduce regularization to the network outputs.
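For illustration, per-class LLR targets can be constructed directly from class labels, and predicted LLRs can be mapped back to probabilities with a sigmoid. The sketch below is ours; in particular, the target magnitude of 4.0 is an arbitrary value chosen only to show how the magnitude acts as an output regularizer.

import numpy as np

def prob_to_llr(p1, eps=1e-12):
    p1 = np.clip(p1, eps, 1 - eps)
    return np.log(p1 / (1 - p1))          # Eq. (3)

def llr_to_prob(llr):
    return 1.0 / (1.0 + np.exp(-llr))     # inverse mapping (sigmoid)

def llr_targets(labels, num_classes, magnitude=4.0):
    # +magnitude for the true class, -magnitude for all others;
    # the magnitude bounds the confidence the network is trained towards.
    t = -magnitude * np.ones((len(labels), num_classes))
    t[np.arange(len(labels)), labels] = magnitude
    return t

print(llr_targets(np.array([2]), num_classes=4))    # [[-4. -4.  4. -4.]]
print(llr_to_prob(prob_to_llr(0.9)))                 # ~0.9 (round trip)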
2.3 WEIGHT DECAYING
In the previous discussion, potential issues in normalization and representation are analyzed by decomposing the activation in the output layer. In a similar fashion, we can also decompose the weights and activations in each hidden layer as follows.
Suppose the feature inputs of a layer can be represented as X = \sum_{m=0}^{M-1} X_m, and its weight matrix W can be decomposed as W = \sum_{n=0}^{N-1} W_n, where the W_n are non-zero weight components. The activation Z can then be decomposed as

Z = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} A_{m,n} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} (X_m W_n + B)    (4)
where B is the bias vector. When the rectified linear unit (ReLU) non-linear function is adopted, only those activations larger than zero in Eq. 4 are effective. For a given feature input Xk, if there are multiple Ak,n components whose elements are all positive, the corresponding weight components Wn can be effectively combined to reduce the total number of parameters. The redundant weight components may be different for different input features. The existence of such redundant weight components may become a source of overfitting and overparameterization. The larger of these redundant Ak,n components also tend to operate in the linear regime of the ReLU function, which effectively reduces the non-linear behavior of the network. To reduce overfitting and redundancy, the weights should have relatively small magnitudes so that they work in the non-linear regime of the ReLU function; hence we have the following hypothesis regarding weight decaying (L2 regularization). Hypothesis 3: Limiting the magnitude of weights using weight decaying can reduce overfitting and overparameterization in deep neural networks when the ReLU activation function is used.
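This combinability in the all-positive (linear) regime of the ReLU can be checked numerically. The construction below is purely illustrative: the random matrices are chosen non-negative so that both pre-activations are guaranteed to be positive.

import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

rng = np.random.default_rng(0)
X  = rng.uniform(0.1, 1.0, size=(4, 8))    # positive feature input
W1 = rng.uniform(0.0, 0.5, size=(8, 6))    # two weight components whose
W2 = rng.uniform(0.0, 0.5, size=(8, 6))    # activations are all positive
B  = 0.1 * np.ones(6)

assert (X @ W1 + B > 0).all() and (X @ W2 + B > 0).all()

# Both components operate in the linear regime, so they can be merged
# into a single component (with the biases combined accordingly).
lhs = relu(X @ W1 + B) + relu(X @ W2 + B)
rhs = relu(X @ (W1 + W2) + 2 * B)
print(np.allclose(lhs, rhs))                # True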
This hypothesis could explain why regularizing weights is an effective technique to improve training performance. Weight decaying should also be separated from loss regularization, which was first discussed in Loshchilov & Hutter (2018). In our experiments, however, their AdamW algorithm turns out to improve the training accuracy by increasing overfitting and overparameterization as shown in Section 3.2.
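For reference, a minimal PyTorch sketch of the two settings contrasted here — classic L2 weight decaying coupled through the optimizer's weight_decay argument versus the decoupled decay of AdamW. The network and hyper-parameter values are placeholders in the spirit of the MNIST experiments, not a prescribed configuration.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(),
                      nn.Linear(784, 300), nn.ReLU(),
                      nn.Linear(300, 100), nn.ReLU(),
                      nn.Linear(100, 10))

# Adam with classic L2 weight decaying (coupled with the gradient)
opt_adam = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=4e-4)

# AdamW with decoupled weight decay (Loshchilov & Hutter, 2018)
opt_adamw = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=4e-4)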
2.4 STRUCTURAL REGULARIZATION
The underlying assumption in the analysis of weight decaying is that the outputs of fully-connected subnetworks can be freely superimposed on each other. Thus, if the combination of subnetworks is restricted, overfitting issues could be mitigated. A common technique for this purpose is structural restriction of the subnetworks. Some examples are listed below.
1. Structural pruning - Various techniques to selectively remove connections from the whole network in training have been proposed and shown to reduce overfitting, such as Wan et al. (2013). In a sense, stochastic gradient descent (SGD) can also be regarded as adopting random structural pruning.
2. Weight sharing - By sharing weights and forcing regular network structures, neural networks become more effective and easier to train. Convolutional neural networks (CNNs) are a prominent example, often used as feature extraction layers.
3. Micro-architectural design - By adopting certain topology patterns between or within neural network layers, the resulting networks are confined to subsets of fully-connected networks, hence their overfitting issues are mitigated. Skip connections, for example, have been shown to improve training speed and performance (He et al., 2016; Huang et al., 2017); a minimal sketch follows this list.
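As an example of the third pattern, a minimal residual block with a skip connection is sketched below (illustrative only; it is not the exact block used in the ResNet experiments).

import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)              # skip connection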
Many existing optimization techniques for training neural networks could be partially explained and further analyzed using the subnetwork analysis. The underlying principle is that by reducing
the initial functional space, the optimization problem becomes less difficult and easier to converge, which explains why micro-architecture design can have significant impact on the performance of neural networks. In Section 3, the effects of structural regularization are partially demonstrated by comparing the efficiency of different network architectures on the same dataset.
2.5 SNAPSHOT-BASED PRUNING
It is well known that neural networks can be made more efficient in terms of computation and storage requirements by pruning some of the unimportant weights. For deep neural networks, the iterative pruning and retraining procedure in Han et al. (2015) has been used for generating efficient neural networks for low-power applications. However, the iterative procedure requires extra computing power and processing time. Furthermore, the iterative procedure often requires manually finetuning pruning thresholds. We discuss two important aspects of pruning neural networks in the following.
1. Important weights - Deciding which weights are important is the first key issue. In general, weights with smaller magnitudes are considered unimportant and can be pruned, but this may not always be the case for different types of components in various network architectures. For example, shared weights in convolutional layers may be more important than weights in fully connected layers. Even the importance of weights in the same layer may not be correlated with their magnitudes.
2. Retrain requirement - After pruning its weights and connections, a pruned neural network usually needs to be adjusted. It is not clear which aspects of the network need to be modified. For the iterative pruning and retrain process, the weights and biases between initial and final iterations may be completely different so that it is hard to analyze the iterative retraining mechanism.
By analyzing experimental data from extensive empirical studies with different datasets and various architectures, we have the following observation regarding these two key aspects for pruning neural networks.
1. Weight distribution - If the neural network is well trained such that overfitting is minimized, the weight magnitude distribution correlates better with the weights’ importance. In other words, weights with smaller magnitudes around zero can be effectively pruned. Different layers and types of components may need different pruning thresholds but they can be easily adjusted using macro network attributes.
2. Essential network - The important weights together with their corresponding connections define the essence of a trained network; therefore, they should be kept unchanged as a snapshot. Primarily, only the biases need to be adjusted; other non-weight parameters, such as batch normalization parameters, may need to be adjusted as well.
In short, iterative pruning and retraining may not be necessary when a neural network is well-trained. The first step in obtaining efficient neural networks is to adopt efficient training techniques, such as LLR representation, weight decaying, and structural regularization. After pruning unimportant weights and connections from trained snapshots, the next step is to simply adjust the remaining non-weight parameters using the BP algorithm. For most architectures, because most of the parameters are connection weights, adjusting the non-weight parameters requires far fewer iterations than the initial training process – usually only a few epochs are enough.
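A hedged PyTorch sketch of this second step (the helper name is ours): after pruning, the weight tensors of convolutional and linear layers are frozen so that the BP algorithm only updates biases and batch-normalization parameters.

import torch.nn as nn

def freeze_weight_snapshot(model: nn.Module):
    # Keep the pruned weight snapshot fixed; biases and batch-normalization
    # parameters (including BN affine weights) remain trainable.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            module.weight.requires_grad = False

# Usage after pruning (only a few epochs are typically needed):
#   freeze_weight_snapshot(model)
#   optimizer = torch.optim.SGD((p for p in model.parameters()
#                                if p.requires_grad), lr=0.01)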
Conversely, the efficiency and quality of a trained network can be evaluated with the effectiveness of parameter pruning. Refinement of optimization algorithms may also be further examined using this pruning procedure. If a network is overparameterized, the performance of its pruned versions deteriorates dramatically as the total number of parameters is reduced. The key discovery here is the high correlation between overfitting and the efficiency of neural networks.
If trained neural networks are not efficient enough initially, combining iterative techniques with the proposed snapshot-based pruning method could be beneficial. For very large networks, it should be noted that using all the methods analyzed in this section may not be enough to yield efficient deep neural networks using a single-shot training procedure.
3 EXPERIMENTS
We empirically analyze the model efficiency trade-offs in deep neural networks as well as the overfitting issues in training neural networks to validate the subnetwork assumption and the analysis of various mitigating methods in previous section. We also demonstrate the effectiveness of the proposed snapshot-based pruning procedure in obtaining and evaluating efficient neural networks.
3.1 METHODOLOGY
The trade-offs between accuracy and total number of parameters are analyzed with various architectures and datasets using the following procedure: 1) the neural network is first trained using different hyper-parameter settings and output representation; 2) the trained networks are pruned using different threshold settings, and non-weight parameters are retrained using the BP algorithm; 3) test accuracy and total number of parameters of the pruned networks are averaged over at least 10 different experiment runs using the same pruning settings.
The pruning thresholds are set according to the standard deviation of weights’ magnitudes. Two different thresholds are used for convolutional layers and linear layers, respectively. For example, the threshold value for convolutional layers is calculated by multiplying the pruning setting with the standard deviation of weights in all convolutional layers. Weights with magnitudes lower than the threshold value are pruned. In all cases, simple linearly-spaced pruning settings are used without further fine-tuning. However, optimization of the pruning settings is possible by taking into account the structural attributes of given network architectures.
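A sketch of this threshold rule (the helper name is ours; whether the standard deviation is taken over the raw weights or their magnitudes is an implementation choice that matters little for roughly zero-mean weights):

import torch
import torch.nn as nn

@torch.no_grad()
def prune_by_setting(model: nn.Module, layer_type, setting: float):
    layers = [m for m in model.modules() if isinstance(m, layer_type)]
    weights = torch.cat([m.weight.flatten() for m in layers])
    threshold = setting * weights.std()
    for m in layers:
        mask = (m.weight.abs() >= threshold).float()
        m.weight.mul_(mask)                  # zero out weights below threshold
    return threshold

# e.g. prune_by_setting(model, nn.Conv2d, 0.5)
#      prune_by_setting(model, nn.Linear, 0.75)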
Other hyper-parameter settings and detailed analysis of the results are included in Appendix C. In the following, we focus on the efficiency trade-offs using different architectures on several datasets.
3.2 MNIST CLASSIFICATION
We first conduct experiments on the MNIST dataset (LeCun et al., 1998) using the LeNet-300-100 and LeNet-5 architectures (LeCun et al., 2015). In Figure 1 (left), the distribution of weights when training with LLR representation is compared with the case of softmax normalization. Using LLR representation yields a better distribution of weights – the probability of small weights around zero is higher and these weights can be pruned with less impact on the performance. Furthermore, weight decaying can push the weights aggressively towards zero as shown in Figure 1 (right). While softmax normalization primarily affects output layers, the ReLU function may cause overfitting in all layers. Therefore, the effect of weight decaying is more prominent and effective as shown in Figure 1. This observation is consistent with the analysis and hypotheses from Section 2.
Figure 2 shows the trade-off curves for test errors vs. total number of effective parameters for all experiment results on the MNIST dataset. Each curve represents 20 trained networks with the same training settings; each point represents the average total number of weights and average top-1 errors of the pruned networks for each of the 10 different pruning settings. We can see that using LLR representation instead of softmax normalization can reduce the total number
of parameters for the same accuracy requirement. Using weight decaying also significantly improves the efficiency of the trained networks. Using both methods yields the most efficient neural networks, with better performance than the ones using the iterative pruning approach from Han et al. (2015), as shown in Tables 1-2 in Appendix C.1. Compared with fully-connected networks, convolutional neural networks show better performance, partially due to inherent structural regularization.
We found that the AdamW optimizer with weight decaying may increase training accuracy by increasing overfitting, yielding less efficient networks, as demonstrated in Figure 3. Compared with the previous results, the optimal pruned model sizes are dramatically increased, and using weight decaying does not improve the efficiency of the trained networks.
3.3 CIFAR-10 CLASSIFICATION
Several ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) architectures are used for experiments on the CIFAR-10 dataset. We compare ResNet with 20/32/56 layers to DenseNet with 40/60/100 layers and a growth rate k = 12. To reduce the total number of parameters, bottleneck layers are enabled for DenseNet. Further comparison and analysis of the overfitting issues are provided in Appendix C.2.
Figure 4 summarizes the experiment results on the CIFAR-10 dataset. A weight-decaying setting of 1e−4 is used with softmax normalization, which is the default setting to obtain the best accuracy results without pruning. For comparison purposes, experiments for LLR representation use a weight-decaying setting of 5e−4. Although using softmax yields the best training accuracy, the trained networks are overparameterized compared with those using LLR representation.
Therefore, the trade-off curves can be used to judge the efficiency of trained networks. Curves closer to the bottom-left region of the figure represent more efficient networks. Both ResNet and DenseNet architectures show similar trends in terms of efficiency. The trade-off curves for the same architecture with different initial model sizes seem to be bounded by a single theoretical curve. In the energy-efficient region with a small number of parameters, the error rate goes down rapidly with a small increase in the number of parameters; while in the high-accuracy region, small
increases in accuracy require exponentially increasing numbers of parameters. In this regard, the ResNet-32 and the DenseNet-60 architectures may offer better alternative trade-offs in efficiency.
3.4 CIFAR-100 CLASSIFICATION
Figure 5 summarizes the experiment results using different ResNet and DenseNet architectures on the CIFAR-100 dataset. For all experiments using either softmax normalization or LLR representation, the same weight-decaying settings are used. In terms of efficiency trade-offs, we see a similar trend as before: linear increase in accuracy tends to require exponential increase in network capacity. We notice that the difference between softmax normalization and LLR representation is less prominent for the DenseNet architecture. One possible reason is that the BSP loss function is not yet fully optimized for the DenseNet architecture. Another possible reason is that the effects of weight decaying are more prominent than softmax normalization for the DenseNet architecture with larger initial model sizes.
4 RELATED WORK
Historically, deep neural networks using sigmoid or hyperbolic tangent activation functions were difficult to train using backpropagation (Rumelhart et al., 1986) due to the vanishing gradient problem (Glorot & Bengio, 2010). The introduction of the ReLU activation function (Nair & Hinton, 2010) greatly improves training speed for deep learning, yielding improved prediction accuracy in many new applications. However, using the ReLU activation function also tends to introduce overfitting issues, as shown in this paper.
Regularization using modified loss functions can alleviate overfitting, but only to a limited extent. Data augmentation is another method to reduce overfitting and improve generalization performance. Dropout, in which randomly selected units are dropped during training, was introduced in Hinton et al. (2012) and Srivastava et al. (2014) as an effective method to prevent overfitting. This idea was
extended to randomly dropping connections in Wan et al. (2013). Batch normalization is another method to reduce overfitting and improve training speed. Nevertheless, it is not completely clear how these methods work in principle, and they still cannot fully eliminate overfitting issues in deep neural networks.
Overfitting is also related to the size of a neural network. Excessively large networks tend to introduce overfitting, and vice versa. It is also desirable to minimize model sizes for processing speed and systematic scaling purposes. A straightforward way for compressing over-parameterized neural networks is to prune trivial weights and retain only important connections, which is similar to the development of mammalian brain (Rauschecker, 1984). Pruning unimportant weights and connections after training is a common way to obtain efficient neural networks. Early work in Hassibi & Stork (1993); Hassibi et al. (1994); LeCun et al. (1989) uses the statistics from backpropagation to trim trained networks.
Recently, Han et al. (2015) proposed an iterative pruning and re-training procedure for efficient model compression. Similarly, pruning filters were proposed for convolutional networks in Li et al. (2016). However, iterative pruning and re-training is generally difficult, requiring extra processing time and resources. Furthermore, the iterative pruning process is opaque and requires trial-and-error in selecting pruning thresholds for parameters in different layers. The lottery ticket hypothesis from Frankle & Carbin (2018) tries to explain why the iterative pruning procedure can work, but the empirical results therein are not conclusive enough.
Alternatively, one-shot pruning techniques try to train sparse neural networks directly without iterative operations (Lee et al., 2019; Zhang & Stadie, 2019; Wang et al., 2020). However, Liu et al. (2019) observe that previous state-of-the-art pruning techniques may not provide better performance compared with randomly initialized networks. Their observations could be partially explained using our subnetwork analysis on structural regularization effects.
5 DISCUSSION AND FUTURE WORK
In this paper, we identify several important issues affecting overfitting in training deep neural networks. The key finding is that reducing overfitting is critical for obtaining efficient neural networks. It is demonstrated with several datasets and network architectures that a simple snapshot-based pruning procedure can generate efficient deep neural networks. However, more empirical validation results using other neural network architectures and larger datasets are required to further validate the proposed approach. Quantizing the parameters will further compress neural network models, which is not considered here for brevity but could be a natural extension in future work.
The snapshot-based retrain method can also be useful in real-world applications, where we only need to store pruned weights and connections, while biases and other optimization parameters can be restored using new datasets. This could be a very important optimization in cloud and edge computing applications. For transfer learning, neural networks trained with old datasets may be effectively retrained using new datasets, given that the underlying neural models are similar in nature.
We further analyze the efficiency trade-offs in training deep neural networks. For a given optimization problem with a given objective and dataset, we should consider structural information, in addition to weights, as part of the representation cost of trained networks. For the small-scale network architectures used in this study, few extra parameters are needed to specify the network topology and connections. However, for large-scale networks, the parameters for describing the network topology and connections should also be included in the representation cost of models. When we compare neural network performance, domain-specific knowledge for designing network architectures should be considered as additional information. We hypothesize that there exist lower bounds on the total number of bits for representing parameters and connections with regard to given performance metrics for an optimization problem. Rather than focusing on improving the sole accuracy metric with more complex network architectures, we should also explore the trade-offs between accuracy and total number of representation bits when comparing different network architectures and implementations.
Several hypotheses regarding training efficient deep neural networks are put forward and empirically validated with experiments. Although rigorous proofs are not provided, we hope that they will encourage further discussion and research efforts on the trade-offs between model performance and complexity.
6 REPRODUCIBILITY STATEMENT
The authors of this paper regard it critical to ensure all empirical results in this paper can be consistently reproduced. For each experiment case with different parameters and optimization settings, the results are generated with at least 10 runs with different random seed initialization. We also crosscheck our results with different references. Furthermore, for some of the experiments, we have verified the results using several machine learning frameworks including PyTorch, TensorFlow, and Matlab Deep Learning Toolbox. Finally, we will publish the source codes for this work on GitHub and provide bug fixes and updates.
A THE SOFTMAX PROPERTIES
A.1 PROOF OF THE SOFTMAX PROPERTY
Given two linearly correlated vectors Y0 = (y0,0, y0,1, · · · , y0,K−1) and Y1 = (y1,0, y1,1, · · · , y1,K−1) of length K such that Y1 = Y0 + β1, where β1 is a scalar, each component of the softmax normalization vector Q = softmax(Y0 + Y1) can be calculated as

q_n = \frac{\exp(y_{0,n} + y_{1,n})}{\sum_k \exp(y_{0,k} + y_{1,k})}
    = \frac{\exp(2y_{0,n} + \beta_1)}{\sum_k \exp(2y_{0,k} + \beta_1)}
    = \frac{\exp(\beta_1)\exp(2y_{0,n})}{\exp(\beta_1)\sum_k \exp(2y_{0,k})}
    = \frac{\exp(2y_{0,n})}{\sum_k \exp(2y_{0,k})}
If we add a third vector Y2 = (y2,0, y2,1, · · · , y2,K−1) such that Y2 = Y0 + β2, then each component of the softmax normalization vector Q = softmax(Y0 + Y1 + Y2) can be calculated as

q_n = \frac{\exp(2y_{0,n} + \beta_1 + y_{2,n})}{\sum_k \exp(2y_{0,k} + \beta_1 + y_{2,k})}
    = \frac{\exp(3y_{0,n} + \beta_1 + \beta_2)}{\sum_k \exp(3y_{0,k} + \beta_1 + \beta_2)}
    = \frac{\exp(\beta_1 + \beta_2)\exp(3y_{0,n})}{\exp(\beta_1 + \beta_2)\sum_k \exp(3y_{0,k})}
    = \frac{\exp(3y_{0,n})}{\sum_k \exp(3y_{0,k})}
Generalizing to Z = \sum_{m=0}^{M-1} Y_m, where the Y_m are linear offset versions of each other such that Y_0 = Y_1 − β_1 = Y_2 − β_2 = · · · = Y_{M−1} − β_{M−1}, we have

softmax(Z) = softmax(MY_0) = softmax(MY_m)
A.2 PROOF OF THE GENERALIZED SOFTMAX PROPERTY
Given two vectors Y0 = (y0,0, y0,1, · · · , y0,K−1) and Y1 = (y1,0, y1,1, · · · , y1,K−1) of length K such that Y1 = Y0 + β1, where β1 = (β1,0, β1,1, · · · , β1,K−1), assume without loss of generality that y0,0 ≤ y0,1 ≤ · · · ≤ y0,K−1 = Ymax. Define the maximal variation of β1 as δ1 such that |β1,k − β1,n| ≤ δ1 for any k, n ∈ Z with 0 ≤ k, n ≤ K − 1. If δ1 is insignificant relative to Y0, i.e.,

\exp(\delta_1) = o(\exp(Y_{max})),    (5)

where o(·) is the little-o notation, we define Y0 and Y1 as linearly semi-correlated vectors. Then, each component of the softmax normalization vector Q = softmax(Y0 + Y1) can be calculated as
q_n = \frac{\exp(y_{0,n} + y_{1,n})}{\sum_k \exp(y_{0,k} + y_{1,k})}
    = \frac{\exp(2y_{0,n} + \beta_{1,n})}{\sum_k \exp(2y_{0,k} + \beta_{1,k})}
    = \frac{\exp(\beta_{1,n})\exp(2y_{0,n})}{\exp(\beta_{1,n})\sum_k \exp(2y_{0,k} + \beta_{1,k} - \beta_{1,n})}
    = \frac{\exp(2y_{0,n})}{\sum_k \exp(2y_{0,k} + \beta_{1,k} - \beta_{1,n})}.
Note that the denominator of qn is mainly determined by the largest components of Y0, and thus we have the following approximation
q_n \approx \frac{\exp(2y_{0,n})}{\sum_k \exp(2y_{0,k} + \delta_1)}
    \approx \frac{\exp(2y_{0,n})}{\sum_k \exp(2y_{0,k})}
If we add a third linearly semi-correlated vector Y2 = (y2,0, y2,1, · · · , y2,K−1) such that Y2 = Y0 + β2, where β2 = (β2,0, β2,1, · · · , β2,K−1) and |β2,k − β2,n| ≤ δ2 for any k, n ∈ Z with 0 ≤ k, n ≤ K − 1, then each component of the softmax normalization vector Q = softmax(Y0 + Y1 + Y2) can be calculated as

q_n = \frac{\exp(2y_{0,n} + \beta_{1,n} + y_{2,n})}{\sum_k \exp(2y_{0,k} + \beta_{1,k} + y_{2,k})}
    = \frac{\exp(3y_{0,n} + \beta_{1,n} + \beta_{2,n})}{\sum_k \exp(3y_{0,k} + \beta_{1,k} + \beta_{2,k})}
    = \frac{\exp(\beta_{1,n} + \beta_{2,n})\exp(3y_{0,n})}{\exp(\beta_{1,n} + \beta_{2,n})\sum_k \exp(3y_{0,k} + \beta_{1,k} - \beta_{1,n} + \beta_{2,k} - \beta_{2,n})}
    \approx \frac{\exp(3y_{0,n})}{\sum_k \exp(3y_{0,k} + \delta_1 + \delta_2)}
    \approx \frac{\exp(3y_{0,n})}{\sum_k \exp(3y_{0,k})}
Generalizing to Z = \sum_{m=0}^{M-1} Y_m, where the Y_m are linearly semi-correlated with each other and Y_0 = Y_1 − β_1 = Y_2 − β_2 = · · · = Y_{M−1} − β_{M−1}, we have

softmax(Z) \approx softmax(MY_0) \approx softmax(MY_m)

Note that the above relation holds as long as the variations in β_m are insignificant according to (5), while the magnitudes of β_m do not matter.
B LOSS FUNCTIONS FOR LLR REPRESENTATION
B.1 BIPOLAR SOFTPLUS LOSS
Given a set of N neural network outputs {Yn} and corresponding targets {Tn}, the bipolar softplus (BSP) loss is defined as
\mathrm{BSP}(Y, T) = \frac{1}{\beta N} \sum_{n=0}^{N-1} \log\left(1 + e^{-\beta\,\mathrm{sgn}(T_n) Y_n}\right)    (6)
where β is a constant and sgn(x) returns the sign of x as
\mathrm{sgn}(x) = \begin{cases} 1, & x > 0 \\ 0, & x = 0 \\ -1, & x < 0 \end{cases}    (7)
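A direct sketch of Eq. (6)–(7) for LLR-valued outputs and targets; β is left as a free constant since its value is not fixed here.

import torch
import torch.nn.functional as F

def bsp_loss(outputs: torch.Tensor, targets: torch.Tensor, beta: float = 1.0):
    # Bipolar softplus loss, Eq. (6): softplus(x) = log(1 + exp(x))
    sgn = torch.sign(targets)                # Eq. (7)
    return F.softplus(-beta * sgn * outputs).mean() / beta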
C EXPERIMENT SETTINGS AND DETAILED RESULTS
C.1 MNIST CLASSIFICATION
The MNIST dataset of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. Each image contains 28× 28 monochrome pixels for one digit. The pixel values are converted to range (0, 1) with dataset normalization.
Two architectures are used in the experiments: 1) the LeNet-300-100 is a three-layer fully connected network with 300 and 100 hidden nodes, 2) the LeNet-5 architecture has two convolutional layers with 20 and 50 filters and two fully connected layers with 800 and 500 hidden nodes.
Data augmentation is used to randomly shift each image horizontally and vertically by 0 or 1 pixel. The batch size for training is set to 128, and the Adam optimizer (Kingma & Ba, 2014) is used with default parameters: α = 0.001, β1 = 0.9, β2 = 0.999, and ε = 10−8. A weight decaying setting of 4e-4 is used for both the LeNet-300-100 and LeNet-5 architectures in the corresponding cases. At least 20 runs with random seeds are carried out for each experiment case.
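For concreteness, a sketch of this setup in PyTorch; the RandomCrop transform approximates the 0–1 pixel shift, the normalization constants are the standard MNIST statistics, and the sequential model stands in for LeNet-300-100.

import torch
import torch.nn as nn
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.RandomCrop(28, padding=1),        # small random shift
    transforms.ToTensor(),                       # pixel values in (0, 1)
    transforms.Normalize((0.1307,), (0.3081,)),  # standard MNIST statistics
])
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=True, download=True, transform=transform),
    batch_size=128, shuffle=True)

model = nn.Sequential(nn.Flatten(),
                      nn.Linear(784, 300), nn.ReLU(),
                      nn.Linear(300, 100), nn.ReLU(),
                      nn.Linear(100, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999),
                             eps=1e-8, weight_decay=4e-4)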
Table 1 summarizes the top-1 error rates, total number of parameters after pruning, and compression rate for different methods with the LeNet-300-100 architecture. The output layer uses a fixed pruning setting of 0.75, and the hidden layers use pruning settings θk = 1.0 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 1 are obtained using the largest pruning threshold.
Compared with LLR representation, results using softmax normalization have higher training errors and test errors. This indicates that LLR representation can mitigate the overfitting issues and improve accuracy in both training and testing. Figure 6 (left) also compares the effects of overfitting between softmax normalization and LLR representation with the LeNet-300-100 architecture. Using both LLR representation and weight decaying can yield more efficient networks than the iterative method from Han et al. (2015).
Table 2 summarizes the top-1 error rates, total number of parameters after pruning, and compression rate for different methods with the LeNet-5 architecture. Fully-connected layers use a fixed pruning setting of 1.25. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.1 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 2 are obtained using the largest pruning threshold. Weight sharing and the inherent structural regularization of CNNs further mitigate the overfitting issues in training. Using LLR representation and weight decaying, the accuracy of the pruned network is even better than the accuracy in training and testing. The snapshot-based method generates efficient networks with 31K parameters and the state-of-the-art
performance, better than the ones using the iterative method from Han et al. (2015). Figure 6 (right) also compares the effects of overfitting between softmax normalization and LLR representation.
For comparison purposes, the results using the AdamW algorithm from Loshchilov & Hutter (2018) are summarized in Tables 3 and 4 for the LeNet-300-100 and LeNet-5 architectures, respectively. The accuracy differences between training and testing are always larger than the previous results using weight decaying and the original Adam algorithm. The results show that using the AdamW algorithm may generate overparameterized networks. Thus, the snapshot-based pruning method can be a valuable tool for evaluating optimization algorithms.
C.2 CIFAR-10 CLASSIFICATION
The CIFAR-10 datasets (Krizhevsky et al., 2009) consists of 50,000 images in the training set and 10,000 images in the test set. Each sample contains 32× 32 color images drawn from 10 classes. The data batch size of 128 is used for training. For data augmentation, the standard mirroring and shifting scheme is used. The SGD optimizer is used with initial learning rate of 0.05 and momentum of 0.9, the learning rate is multiplied by 0.1 after 100 and 150 epochs. Weight decaying settings of 6e-4 and 5e-4 are used for the ResNet and DenseNet architectures, respectively. Initial training runs use 200 epochs, while for non-weight parameter adjustment after pruning only 20 epochs are needed. At least 10 runs with random seeds are carried out for each experiment case.
The top-1 error rates, total number of parameters after pruning, and compression ratio for CIFAR-10 dataset with the ResNet architectures are summarized in Table 5. Fully-connected layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5+0.05×k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 5 are
obtained using the largest pruning threshold. For all cases, using LLR representation yields better performance and less parameters after pruning. For the ResNet-56 case, using LLR representation with weight decaying reduces the total number of parameters to about 200K without significant loss of performance.
The results in Table 6 show better performance for DenseNet architectures as compared with the ResNet architectures. Fully-connected layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 6 are obtained using the largest pruning threshold.
For DenseNet-60 with less than 90K parameters, the performance is comparable to ResNet-56 with 200K parameters. Therefore, overfitting issues with the DenseNet architecture are less prominent than with the ResNet architecture. Figure 7 summarizes the efficiency trade-offs for both ResNet and DenseNet architectures. Compared with the ResNet architecture, the initial DenseNet model sizes are larger, the effects of weight decaying are more prominent than those of softmax normalization, and the difference between softmax normalization and LLR representation for DenseNet is smaller.
C.3 CIFAR-100 CLASSIFICATION
The CIFAR-100 datasets (Krizhevsky et al., 2009) consists of 50,000 images in the training set and 10,000 images in the test set. Each sample contains 32 × 32 color images drawn from 100 classes. The 100 classes are grouped into 20 superclasses. The data batch size of 128 is used for training.
For data augmentation, the standard mirroring and shifting scheme is used. The SGD optimizer is used with initial learning rate of 0.05 and momentum of 0.9, the learning rate is multiplied by 0.1 after 100 and 150 epochs. A weight decaying setting of 5e-4 is used for both the ResNet and DenseNet architectures. Initial training runs use 200 epochs, while for non-weight parameter adjustment after pruning only 20 epochs are needed. At least 20 runs with random seeds are carried out for each experiment case.
The top-1 error rates for trained and pruned networks, total number of parameters after pruning, and compression ratio for CIFAR-100 dataset are summarized in Table 7 and 8. Fully-connected
layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 7 and 8 are obtained using the median pruning threshold. | 1. What is the main contribution of the paper regarding neural architecture learning?
2. What are the strengths and weaknesses of the paper's experimental setup and results?
3. Do you have any questions or concerns about the paper's claims and conclusions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific aspects of the paper that the reviewer would like to see improved or expanded upon? | Summary Of The Paper
Review | Summary Of The Paper
This paper addresses the problem of learning efficient neural architectures: the addressed issue is relevant and of enough interest to this audience. The article is well written and understandable.
Review
At p1, abstract "weight decaying and structural regularization can also effectively reduce overfitting": parameter regularization (often, L2) is well known no avoid a model to overfit on the training samples. Still in the abstract, teh authors claim that "The second contribution is that we discovered a well-trained network without overfitting can be effectively pruned using a simple snapshot-based procedure – pruning unimportant weights and connections first, and simply adjust remaining non-weight parameters using the BP algorithm." : it is not clear to me where in the experimental section this claim is proven. At p 2 "Overfitting may improve training accuracy, but it can cause overparameterization": indeed, it is known that some architectures need to be overparametrized to learns specific datasets, however once the task is learned, pruning is commonly used to reduce the model footprint. At p 5, sec 2.5 "Furthermore, the underlying principles regarding the retraining procedure is not well understood": this statement needs t be better motivated, as it is is groundless. Either motivate this claim or tone it down. Concerning the experimental setup in Sec 3, its main problem is that it is only lightly discussed, so it is hard to express an opinion of merit on these numbers. Concerning Fig 3.1, I assume the authors indicate with "Softmax" a reference scheme without weight decay and traditional softmax as activation function. I also assume that LLR is as Softmax, yet with LLR in place of SM and "Weight Decay" is akin to Softmax, yet with an (L2?) regularization component in the cost function J. About Fig1-left, it is true that LLR yields lower deviation than SM, however the difference is very small: I d like to see corresponding deviations here. About Fig1-right, the result of the experiment is correct according to existing literature and my previous experiment, however it is as correct as predictable, so it does not add much to the existing body of knowledge. Concerning Fig 2, it is not clear how the authors measure the model sparsity, as they do not seem to perform any thresholding as for example in Han et al.: did the authors apply some equivalent form of thresholding, and in the case with what threshold value ? And the dots in the graph, what do they represent ? Epochs of training ? That is confusing as I thought the proposed approach was one-shot ? I will assume the authors bin the wights and drop those below a threshold. The most interesting result seems to lie in Fig 2-right, where we see that when weight decay is employed, the LLR yields better test accuracy for marginally lower sparsity only: I would like to see tis result discussed more in depth. Concerning Fig 3 (including the updated Densenet), also here the graphs clearly show some advantages of LLR over SM, that is promising, however without a bit of context on this experiment it is hard to judge on its merit. And why two different regularization factors of 5*10-4 and 10^-4 ? The authors shall focus more on this aspect of their work. |
ICLR | Title
On the Efficiency of Deep Neural Networks
Abstract
The efficiency of neural networks is essential in large-scale deployment scenarios such as mobile applications, internet of things, and edge computing. For a given performance requirement, an efficient neural network should use the simplest network architecture with a minimal number of parameters and connections. In this paper, we introduce a framework to analyze and obtain efficient neural networks. In summary, our main contributions are three-fold. Our first contribution is the subnetwork hypothesis to address overfitting issues and help explain the effectiveness of several key techniques in training efficient networks: 1) softmax normalization in output layers may be one major cause of overparameterization; 2) using log likelihood ratio representation in output layers can reduce overfitting; 3) weight decaying and structural regularization can also effectively reduce overfitting. The second contribution is a simple and effective snapshot-based procedure to prune a well-trained network that minimizes overfitting – pruning unimportant weights and connections first, and simply adjust remaining non-weight parameters using the backpropagation algorithm. Besides, the snapshot-based pruning method can also be used to evaluate the efficiency of trained networks. Finally, we hypothesize that there exist lower bounds of the total number of bits for representing parameters and connections regarding performance metrics for a given optimization problem. Rather than focusing on improving the sole accuracy metric with more complex network architectures, it is also important to explore the trade-offs between accuracy and the total number of representation bits, when comparing different network architectures and implementations.
1 INTRODUCTION
Deep learning has achieved tremendous success in large-scale machine learning systems, such as big-data analytics (Najafabadi et al., 2015), billion-parameter generative models for natural language processing (Brown et al., 2020; Radford et al., 2019), and computer vision for self-driving cars (Grigorescu et al., 2020). A general trend for recent success is the use of neural networks of ever-increasing model sizes and their exponentially increasing computation power requirement. Training these gigantic neural network models requires tens of thousands of parallel computing units inside dedicated computer clusters with extremely high transient storage capacity and data synchronization bandwidth. Consequently, some of the state-of-the-art models are only accessible to very few researchers in the machine learning community.
On the other hand, large-scale deployment of machine learning applications in low-power scenarios, such as mobile applications, internet-of-things (IoT), and edge computing, has put more stringent requirements on the efficiency of neural network models. For a given problem and performance metrics, efficient neural network models should have a minimal number of weights and connections, simple network topology and architecture suitable for low-power computing devices, and low data bandwidth and transient storage requirements. It is important to investigate the model efficiency problem to bridge the performance gap between petascale high-end models and low-power neural architectures for large-scale deployment. However, methods and principles for obtaining efficient deep neural networks have not yet been thoroughly studied.
In this paper, we introduce a framework to analyze and obtain efficient deep neural networks. In particular, we identify several key issues in training efficient deep neural networks and propose a new model compression procedure to prune redundant weights and connections. One important in-
sight of our study is the high correlation between overfitting and model efficiency. Overfitting may improve training accuracy, but it can cause overparameterization. In Section 2, we show that softmax output layers can introduce non-deterministic effects to the backpropagation algorithm, yielding redundant subnetworks with exploding numbers of parameters. To solve this problem, we propose the log likelihood ratio (LLR) representation for output layers. We also investigate potential mechanisms for weight decaying and structural regularization to reduce overfitting. Furthermore, we propose a simple and effective snapshot-based pruning procedure to obtain efficient deep neural networks. We empirically validate this novel approach in Section 3 using various deep learning architectures including LeNet, ResNet, and DenseNet, on the MNIST, CIFAR-10, and CIFAR-100 datasets. Based on the empirical results, we further discuss the model efficiency regarding information cost of model representation. Section 4 reviews prior work in regularization, overfitting, and model compression, followed by Section 5 that concludes the paper.
2 EFFICIENT NEURAL NETWORKS
A fundamental assumption of our analysis is that a complex neural network can be decomposed into subnetworks that are responsible for different operation modes. In other words, the complex nonlinear function of a neural network can be decomposed into groups of sub-functions. Each group of sub-functions represents one mode of operation. In this way, the efficiency of a neural network highly depends on the composition and correlation between these groups of sub-functions. Thus, overfitting may be viewed as forming redundant subnetworks that reduce the efficiency of trained networks.
In this section, we show that a critical step for obtaining efficient neural networks is to eliminate redundant subnetworks by minimizing overfitting. We first analyze the overfitting issues caused by redundant subnetworks and describe potential mitigating mechanisms. Several hypotheses presented in this section will also be empirically validated using experiments in Section 3. Finally, we introduce a novel snapshot-based procedure to obtain efficient deep neural networks by pruning their unimportant weights and connections. This procedure is also used to analyze the efficiency of trained networks.
2.1 SOFTMAX NORMALIZATION
The softmax function, a.k.a. softargmax, is a normalization function often used as the last activation function of a neural network (Bishop, 2006). Let Z = {z0, z1, · · · , zi, · · · } denote the input vector; the softmax output vector Q = {q0, q1, · · · , qi, · · · } is defined as
q_i = \frac{\exp(z_i)}{\sum_k \exp(z_k)}    (1)
where qi ∈ [0, 1] and ∑i qi = 1. Thus, the normalized output vector can be interpreted as marginal probabilities. The softmax output can be naturally combined with the cross entropy function J = −∑i pi log qi, where pi is the target probability. The derivative of J with respect to zi takes the simple form qi − pi (Goodfellow et al., 2016). The simple probabilistic interpretation and derivative computation make the combination of softmax normalization and cross entropy loss a pervasive choice for multinomial classification problems. However, potential issues in using softmax normalization with the backpropagation (BP) algorithm have not been fully investigated.
Suppose a neural network G can be decomposed into two or more smaller subnetworks G = {G0,G1, · · · ,Gm, · · · } with the same feature input X. The final activation Z is the superposition of the subnetwork activation before the softmax normalization in the output layer
Z = \sum_{m=0}^{M} Y_m = \sum_{m=0}^{M} f_m(X)    (2)
where fm is the non-linear function representing subnetwork Gm. The decomposition is done according to the final activation without considering intermediate hidden layers. The softmax normalization operation has the following properties regarding the relationship between subnetwork activations (see Appendix A).
1. If the subnetwork activations are linear offset versions of each other, such that Y0 = Y1 − β1 · · · = Ym − βm · · · , the normalization result of the whole network is equivalent to applying the softmax function to the activation of any subnetwork scaled by M : Q = softmax(MYm). Note that the offset between subnetwork activation Ym has no impact on the softmax output. If the activations Ym are linearly semi-correlated, the generalized softmax property is applicable, i.e., that Q ≈ softmax(MYm).
2. If the subnetwork activations are scaled versions of a principal subnetwork, such that Y1 = α1Y0, Y2 = α2Y0, · · · with 1 ≥ α1 ≥ α2 ≥ · · · , the normalization operation is equivalent to applying the softmax function to the scaled principal subnetwork: Q = softmax(SY0), where S = 1 + α1 + α2 + · · · . The softmax normalization allows proportional integration of information. A single subnetwork with very strong activation (higher prediction probabilities) can dominate over other subnetworks with weak activations. If there are no dominant subnetworks, the total number of contributing subnetworks may be large and the whole network tends to be overparameterized.
In short, the softmax function can act as a super combinator for different modes of the neural network, summing and amplifying weak subnetwork activations. This could partially explain why deep neural networks are so expressive that they are suitable for diverse types of problems. However, when there are redundant subnetworks that produce linearly correlated activations, the softmax normalization function makes them indistinguishable from each other. The linearly correlated subnetworks potentially lead to overfitting and overparameterization. We have the following hypothesis regarding the effects of such redundant subnetworks: Hypothesis 1: For deep neural networks, the existence of redundant subnetworks combined with softmax normalization can lead to overfitting and overparameterization when training with the backpropagation algorithm.
Because the derivative of the cross entropy loss is linear with regard to the softmax output Q and the target P, and softmax normalization makes it impossible to differentiate between the effects of different subnetworks, the BP algorithm will fine-tune all the parameters without penalizing any individual subnetwork. Therefore, the initialization of weights may create redundant subnetworks that have non-deterministic effects on the training process. For example, Mishkin & Matas (2016) demonstrated that the initialization of weights can affect test accuracy. Such behaviors and the existence of redundant subnetworks will be validated by the empirical results in Section 3.
2.2 LLR REPRESENTATION
The softmax normalization is mainly used to convert neural network outputs to probabilities. However, the softmax normalization allows linearly correlated subnetwork activations and potentially introduces overfitting. Therefore, it is desirable to avoid softmax normalization in output layers. It turns out that using the log likelihood ratio (LLR) representation in output layers can avoid normalization and overfitting issues. Given a binary random variable X and P1(X) = {probability X is true}, the LLR for X can be defined as
\mathrm{LLR}(X) = \log \frac{P_1(X)}{1 - P_1(X)}    (3)
Since neural networks can model arbitrary non-linear functions, we can adopt LLR representation for each component of the outputs Y and target labels T . For both multi-class and multi-label classification, the problem can be regarded as multiple binary regression problems adopting the LLR representation for each class. Therefore, output normalization across different classes is not needed, but loss functions need to be changed accordingly – we introduce the bipolar softplus (BSP) loss function as defined in Appendix B.1. We demonstrate that the LLR representation combined with the BSP loss function does not need normalization and avoids the introduction of redundant subnetworks. The choice of loss functions is not mandatory and there may be better alternative loss functions. The optimization of loss functions will be addressed in future study. In this paper, we use empirical results to demonstrate the effectiveness and behavior of this novel scheme. We introduce the following hypothesis regarding the LLR representation: Hypothesis 2: For classification problems with deep neural networks, using the LLR representation for the output layer and the target labels can reduce overfitting and avoid overparameterization compared with softmax normalization.
It is worth emphasizing that the LLR representation has clear physical meanings, which could help the explainability of neural networks. LLR values are symmetrical and centered around zero, which can be regarded as a natural normalization point. Note that LLR values have range (−∞,+∞) and a large magnitude means higher confidence in prediction. Thus, by controlling the LLR magnitude of the target labels, we can introduce regularization to network outputs.
2.3 WEIGHT DECAYING
In previous discussion, potential issues in normalization and representation are analyzed by decomposing the activation in the output layer. In a similar fashion, we can also decompose the weights and activation in each hidden layer as follows.
Suppose the feature inputs of a layer can be represented as X = \sum_{m=0}^{M-1} X_m, and its weight matrix W can be decomposed as W = \sum_{n=0}^{N-1} W_n, where the W_n are non-zero weight components. The activation Z can then be decomposed as

Z = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} A_{m,n} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} (X_m W_n + B)    (4)
where B is the bias vector. When the rectified linear unit (ReLU) non-linear function is adopted, only those activations larger than zero in Eq. 4 are effective. For a given feature input Xk, if there are multiple Ak,n components that have all positive elements, the weight components Wn can be effectively combined to reduce the total number of parameters. The redundant weights components may be different for different input features. The existence of such redundant weight components may become the source of overfitting and overparameterization. The large ones of these redundant Ak,n components also tend to be working in the linear regime of the ReLU function, which effectively reduces the non-linear behavior of the network. To reduce overfitting and redundancy, the weights should have relatively small magnitudes working in the non-linear regime of the ReLU function, hence we have the following hypothesis regarding weight decaying (L2 regularization). Hypothesis 3: Limiting the magnitude of weights using weight decaying can reduce overfitting and overparameterization in deep neural networks when the ReLU activation function is used.
This hypothesis could explain why regularizing weights is an effective technique to improve training performance. Weight decaying should also be separated from loss regularization, which was first discussed in Loshchilov & Hutter (2018). In our experiments, however, their AdamW algorithm turns out to improve the training accuracy by increasing overfitting and overparameterization as shown in Section 3.2.
2.4 STRUCTURAL REGULARIZATION
The underlying assumption in the analysis of weight decaying is that the outputs of fully-connected subnetworks can be freely superimposed on each other. Thus, if the combination of subnetworks is restricted, overfitting issues could be mitigated. A common technique for this purpose is structural restriction of the subnetworks. Some examples are listed below.
1. Structural pruning - Various techniques to selectively remove connections from the whole network in training have been proposed and shown to reduce overfitting, such as Wan et al. (2013). In a sense, stochastic gradient descent (SGD) can also be regarded as adopting random structural pruning.
2. Weight sharing - By sharing weights and forcing regular network structures, neural networks become more effective and easier to train. Convolutional neural network (CNN) can be regarded as a prominent type which is often used as feature extraction layers.
3. Micro-architectural design - By adopting certain topology patterns between or within neural network layers, the resulting networks are confined to subsets of fully-connected networks, hence their overfitting issues are mitigated. Skip connections, for example, have been shown to improve training speed and performance (He et al., 2016; Huang et al., 2017).
Many existing optimization techniques for training neural networks could be partially explained and further analyzed using the subnetwork analysis. The underlying principle is that by reducing
the initial functional space, the optimization problem becomes less difficult and easier to converge, which explains why micro-architecture design can have significant impact on the performance of neural networks. In Section 3, the effects of structural regularization are partially demonstrated by comparing the efficiency of different network architectures on the same dataset.
2.5 SNAPSHOT-BASED PRUNING
It is well known that neural networks can be made more efficient in terms of computation and storage requirements by pruning some of the unimportant weights. For deep neural networks, the iterative pruning and retraining procedure in Han et al. (2015) has been used for generating efficient neural networks for low-power applications. However, the iterative procedure requires extra computing power and processing time. Furthermore, the iterative procedure often requires manually finetuning pruning thresholds. We discuss two important aspects of pruning neural networks in the following.
1. Important weights - Deciding which weights are important is the first key issue. In general, weights with smaller magnitudes are considered unimportant and can be pruned, but this may not always be the case for different types of components in various network architectures. For example, shared weights in convolutional layers may be more important than weights in fully connected layers. Even the importance of weights in the same layer may not be correlated with their magnitudes.
2. Retrain requirement - After pruning its weights and connections, a pruned neural network usually needs to be adjusted. It is not clear which aspects of the network need to be modified. For the iterative pruning and retrain process, the weights and biases between initial and final iterations may be completely different so that it is hard to analyze the iterative retraining mechanism.
By analyzing experimental data from extensive empirical studies with different datasets and various architectures, we have the following observation regarding these two key aspects for pruning neural networks.
1. Weight distribution - If the neural network is well trained such that overfitting is minimized, the weight magnitude distribution correlates better with the weights’ importance. In other words, weights with smaller magnitudes around zero can be effectively pruned. Different layers and types of components may need different pruning thresholds but they can be easily adjusted using macro network attributes.
2. Essential network - The important weights together with their corresponding connections define the essence of a trained network; therefore, they should be kept unchanged as a snapshot. Only the biases need to be adjusted. Other non-weight parameters, such as batch normalization parameters, may also need to be adjusted as well.
In short, iterative pruning and retraining may not be necessary when a neural network is well-trained. The first step in obtaining efficient neural networks is to adopt efficient training techniques, such as LLR representation, weight decaying, and structural regularization. After pruning unimportant weights and connections from trained snapshots, the next step is to simply adjust remaining nonweight parameters using the BP algorithm. For most architectures, because most of the parameters are connection weights, adjusting the non-weight parameters requires much fewer iterations than the initial training process – usually only a few epochs are enough.
Conversely, the efficiency and quality of a trained network can be evaluated with the effectiveness of parameter pruning. Refinement of optimization algorithms may also be further examined using this pruning procedure. If a network is overparameterized, the performance of its pruned versions deteriorates dramatically as the total number of parameters is reduced. The key discovery here is the high correlation between overfitting and the efficiency of neural networks.
If trained neural networks are not efficient enough initially, combining iterative techniques with the proposed snapshot-based pruning method could be beneficial. For very large networks, it should be noted that using all the methods analyzed in this section may not be enough to yield efficient deep neural networks using a single-shot training procedure.
3 EXPERIMENTS
We empirically analyze the model efficiency trade-offs in deep neural networks as well as the overfitting issues in training neural networks to validate the subnetwork assumption and the analysis of various mitigating methods in previous section. We also demonstrate the effectiveness of the proposed snapshot-based pruning procedure in obtaining and evaluating efficient neural networks.
3.1 METHODOLOGY
The trade-offs between accuracy and total number of parameters are analyzed with various architectures and datasets using the following procedure: 1) the neural network is first trained using different hyper-parameter settings and output representation; 2) the trained networks are pruned using different threshold settings, and non-weight parameters are retrained using the BP algorithm; 3) test accuracy and total number of parameters of the pruned networks are averaged over at least 10 different experiment runs using the same pruning settings.
The pruning thresholds are set according to the standard deviation of weights’ magnitudes. Two different thresholds are used for convolutional layers and linear layers, respectively. For example, the threshold value for convolutional layers is calculated by multiplying the pruning setting with the standard deviation of weights in all convolutional layers. Weights with magnitudes lower than the threshold value are pruned. In all cases, simple linearly-spaced pruning settings are used without further fine-tuning. However, optimization of the pruning settings is possible by taking into account the structural attributes of given network architectures.
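A minimal sketch of how such per-layer-type thresholds could be derived is shown below; using the standard deviation of all weights of a given module type and the specific module classes are assumptions based on the description above. The resulting thresholds would then be used to mask out weights with smaller magnitudes, as in the snapshot-based procedure of Section 2.5.

```python
import torch
import torch.nn as nn

def magnitude_thresholds(model, conv_setting, linear_setting):
    """One threshold per layer type: pruning setting times the standard
    deviation of the weights of that type."""
    conv_w, lin_w = [], []
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            conv_w.append(m.weight.detach().flatten())
        elif isinstance(m, nn.Linear):
            lin_w.append(m.weight.detach().flatten())
    thr_conv = conv_setting * torch.cat(conv_w).std() if conv_w else None
    thr_lin = linear_setting * torch.cat(lin_w).std() if lin_w else None
    return thr_conv, thr_lin
```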
Other hyper-parameter settings and detailed analysis of the results are included in Appendix C. In the following, we focus on the efficiency trade-offs using different architectures on several datasets.
3.2 MNIST CLASSIFICATION
We first conduct experiments on the MNIST dataset (LeCun et al., 1998) using the LeNet-300-100 and LeNet-5 architectures (LeCun et al., 2015). In Figure 1 (left), the distribution of weights when training with LLR representation is compared with the case of softmax normalization. Using LLR representation yields a better distribution of weights – the probability of small weights around zero is higher and these weights can be pruned with less impact on the performance. Furthermore, weight decaying can push the weights aggressively towards zero as shown in Figure 1 (right). While softmax normalization primarily affects output layers, the ReLU function may cause overfitting in all layers. Therefore, the effect of weight decaying is more prominent and effective as shown in Figure 1. This observation is consistent with the analysis and hypotheses from Section 2.
Figure 2 shows the trade-off curves for test errors vs. total number of effective parameters for all experiment results on the MNIST dataset. Each curve represents 20 trained networks with the same training settings, each point represents the average total number of weights and average top-1 errors of the pruned networks for each of the 10 different pruning settings. We can see that using LLR representation instead of softmax normalization can reduce the total number
of parameters for the same accuracy requirement. Using weight decaying also significantly improves the efficiency of the trained networks. Using both methods yields the most efficient neural networks with better performance than the ones using the iterative pruning approach from Han et al. (2015), as shown in Tables 1-2 in Appendix C.1. Compared with fully-connected networks, convolutional neural networks show better performance partially due to inherent structural regularization.
We found that the AdamW optimizer with weight decaying may increase training accuracy by increasing overfitting and yield less efficient networks, as demonstrated in Figure 3. Compared with previous results, the optimal pruned model sizes are dramatically increased and using weight decaying does not improve the efficiency of trained networks.
3.3 CIFAR-10 CLASSIFICATION
Several ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) architectures are used for experiments on the CIFAR-10 dataset. We compare ResNet with 20/32/56 layers to DenseNet with 40/60/100 layers and a growth rate k = 12. To reduce the total number of parameters, bottleneck layers are enabled for DenseNet. Further comparison and analysis of the overfitting issues are provided in Appendix C.2.
Figure 4 summarizes the experiment results on the CIFAR-10 dataset. A weight-decaying setting of 1e−4 is used with softmax normalization, which is the default setting to obtain the best accuracy results without pruning. For comparison purposes, experiments for LLR representation use a weight-decaying setting of 5e−4. Although using softmax yields the best training accuracy, the trained networks are overparameterized compared with those using LLR representation.
Therefore, the trade-off curves can be used to judge the efficiency of trained networks. Curves closer to the bottom-left region on the figure represent more efficient networks. Both ResNet and DenseNet architectures show similar trends in terms of efficiency. The trade-off curves for the same architecture with different initial model sizes seem to be bounded by a single theoretical curve. In the energy efficient region with small number of parameters, the error rate goes down rapidly with a small increase in the number of parameters; while in the high accuracy region, small
increases in accuracy require exponentially increasing numbers of parameters. In this regard, the ResNet-32 and the DenseNet-60 architectures may offer better alternative trade-offs in efficiency.
3.4 CIFAR-100 CLASSIFICATION
Figure 5 summarizes the experiment results using different ResNet and DenseNet architectures on the CIFAR-100 dataset. For all experiments using either softmax normalization or LLR representation, the same weight-decaying settings are used. In terms of efficiency trade-offs, we see a similar trend as before: linear increase in accuracy tends to require exponential increase in network capacity. We notice that the difference between softmax normalization and LLR representation is less prominent for the DenseNet architecture. One possible reason is that the BSP loss function is not yet fully optimized for the DenseNet architecture. Another possible reason is that the effects of weight decaying are more prominent than softmax normalization for the DenseNet architecture with larger initial model sizes.
4 RELATED WORK
Historically, deep neural networks using sigmoid or hyperbolic tangent activation functions were difficult to train using backpropagation (Rumelhart et al., 1986) due to the vanishing gradient problem (Glorot & Bengio, 2010). The introduction of the ReLU activation function (Nair & Hinton, 2010) greatly improves training speed for deep learning, yielding improved prediction accuracy in many new applications. However, using the ReLU activation function also tends to introduce overfitting issues as shown in this paper.
Regularization using modified loss functions can alleviate overfitting but with limited effects. Data augmentation is another method to reduce overfitting and improve generalization performance. Dropout, i.e., randomly selected units are dropped during training, was introduced in Hinton et al. (2012) and Srivastava et al. (2014) as an effective method to prevent overfitting. This idea was
extended to randomly dropping connections in Wan et al. (2013). Batch normalization is another method to reduce overfitting and improve training speed. Nevertheless, it is not completely clear how in principle these methods work, and they still can not fully eliminate overfitting issues in deep neural networks.
Overfitting is also related to the size of a neural network. Excessively large networks tend to introduce overfitting, and vice versa. It is also desirable to minimize model sizes for processing speed and systematic scaling purposes. A straightforward way for compressing over-parameterized neural networks is to prune trivial weights and retain only important connections, which is similar to the development of mammalian brain (Rauschecker, 1984). Pruning unimportant weights and connections after training is a common way to obtain efficient neural networks. Early work in Hassibi & Stork (1993); Hassibi et al. (1994); LeCun et al. (1989) uses the statistics from backpropagation to trim trained networks.
Recently, Han et al. (2015) proposed an iterative pruning and re-training procedure for efficient model compression. Similarly, pruning filters were proposed for convolutional networks in Li et al. (2016). However, iterative pruning and re-training is generally difficult, requiring extra processing time and resources. Furthermore, the iterative pruning process is opaque and requires trial and error in selecting pruning thresholds for parameters in different layers. The lottery ticket hypothesis from Frankle & Carbin (2018) tries to explain why the iterative pruning procedure can work, but the empirical results therein are not conclusive enough.
Alternatively, one-shot pruning techniques try to train sparse neural networks directly without iterative operations (Lee et al., 2019; Zhang & Stadie, 2019; Wang et al., 2020). However, Liu et al. (2019) observe that previous state-of-the-art pruning techniques may not provide better performance compared with randomly initialized networks. Their observations could be partially explained using our subnetwork analysis on structural regularization effects.
5 DISCUSSION AND FUTURE WORK
In this paper, we identify several important issues affecting overfitting in training deep neural networks. The key finding is that reducing overfitting is critical for obtaining efficient neural networks. It is demonstrated with several datasets and network architectures that a simple snapshot-based pruning procedure can generate efficient deep neural networks. However, more empirical validation results using other neural network architectures and larger datasets are required to further validate the proposed approach. Quantizing the parameters will further compress neural network models, which is not considered here for brevity but could be a natural extension in future work.
The snapshot-based retrain method can also be useful in real-world applications, where we only need to store pruned weights and connections, while biases and other optimization parameters can be restored using new datasets. This could be a very important optimization in cloud and edge computing applications. For transfer learning, neural networks trained with old datasets may be effectively retrained using new datasets, given that the underlying neural models are similar in nature.
We further analyze the efficiency trade-offs in training deep neural networks. For a given optimization problem with given objective and dataset, we should consider structural information, in addition to weights, as part of the representation cost of trained networks. For the small-scale network architectures used in this study, few extra parameters are needed to specify the network topology and connections. However, for large-scale networks, the parameters for describing the network topology and connections should also be included in the representation cost of models. When we compare neural network performance, domain-specific knowledge for designing network architectures should be considered as additional information. We hypothesize that there exist lower bounds on the total number of bits for representing parameters and connections with regard to given performance metrics for an optimization problem. Rather than focusing on improving the sole accuracy metric with more complex network architectures, we should also explore the trade-offs between accuracy and total number of representation bits when comparing different network architectures and implementations.
Several hypotheses regarding training efficient deep neural networks are put forward and empirically validated with experiments. Although rigorous proofs are not provided, we hope that they will encourage further discussion and research efforts on the trade-offs between model performance and complexity.
6 REPRODUCIBILITY STATEMENT
The authors of this paper regard it critical to ensure all empirical results in this paper can be consistently reproduced. For each experiment case with different parameters and optimization settings, the results are generated with at least 10 runs with different random seed initialization. We also crosscheck our results with different references. Furthermore, for some of the experiments, we have verified the results using several machine learning frameworks including PyTorch, TensorFlow, and Matlab Deep Learning Toolbox. Finally, we will publish the source codes for this work on GitHub and provide bug fixes and updates.
A THE SOFTMAX PROPERTIES
A.1 PROOF OF THE SOFTMAX PROPERTY
Given two linearly correlated vectors Y0 = (y0,0, y0,1, · · · , y0,K−1) and Y1 = (y1,0, y1,1, · · · , y1,K−1) of length K such that Y1 = Y0 + β1, where β1 is a scalar, then each component of the softmax normalization vector Q = softmax(Y0 + Y1) can be calculated as
$$q_n = \frac{\exp(y_{0,n} + y_{1,n})}{\sum_k \exp(y_{0,k} + y_{1,k})} = \frac{\exp(2y_{0,n} + \beta_1)}{\sum_k \exp(2y_{0,k} + \beta_1)} = \frac{\exp(\beta_1)\exp(2y_{0,n})}{\exp(\beta_1)\sum_k \exp(2y_{0,k})} = \frac{\exp(2y_{0,n})}{\sum_k \exp(2y_{0,k})}$$
If we add a third vector Y2 = (y2,0, y2,1, · · · , y2,K−1) such that Y2 = Y0 + β2, then each component of the softmax normalization vector Q = softmax(Y0 + Y1 + Y2) can be calculated as
$$q_n = \frac{\exp(2y_{0,n} + \beta_1 + y_{2,n})}{\sum_k \exp(2y_{0,k} + \beta_1 + y_{2,k})} = \frac{\exp(3y_{0,n} + \beta_1 + \beta_2)}{\sum_k \exp(3y_{0,k} + \beta_1 + \beta_2)} = \frac{\exp(\beta_1 + \beta_2)\exp(3y_{0,n})}{\exp(\beta_1 + \beta_2)\sum_k \exp(3y_{0,k})} = \frac{\exp(3y_{0,n})}{\sum_k \exp(3y_{0,k})}$$
Using the generalization for $Z = \sum_{m=0}^{M-1} Y_m$, where the $Y_m$ are linear offset versions of each other, such that $Y_0 = Y_1 - \beta_1 = Y_2 - \beta_2 = \cdots = Y_{M-1} - \beta_{M-1}$, we have
$$\mathrm{softmax}(Z) = \mathrm{softmax}(M Y_0) = \mathrm{softmax}(M Y_m)$$
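The property is easy to verify numerically; the following short NumPy check is an illustrative addition and not part of the original derivation.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

Y0 = np.array([1.3, -0.2, 0.8, 2.1])
offsets = [0.0, 0.7, -1.5]                   # the scalars beta_m
Ym = [Y0 + b for b in offsets]               # linearly correlated copies of Y0
Z = np.sum(Ym, axis=0)
M = len(Ym)
# summing the offset copies leaves the softmax of M * Y0 unchanged
assert np.allclose(softmax(Z), softmax(M * Y0))
```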
A.2 PROOF OF THE GENERALIZED SOFTMAX PROPERTY
Given two vectors Y0 = (y0,0, y0,1, · · · , y0,K−1) and Y1 = (y1,0, y1,1, · · · , y1,K−1) of length K such that Y1 = Y0 + β1, where β1 = (β1,0, β1,1, · · · , β1,K−1). Without any loss of generality, we assume that y0,0 ≤ y0,1 ≤ · · · ≤ y0,K−1 = Ymax. Define the maximal variation of β1 as δ1 such that |β1,k − β1,n| ≤ δ1 for any k, n ∈ Z with 0 ≤ k, n ≤ K − 1. If δ1 is insignificant relative to Y0, i.e.,
$$\exp(\delta_1) = o(\exp(Y_{max})), \qquad (5)$$
where o(·) is the little-o notation, we define Y0 and Y1 as linearly semi-correlated vectors. Then, each component of the softmax normalization vector Q = softmax(Y0 + Y1) can be calculated as
$$q_n = \frac{\exp(y_{0,n} + y_{1,n})}{\sum_k \exp(y_{0,k} + y_{1,k})} = \frac{\exp(2y_{0,n} + \beta_{1,n})}{\sum_k \exp(2y_{0,k} + \beta_{1,k})} = \frac{\exp(\beta_{1,n})\exp(2y_{0,n})}{\exp(\beta_{1,n})\sum_k \exp(2y_{0,k} + \beta_{1,k} - \beta_{1,n})} = \frac{\exp(2y_{0,n})}{\sum_k \exp(2y_{0,k} + \beta_{1,k} - \beta_{1,n})}.$$
Note that the denominator of qn is mainly determined by the largest components of Y0, and thus we have the following approximation
$$q_n \approx \frac{\exp(2y_{0,n})}{\sum_k \exp(2y_{0,k} + \delta_1)} \approx \frac{\exp(2y_{0,n})}{\sum_k \exp(2y_{0,k})}$$
If we add a third linearly semi-correlated vector Y2 = (y2,0, y2,1, · · · , y2,K−1) such that Y2 = Y0 + β2, where β2 = (β2,0, β2,1, · · · , β2,K−1) and |β2,k − β2,n| ≤ δ2 for any k, n ∈ Z with 0 ≤ k, n ≤ K − 1, then each component of the softmax normalization vector Q = softmax(Y0 + Y1 + Y2) can be calculated as
$$q_n = \frac{\exp(2y_{0,n} + \beta_{1,n} + y_{2,n})}{\sum_k \exp(2y_{0,k} + \beta_{1,k} + y_{2,k})} = \frac{\exp(3y_{0,n} + \beta_{1,n} + \beta_{2,n})}{\sum_k \exp(3y_{0,k} + \beta_{1,k} + \beta_{2,k})} = \frac{\exp(\beta_{1,n} + \beta_{2,n})\exp(3y_{0,n})}{\exp(\beta_{1,n} + \beta_{2,n})\sum_k \exp(3y_{0,k} + \beta_{1,k} - \beta_{1,n} + \beta_{2,k} - \beta_{2,n})} \approx \frac{\exp(3y_{0,n})}{\sum_k \exp(3y_{0,k} + \delta_1 + \delta_2)} \approx \frac{\exp(3y_{0,n})}{\sum_k \exp(3y_{0,k})}$$
Using the generalization for $Z = \sum_{m=0}^{M-1} Y_m$, where the $Y_m$ are linearly semi-correlated with each other and $Y_0 = Y_1 - \beta_1 = Y_2 - \beta_2 = \cdots = Y_{M-1} - \beta_{M-1}$, we have
$$\mathrm{softmax}(Z) \approx \mathrm{softmax}(M Y_0) \approx \mathrm{softmax}(M Y_m)$$
Note that the above relation holds as long as the variations in the βm are insignificant according to (5), while the magnitudes of the βm do not matter.
B LOSS FUNCTIONS FOR LLR REPRESENTATION
B.1 BIPOLAR SOFTPLUS LOSS
Given a set of N neural network outputs {Yn} and corresponding targets {Tn}, the bipolar softplus (BSP) loss is defined as
$$\mathrm{BSP}(Y, T) = \frac{1}{\beta N} \sum_{n=0}^{N-1} \log\!\left(1 + e^{-\beta\, \mathrm{sgn}(T_n)\, Y_n}\right) \qquad (6)$$
where β is a constant and sgn(x) returns the sign of x as
$$\mathrm{sgn}(x) = \begin{cases} 1, & x > 0 \\ 0, & x = 0 \\ -1, & x < 0 \end{cases} \qquad (7)$$
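A direct transcription of Equations (6) and (7) into PyTorch could look as follows; the bipolar ±1 target encoding per output unit and the reduction over all N outputs are assumptions based on the definition above, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def bsp_loss(outputs, targets, beta=1.0):
    """Bipolar softplus loss of Eq. (6); targets are expected in {-1, 0, +1}."""
    z = -beta * torch.sign(targets) * outputs
    # log(1 + exp(z)) evaluated stably via softplus
    return F.softplus(z).sum() / (beta * outputs.numel())
```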
C EXPERIMENT SETTINGS AND DETAILED RESULTS
C.1 MNIST CLASSIFICATION
The MNIST dataset of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. Each image contains 28× 28 monochrome pixels for one digit. The pixel values are converted to range (0, 1) with dataset normalization.
Two architectures are used in the experiments: 1) the LeNet-300-100 is a three-layer fully connected network with 300 and 100 hidden nodes, 2) the LeNet-5 architecture has two convolutional layers with 20 and 50 filters and two fully connected layers with 800 and 500 hidden nodes.
Data augmentation is used to randomly shift each image horizontally and vertically by 0 or 1 pixel. The batch size for training is set to 128, and the Adam optimizer (Kingma & Ba, 2014) is used with default parameters: α = 0.001, β1 = 0.9, β2 = 0.999, and ε = 10^−8. A weight decaying setting of 4e-4 is used for both the LeNet-300-100 and LeNet-5 architectures in corresponding cases. At least 20 runs with random seeds are carried out for each experiment case.
Table 1 summarizes the top-1 error rates, total number of parameters after pruning, and compression rate for different methods with the LeNet-300-100 architecture. The output layer uses a fixed pruning setting of 0.75, and the hidden layers use pruning settings θk = 1.0 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 1 are obtained using the largest pruning threshold.
Compared with LLR representation, results using softmax normalization have higher training errors and test errors. This indicates that LLR representation can mitigate the overfitting issues and improve accuracy in both training and testing. Figure 6 (left) also compares the effects of overfitting between softmax normalization and LLR representation with the LeNet-300-100 architecture. Using both LLR representation and weight decaying can yield more efficient networks than the iterative method from Han et al. (2015).
Table 2 summarizes the top-1 error rates, total number of parameters after pruning, and compression rate for different methods with the LeNet-5 architecture. Fully-connected layers use a fixed pruning setting of 1.25. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.1 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 2 are obtained using the largest pruning threshold. Weight sharing and inherent structural regularization of CNN further mitigate the overfitting issues in training. Using LLR representation and weight decaying, the accuracy of the pruned network is even better than the accuracy in training and testing. The snapshot-based method generates efficient networks with 31K parameters and the state-of-the-art
performance, better than the ones using the iterative method from Han et al. (2015). Figure 6 (right) also compares the effects of overfitting between softmax normalization and LLR representation.
For comparison purposes, the results using the AdamW algorithm from Loshchilov & Hutter (2018) are summarized in Tables 3 and 4 for the LeNet-300-100 and LeNet-5 architectures, respectively. The accuracy differences between training and testing are always larger than previous results using weight decaying and the original Adam algorithm. The results show that using the AdamW algorithm may generate overparameterized networks. Thus, the snapshot-based pruning method can be a valuable tool for evaluating optimization algorithms.
C.2 CIFAR-10 CLASSIFICATION
The CIFAR-10 dataset (Krizhevsky et al., 2009) consists of 50,000 images in the training set and 10,000 images in the test set. Each sample is a 32 × 32 color image drawn from one of 10 classes. The data batch size of 128 is used for training. For data augmentation, the standard mirroring and shifting scheme is used. The SGD optimizer is used with an initial learning rate of 0.05 and momentum of 0.9, and the learning rate is multiplied by 0.1 after 100 and 150 epochs. Weight decaying settings of 6e-4 and 5e-4 are used for the ResNet and DenseNet architectures, respectively. Initial training runs use 200 epochs, while for non-weight parameter adjustment after pruning only 20 epochs are needed. At least 10 runs with random seeds are carried out for each experiment case.
The top-1 error rates, total number of parameters after pruning, and compression ratio for CIFAR-10 dataset with the ResNet architectures are summarized in Table 5. Fully-connected layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5+0.05×k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 5 are
obtained using the largest pruning threshold. For all cases, using LLR representation yields better performance and less parameters after pruning. For the ResNet-56 case, using LLR representation with weight decaying reduces the total number of parameters to about 200K without significant loss of performance.
The results in Table 6 show better performance for DenseNet architectures as compared with the ResNet architectures. Fully-connected layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 6 are obtained using the largest pruning threshold.
For DenseNet-60 with less than 90K parameters, the performance is comparable to ResNet-56 with 200K parameters. Therefore, overfitting issues with the DenseNet architecture are less prominent than with the ResNet architecture. Figure 7 summarizes the efficiency trade-offs for both ResNet and DenseNet architectures. Compared with the ResNet architecture, the initial DenseNet model sizes are larger, the effects of weight decaying are more prominent than the softmax normalization, and the difference between softmax normalization and LLR representation for DenseNet is smaller.
C.3 CIFAR-100 CLASSIFICATION
The CIFAR-100 dataset (Krizhevsky et al., 2009) consists of 50,000 images in the training set and 10,000 images in the test set. Each sample is a 32 × 32 color image drawn from one of 100 classes. The 100 classes are grouped into 20 superclasses. The data batch size of 128 is used for training.
For data augmentation, the standard mirroring and shifting scheme is used. The SGD optimizer is used with an initial learning rate of 0.05 and momentum of 0.9, and the learning rate is multiplied by 0.1 after 100 and 150 epochs. A weight decaying setting of 5e-4 is used for both the ResNet and DenseNet architectures. Initial training runs use 200 epochs, while for non-weight parameter adjustment after pruning only 20 epochs are needed. At least 20 runs with random seeds are carried out for each experiment case.
The top-1 error rates for trained and pruned networks, total number of parameters after pruning, and compression ratio for CIFAR-100 dataset are summarized in Table 7 and 8. Fully-connected
layers use a fixed pruning setting of 0.75. For convolutional layers, the pruning settings are set as θk = 0.5 + 0.05 × k, where k = 0, 1, · · · , 9. The total numbers of pruned parameters and compression ratio in Table 7 and 8 are obtained using the median pruning threshold.
1. What is the focus of the paper regarding deep neural networks?
2. What are the strengths of the proposed approach, particularly in its structure and idea?
3. What are the weaknesses of the paper, especially regarding experimentation and justification of claims?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper tries to obtain an efficient deep neural network under the assumption that the deep neural network consists of several decomposed subnetworks. To this end, various methods are mentioned to remove redundant parts of the subnetworks. The methods include using the log-likelihood ratio layer instead of the softmax, analyzing the weight decay, and model structure restriction. The paper empirically verifies their hypothesis using three small datasets.
Review
** Strength **
The paper is well-structured and the main idea is clearly demonstrated.
The paper tries to interpret the deep neural network efficiency via subnetwork decomposition. The reviewer thinks this kind of approach is very interesting.
** Weakness **
The paper empirically verifies their idea. One concern is that the experiments are performed on small datasets such as MNIST/CIFAR10/CIFAR100. The reviewer thinks the paper can be enhanced if the authors add some experiments with large datasets such as ImageNet.
The paper claims that a complex neural network can be decomposed into subnetworks. It would be great if the paper could clearly justify this argument, e.g., with mathematical background or by visualizing the loss surface of the model.
The authors emphasize in the main text that the efficiency of the trained network can be judged through pruning, but the related results are reported only for the MNIST dataset, and only in the Appendix. It would be better if this part were improved.
ICLR
Title
Chameleon: Learning Model Initializations Across Tasks With Different Schemas
Abstract
Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization. Recent work shows that an initial parameter set can be learned from a population of supervised learning tasks that enables a fast convergence for unseen tasks even when only a handful of instances is available (model-agnostic meta-learning). Currently, methods for learning model initializations are limited to a population of tasks sharing the same schema, i.e., the same number, order, type, and semantics of predictor and target variables. In this paper, we address the problem of meta-learning weight initialization across tasks with different schemas, for example, if the number of predictors varies across tasks, while they still share some variables. We propose Chameleon, a model that learns to align different predictor schemas to a common representation. In experiments on 23 datasets of the OpenML-CC18 benchmark, we show that Chameleon can successfully learn parameter initializations across tasks with different schemas, presenting, to the best of our knowledge, the first cross-dataset few-shot classification approach for unstructured data.
1 INTRODUCTION
Humans require only a few examples to correctly classify new instances of previously unknown objects. For example, it is sufficient to see a handful of images of a specific type of dog before being able to classify dogs of this type consistently. In contrast, deep learning models optimized in a classical supervised setup usually require a vast number of training examples to match human performance. A striking difference is that a human has already learned to classify countless other objects, while parameters of a neural network are typically initialized randomly. Previous approaches improved this starting point for gradient-based optimization by choosing a more robust random initialization (He et al., 2015) or by starting from a pretrained network (Pan & Yang, 2010). Still, models do not learn from only a handful of training examples even when applying these techniques. Moreover, established hyperparameter optimization methods (Schilling et al., 2016) are not capable of optimizing the model initialization due to the high-dimensional parameter space. Few-shot classification aims at correctly classifying unseen instances of a novel task with only a few labeled training instances given. This is typically accomplished by meta-learning across a set of training tasks, which consist of training and validation examples with given labels for a set of classes. The field has gained immense popularity among researchers after recent meta-learning approaches have shown that it is possible to learn a weight initialization across different tasks, which facilitates a faster convergence speed and thus enables classifying novel classes after seeing only a few instances (Finn et al., 2018). However, training a single model across different tasks is only feasible if all tasks share the same schema, meaning that all instances share one set of features in identical order. For that reason, most approaches demonstrate their performance on image data, which can be easily scaled to a fixed shape, whereas transforming unstructured data to a uniform schema is not trivial.
We want to extend popular approaches to operate invariant of schema, i.e., independent of order and shape, making it possible to use meta-learning approaches on unstructured data with varying feature spaces, e.g., learning a model from heart disease data that can accurately classify a few-shot task for diabetes detection that relies on similar features. Thus, we require a schema-invariant encoder that maps heart disease and diabetes data to one feature representation, which then can be used to train a single model via popular meta-learning algorithms like REPTILE (Nichol et al., 2018b).
We propose a set-wise feature transformation model called CHAMELEON, named after a REPTILE capable of adjusting its colors according to the environment in which it is located. CHAMELEON projects different schemas to a fixed input space while keeping features from different tasks but of the same type or distribution in the same position, as illustrated by Figure 1. Our model learns to compute a task-specific reordering matrix that, when multiplied with the original input, aligns the schema of unstructured tasks to a common representation while behaving invariant to the order of input features.
Our main contributions are as follows: (1) We show how our proposed method CHAMELEON can learn to align varying feature spaces to a common representation. (2) We propose the first approach to tackle few-shot classification for tasks with different schemas. (3) In experiments on 23 datasets of the OpenML-CC18 benchmark (Bischl et al., 2017) collection, we demonstrate how current meta-learning approaches can successfully learn a model initialization across tasks with different schemas as long as they share some variables with respect to their type or semantics. (4) Although an alignment makes little sense to be performed on top of structured data such as images which can be easily rescaled, we demonstrate how CHAMELEON can align latent embeddings of two image datasets generated with different neural networks.
2 RELATED WORK
Our goal is to extend recent few-shot classification approaches that make use of optimization-based meta-learning by adding a feature alignment component that casts different inputs to a common schema, presenting the first approach working across tasks with different schema. In this section, we will discuss various works related to our approach.
Research on transfer learning (Pan & Yang, 2010; Sung et al., 2018; Gligic et al., 2020) has shown that training a model on different auxiliary tasks before actually fitting it to the target problem can provide better results if training data is scarce. Motivated by this, few-shot learning approaches try to generalize to novel tasks with unseen classes given only a few instances by first meta-learning across a set of training tasks (Duan et al., 2017; Finn et al., 2017b; Snell et al., 2017). A task τ consists of predictor data Xτ , a target Yτ , a predefined training/test split τ = (X trainτ , Y train τ , X test τ , Y test τ ) and a loss function Lτ . Typically, an N -way K-shot problem refers to a few-shot learning problem where each task consists of N classes with K training samples per class.
Heterogeneous Transfer Learning tries to tackle a similar problem setting as described in this work. In contrast to regular Transfer Learning, the feature spaces of the auxiliary tasks and the actual task differ and are often non-overlapping (Day & Khoshgoftaar, 2017). Many approaches require co-occurence data i.e. instances that can be found in both datasets (Wu et al., 2019; Qi et al., 2011), rely on jointly optimizing separate models for each dataset to propagate information (Zhao & Hoi, 2010; Yan et al., 2016), or utilize meta-features (Feuz & Cook, 2015). Oftentimes, these approaches operate on structured data e.g. images and text with different data distributions for the tasks at hand (Li et al., 2019; He et al., 2019). These datasets can thus be embedded in a shared space with standard models such as convolutional neural networks and transformer-based language models. However, none of these approaches are capable of training a single encoder that operates across a meta-dataset of tasks with different schema for unstructured data.
Early approaches like (Fe-Fei et al., 2003) already investigated the few-shot learning setting by representing prior knowledge as a probability density function. In recent years, various works proposed new model-based meta-learning approaches which rapidly improved the state-of-the-art few-shot learning benchmarks. Most prominently, this includes methods which rely on learning an embedding space for non-parametric metric approaches during inference time (Vinyals et al., 2016; Snell et al., 2017), and approaches which utilize an external memory which stores information about previously seen classes (Santoro et al., 2016; Munkhdalai & Yu, 2017). Several more recent meta-learning approaches have been developed which introduce architectures and parameterization techniques specifically suited for few-shot classification (Mishra et al., 2018; Shi et al., 2019; Wang & Chen, 2020) while others try to extract useful meta-features from datasets to improve hyper-parameter optimization (Jomaa et al., 2019).
In contrast, Finn et al. (2017a) showed that an optimization-based approach, which solely adapts the learning paradigm can be sufficient for learning across tasks. Model Agnostic Meta-Learning (MAML) describes a model initialization algorithm that is capable of training an arbitrary model f across different tasks. Instead of sequentially training the model one task at a time, it uses update steps from different tasks to find a common gradient direction that achieves a fast convergence. In other words, for each meta-learning update, we would need an initial value for the model parameters θ. Then, we sample a batch of tasks T , and for each task τ ∈ T we find an updated version of θ using N examples from the task by performing gradient descent with learning rate α as in: θ′τ ← θ − α∇θLτ (fθ). The final update of θ with step size β will be:
$$\theta \leftarrow \theta - \beta \frac{1}{|T|} \nabla_\theta \sum_{\tau} L_\tau(f_{\theta'_\tau}) \qquad (1)$$
Finn et al. (2017a) state that MAML does not require learning an update rule (Ravi & Larochelle, 2016), or restricting their model architecture (Santoro et al., 2016). They extended their approach by incorporating a probabilistic component such that for a new task, the model is sampled from a distribution of models to guarantee a higher model diversification for ambiguous tasks (Finn et al., 2018). However, MAML requires to compute second-order derivatives, resulting in a computationally heavy approach. Nichol et al. (2018b) extend upon the first-order approximation given as an ablation by Finn et al. (2018), which numerically approximates Equation (1) by replacing the second derivative with the weights difference, s.t. the update rule used in REPTILE is given by:
$$\theta \leftarrow \theta + \beta \frac{1}{|T|} \sum_{\tau} (\theta'_\tau - \theta) \qquad (2)$$
which means we can use the difference between the previous and updated version as an approximation of the second-order derivatives to reduce computational cost. The serial version is presented in Algorithm (1).1 All of these approaches rely on a fixed schema, i.e. the same set of features with identical alignment across all tasks. However, many similar datasets only share a subset of their features, while oftentimes having a different order or representation e.g. latent embeddings for two different image datasets generated by training two similar architectures. Most current few-shot classification approaches sample tasks from a single dataset by selecting a random subset of classes; although it is possible to train a single meta-model on two different image datasets as shown by Munkhdalai & Yu (2017) and Tseng et al. (2020) since the images can be scaled to a fixed size. Further research demonstrates that it is possible to learn a single model across different output sizes (Drumond et al., 2020). Recently, a meta-dataset for few-shot classification of image tasks was also published to promote meta-learning across multiple datasets (Triantafillou et al., 2020). Optimizing a single model across various datasets requires a shared feature space. Thus, it is required to align the features which is achieved by simply rescaling all instances in the case of image data which is not trivial for unstructured data. Recent work relies on preprocessing images to a one-dimensional latent embedding with an additional deep neural network. The authors Rusu et al. (2019) train a Wide Residual Network (Zagoruyko & Komodakis, 2016) on the meta-training data of MiniImageNet (Vinyals et al., 2016) to compute latent embeddings of the data which are then used for few-shot classification, demonstrating state-of-the-art results.
Finding a suitable initialization for deep network has long been a focus of machine learning research. Especially the initialization of Glorot & Bengio (2010) and later He et al. (2015) which emphasize
1 Note that REPTILE does not require validation instances during meta-learning.
the importance of a scaled variance that depends on the layer inputs are widely used. Similar findings are also reported by Cao et al. (2019). Recently, Dauphin & Schoenholz (2019) showed that it is possible to learn a suitable initialization by optimizing the norms of the respective weights. So far, none of these methods tried to learn a common initialization across tasks with different schema.
We propose a novel feature alignment component named CHAMELEON, which enables state-of-the-art methods to learn how to work on top of tasks whose feature vector differ not only in their length but also their concrete alignment. Our model shares resemblance with scaled dot-product attention popularized by (Vaswani et al., 2017):
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_K}}\right) V \qquad (3)$$
where Q, K and V are matrices describing queries, keys and values, and dK is the dimensionality of the keys such that the softmax computes an attention mask which is then multiplied with the values V . In contrast to this, we pretrain the parametrized model CHAMELEON to compute a soft permutation matrix which can realign features across tasks with varying schema when multiplied with V instead of computing a simple attention mask.
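For reference, Equation (3) can be transcribed into a few lines of PyTorch; this snippet is added purely for illustration and does not depend on the rest of the architecture.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Eq. (3): softmax(Q K^T / sqrt(d_K)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ V
```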
Algorithm 1 REPTILE (Nichol et al., 2018b)
Input: Meta-dataset T = {(X1, Y1, L1), ..., (X|T|, Y|T|, L|T|)}, learning rate β
1: Randomly initialize parameters θ of model f
2: for iteration = 1, 2, ... do
3:   Sample task (Xτ, Yτ, Lτ) ∼ T
4:   θ′ ← θ
5:   for k steps = 1, 2, ... do
6:     θ′ ← θ′ − α ∇θ′ Lτ(Yτ, f(Xτ; θ′))
7:   end for
8:   θ ← θ + β(θ′ − θ)
9: end for
10: return parameters θ of model f
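A compact PyTorch sketch of this serial procedure is given below; the task-sampling helper, the use of Adam for the inner steps, and the hyperparameter values are assumptions, and the meta-update moves the initialization towards the adapted weights as in Nichol et al. (2018b).

```python
import copy
import torch

def reptile_train(model, sample_task, meta_iters=1000, inner_steps=5,
                  inner_lr=1e-3, meta_lr=0.01):
    """Serial REPTILE: adapt a copy of the model on one task, then move the
    initialization towards the adapted parameters."""
    for _ in range(meta_iters):
        X, Y, loss_fn = sample_task()                 # sample task tau ~ T
        fast = copy.deepcopy(model)                   # theta' <- theta
        opt = torch.optim.Adam(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                  # k inner update steps
            opt.zero_grad()
            loss_fn(Y, fast(X)).backward()
            opt.step()
        with torch.no_grad():                         # theta <- theta + beta * (theta' - theta)
            for p, q in zip(model.parameters(), fast.parameters()):
                p.add_(meta_lr * (q - p))
    return model
```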
3 METHODOLOGY
3.1 PROBLEM SETTING
We describe a classification dataset with vector-shaped predictors (i.e., no images, time series etc.) by a pair (X, Y) ∈ ℝ^{N×F} × {0, ..., C}^N, with predictors X and targets Y, where N denotes the number of instances, F the number of predictors and C the number of classes. Let D_F := ⋃_{N∈ℕ} ℝ^{N×F} × {0, ..., C}^N be the space of all such datasets with F predictors and D := ⋃_{F∈ℕ} D_F be the space of any such dataset. Let us also denote the space of all predictor matrices with F predictors by X_F := ⋃_{N∈ℕ} ℝ^{N×F} and all predictor matrices by X := ⋃_{F∈ℕ} X_F. Then a dataset τ = (X, Y) ∈ D equipped with a predefined training/test split, i.e. the quadruplet τ = (X_τ^{train}, Y_τ^{train}, X_τ^{test}, Y_τ^{test}), is called a task. A collection of such tasks T ⊂ D is called a meta-dataset. Similar to splitting a single dataset into a training and test part, one can split a meta-dataset T = T^{train} ∪̇ T^{test}. The schema of a task τ then describes not only the number and order, but also the semantics of predictor variables {p_1^τ, p_2^τ, ..., p_F^τ} in X_τ^{train}. Consider a meta-dataset of correlated tasks T ⊂ D, such that the predictor variables {p_1^τ, p_2^τ, ..., p_F^τ} of any individual task τ are contained in a common set of predictor variables {p_1, p_2, ..., p_K}. Methods like REPTILE and MAML try to find the best initialization for a specific model, in this work referred to as ŷ, to operate on a set T of similar tasks. However, every task τ has to share the same schema of common size K, where similar features shared across tasks are in the same position. A feature-order invariant encoder is needed to map the data representation Xτ of tasks with varying input schema and feature length Fτ to a shared latent representation X̃τ with fixed feature length K:
$$\mathrm{enc}: \mathcal{X} \longrightarrow \mathcal{X}_K, \qquad X_\tau \in \mathbb{R}^{N \times F_\tau} \longmapsto \tilde{X}_\tau \in \mathbb{R}^{N \times K} \qquad (4)$$
where N represents the number of instances in Xτ , Fτ is the number of features of task τ which varies across tasks, and K is the size of the desired feature space. By combining this encoder with model ŷ that works on a fixed input size K and outputs the predicted target e.g. binary classification, it is possible to apply the REPTILE algorithm to learn an initialization θinit across tasks with different schema. The optimization objective then becomes the meta-loss for the combined network f = ŷ ◦ enc over a set of tasks T :
$$\underset{\theta^{init}}{\arg\min}\; \mathbb{E}_{\tau \sim T}\, L_\tau\!\left( Y_\tau^{test}, f\!\left( X_\tau^{test}; \theta_\tau^{(u)} \right) \right) \quad \text{s.t.} \quad \theta_\tau^{(u)} = A^{(u)}\!\left( X_\tau^{train}, Y_\tau^{train}, L_\tau, f; \theta^{init} \right) \qquad (5)$$
where θinit is the set of initial weights for the combined network f consisting of enc with parameters θenc and model ŷ with parameters θŷ, and θ (u) τ are the updated weights after applying the learning procedure A for u iterations on the task τ as defined in Algorithm 1 for the inner updates of REPTILE. It is important to mention that learning one weight parameterization across any heterogeneous set of tasks is extremely difficult since it is most likely impossible to find one initialization for two tasks with a vastly different number and type of features. By contrast, if two tasks share similar features, one can align the similar features to a common representation so that a model can directly learn across different tasks by transforming the tasks as illustrated in Figure 1.
3.2 CHAMELEON
Consider a set of tasks where a right stochastic matrix Πτ exists for each task that reorders predictor data Xτ into X̃τ having the same schema for every task τ ∈ T :
$$\tilde{X}_\tau = X_\tau \cdot \Pi_\tau, \quad \text{where} \qquad (6)$$
$$\underbrace{\begin{pmatrix} \tilde{x}_{1,1} & \cdots & \tilde{x}_{1,K} \\ \vdots & \ddots & \vdots \\ \tilde{x}_{N,1} & \cdots & \tilde{x}_{N,K} \end{pmatrix}}_{\tilde{X}_\tau} = \underbrace{\begin{pmatrix} x_{1,1} & \cdots & x_{1,F_\tau} \\ \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,F_\tau} \end{pmatrix}}_{X_\tau} \cdot \underbrace{\begin{pmatrix} \pi_{1,1} & \cdots & \pi_{1,K} \\ \vdots & \ddots & \vdots \\ \pi_{F_\tau,1} & \cdots & \pi_{F_\tau,K} \end{pmatrix}}_{\Pi_\tau}$$
Every xm,n represents the feature n of sample m. Every πm,n represents how much of feature m (from samples in Xτ) should be shifted to position n in the adapted input X̃τ. Finally, every x̃m,n represents the new feature n of sample m in X̃τ, the version of Xτ with the adapted shape and size. When two features of a task Xτ are permuted, we must simply permute the corresponding rows of Πτ to obtain the same X̃τ. Since Πτ is a right stochastic matrix, the summation for every row of Πτ is set to be equal to 1, as in $\sum_i \pi_{j,i} = 1$, so that each value in Πτ simply states how much a feature is shifted to a corresponding position. For example: Consider that task a has features [apples, bananas, melons] and task b features [lemons, bananas, apples]. Both can be transformed to the same representation [apples, lemons, bananas, melons] by replacing missing features with zeros and reordering them. This transformation must have the same result for a and b independent of their feature order. In a real-life scenario, features might come with different names or sometimes their similarity is not clear to the human eye. Note that a classic autoencoder is not capable of this as it is not invariant to the order of the features. Our proposed component, denoted by Φ, takes a task as
input and outputs the corresponding reordering matrix:
$$\Phi(X_\tau, \theta_{enc}) = \hat{\Pi}_\tau \qquad (7)$$
The function Φ is a neural network parameterized by θenc. It consists of three 1D-convolutions, where the last one is the output layer that estimates the alignment matrix via a softmax activation. The input is first transposed to size [Fτ ×N ] (where N is the number of samples) i.e., each feature is represented by a vector of instances. Each convolution has kernel length 1 (as the order of instances is arbitrary and thus needs to be permutation invariant) and a channel output size of 8, 16, and lastly K. The result is a reordering matrix displaying the relation of every original feature to each of the K features in the target space. Each of these vectors passes through a softmax layer, computing the ratio of features in Xτ shifted to each position of X̃τ . Finally, the reordering matrix can be multiplied with the input to compute the aligned task as defined in Equation (6). By using a kernel length of 1 in combination with the final matrix multiplication, the full architecture becomes permutation invariant in the feature dimension. Column-wise permuting the features of an input task leads to the corresponding row-wise permutation of the reordering matrix. Thus, multiplying both matrices results in the same aligned output independent of permutation. The overall architecture can be seen in Figure 2. The encoder necessary for training across tasks with different predictor vectors with REPTILE by optimizing Equation (5) is then given as:
$$\mathrm{enc}: X_\tau \longmapsto X_\tau \cdot \Phi(X_\tau, \theta_{enc}) = X_\tau \cdot \hat{\Pi}_\tau \qquad (8)$$
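The following PyTorch module sketches this encoder; treating the instances as input channels, the intermediate ReLU activations, and the exact shape handling follow the description above but are assumptions where the text leaves details open. Stacking a fixed-input model ŷ on top of the aligned output then gives the combined network f = ŷ ◦ enc used in Equation (5).

```python
import torch
import torch.nn as nn

class Chameleon(nn.Module):
    """Predicts a soft reordering matrix Pi_hat of shape [F_tau, K] with
    kernel-size-1 convolutions and returns the aligned task X @ Pi_hat."""
    def __init__(self, n_instances, k_target, hidden=(8, 16)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_instances, hidden[0], kernel_size=1), nn.ReLU(),
            nn.Conv1d(hidden[0], hidden[1], kernel_size=1), nn.ReLU(),
            nn.Conv1d(hidden[1], k_target, kernel_size=1),
        )

    def forward(self, x):                         # x: [N, F_tau]
        z = self.net(x.unsqueeze(0))              # instances as channels -> [1, K, F_tau]
        pi_hat = torch.softmax(z.squeeze(0).t(), dim=1)   # [F_tau, K], rows sum to 1
        return x @ pi_hat, pi_hat                 # aligned task [N, K], reordering matrix
```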
3.3 REORDERING TRAINING
Only joint-training the network ŷ ◦ enc as described above, will not teach CHAMELEON denoted by Φ how to reorder the features to a shared representation. That is why it is necessary to train Φ specifically with the objective of reordering features (reordering training). In order to do so, we optimize Φ to align novel tasks by training on a set of tasks for which the reordering matrix Πτ exists such that it maps τ to the shared representation. In other words, we require a meta-dataset that contains not only a set of similar tasks τ ∈ T with different schema, but also the position for each feature in the shared representation given by a permutation matrix. If Πτ is known beforehand for each τ ∈ T , optimizing Chameleon becomes a simple supervised classification task based on predicting the new position of each feature in τ . Thus, we can minimize the expected reordering loss over the meta-dataset:
$$\theta_{enc} = \underset{\theta_{enc}}{\arg\min}\; \mathbb{E}_{\tau \sim T}\, L_\Phi\!\left( \Pi_\tau, \hat{\Pi}_\tau \right) \qquad (9)$$
where LΦ is the softmax cross-entropy loss, Πτ is the ground-truth (one-hot encoding of the new position for each variable), and Π̂τ is the prediction. This training procedure can be seen in Algorithm (2). The trained CHAMELEON model can then be used to compute the Πτ for any unseen task τ ∈ T .
Algorithm 2 Reordering Training
Input: Meta-dataset T = {(X1, Π1), ..., (X|T|, Π|T|)}, latent dimension K, learning rate γ
1: Randomly initialize parameters θenc of the CHAMELEON model
2: for training iteration = 1, 2, ... do
3:   Randomly sample τ ∼ T
4:   θenc ← θenc − γ ∇ LΦ(Πτ, Φ(Xτ, θenc))
5: end for
6: return Trained parameters θenc of the CHAMELEON model
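A minimal sketch of this pretraining loop is given below, assuming the Chameleon module sketched in Section 3.2 returns both the aligned task and the soft reordering matrix; the task-sampling helper and the learning rate are placeholders rather than the authors' settings.

```python
import torch

def reorder_pretrain(chameleon, sample_task_with_target, iters=4000, lr=1e-4):
    """Algorithm 2: supervised pretraining of the alignment component with a
    cross-entropy loss on the predicted target position of every feature."""
    opt = torch.optim.Adam(chameleon.parameters(), lr=lr)
    for _ in range(iters):
        X, Pi = sample_task_with_target()        # Pi: one-hot [F_tau, K] positions
        _, pi_hat = chameleon(X)                 # predicted soft reordering matrix
        loss = -(Pi * torch.log(pi_hat + 1e-8)).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return chameleon
```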
After this training procedure, we can use the learned weights as initialization for Φ before optimizing ŷ ◦ enc with REPTILE without further using LΦ. Experiments show that this procedure improves our results significantly compared to only optimizing the joint meta-loss.
Training the CHAMELEON component to reorder similar tasks to a shared representation not only requires a meta-dataset but one where the true reordering matrix Πτ is provided for every task. In application, this means manually matching similar features of different training tasks so that novel tasks can be matched automatically. However, it is possible to sample a broad number of tasks from a
single dataset by sampling smaller sub-tasks from it, selecting a random subset of features in arbitrary order for N random instances. Thus, it is not necessary to manually match the features since all these sub-tasks share the same Π̂τ apart from the respective permutation of the rows as mentioned above.
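One way to generate such sub-tasks together with their ground-truth reordering matrices is sketched below; it assumes the shared representation reserves one slot per original column of the dataset, so the target position of a sampled feature is simply its original column index. The fraction range and helper names are illustrative assumptions.

```python
import numpy as np

def sample_subtask(X, n_instances, k_target=None, min_frac=0.4, max_frac=0.6,
                   rng=np.random):
    """Sample a sub-task: a random subset of features in arbitrary order,
    together with the one-hot reordering target Pi."""
    n, f = X.shape
    k_target = f if k_target is None else k_target
    n_feat = rng.randint(int(min_frac * f), int(max_frac * f) + 1)
    cols = rng.choice(f, size=n_feat, replace=False)  # random features, random order
    rows = rng.choice(n, size=n_instances, replace=False)
    X_tau = X[np.ix_(rows, cols)]                     # [n_instances, n_feat]
    Pi = np.zeros((n_feat, k_target))
    Pi[np.arange(n_feat), cols] = 1.0                 # original column = target slot
    return X_tau, Pi
```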
4 EXPERIMENTAL RESULTS
Baseline and Setup In order to evaluate the proposed method, we investigate the combined model ŷ ◦ enc with the initialization for enc obtained by pretraining CHAMELEON as defined in Equation 9 before using REPTILE to jointly optimize ŷ ◦ enc. We compare the performance with an initialization obtained by running REPTILE on the base model ŷ by training on tasks padded to a fixed size K as ŷ is not schema invariant. Both initializations are then compared to the performance of model ŷ with random Glorot initialization (Glorot & Bengio, 2010) (referred to as Random). In all of our experiments, we measure the performance of a model and its initialization by evaluating the validation data of a task after performing three update steps on the respective training data. All experiments are conducted in two variants: In Split experiments, test tasks contain novel features in addition to features seen during meta-training. In contrast, test tasks in No-Split experiments only consist of features seen during meta-training. While the Split experiments evaluate the performance of the model when faced with novel features during meta-testing, the No-Split experiments can be used to compare against a perfect alignment by repeating the baseline experiment with tasks that are already aligned (referred to as Oracle). A detailed description of the utilized models is found in Appendix B.
Meta-Datasets For our main experiments, we utilize a single dataset as meta-dataset by sampling the training and test tasks from it. This allows us to evaluate our method on different domains without matching related datasets since Π̂τ is naturally given for a subset of permuted features. Novel features can also be introduced during testing by splitting not only the instances but also the features of a dataset in train and test partition (Split). Training tasks are then sampled by selecting a random subset of the training features in arbitrary order forN instances. Stratified sampling guarantees that test tasks contain both features from train and test while sampling the instances from the test set only. For all experiments, 75% of the instances are used for reordering training of CHAMELEON and joint-training of the full architecture, and 25% for sampling test tasks. For Split experiments, we further impose a train-test split on the features (20% of the features are restricted to the test split). Our work is built on top of REPTILE (Nichol et al., 2018b) but can be used in conjunction with any model-agnostic meta-learning method. We opted to use REPTILE since it does not require second-order derivatives, and the code is publicly available (Nichol et al., 2018a) while also being easy to adapt to our problem.
Main Results We evaluate our approach using the OpenML-CC18 benchmark (Bischl et al., 2017) from which we selected 23 datasets for few-shot classification. The details of all datasets utilized in this work are summarized in Appendix B. The results in Figure 3 display the model performance after performing three update steps on a novel test task to illustrate the faster convergence. The graph shows a clear performance lift when using the proposed architecture after pretraining it to reorder tasks. This demonstrates to the best of our knowledge the first few-shot classification approach, which successfully learns across tasks with varying schemas (contribution 2). Furthermore, in the No-Split results one can see that the performance of the proposed method approaches the Oracle performance, which suggests an ideal feature alignment. When adding novel features during test time (Split) CHAMELEON is still able to outperform the other setups although with a lower margin.
Ablations We visualize the result of pretraining CHAMELEON on the Wine dataset (from OpenMLCC18) in Figure 6 to show that the proposed model is capable of learning the correct alignment between tasks. One can see that the component manages to learn the true feature position in almost all cases. Moreover, this illustration does also show that CHAMELEON can be used to compute the similarity between different features by indicating which pairs are confused most often. For example, features two and four are showing a strong correlation, which is very plausible since they depict the free sulfur dioxide and total sulfur dioxide level of the wine. This demonstrates that our proposed architecture is able to learn an alignment between different feature spaces (contribution 1).
Furthermore, we repeat the experiments on the OpenML-CC18 benchmark in two ablation studies to measure the impact of joint-training and the proposed reordering training (Algorithm 2). First, we do not train CHAMELEON with Equation 9, but only jointly train ŷ ◦ enc with REPTILE to evaluate the influence of adding additional parameters to the network without pretraining it. Secondly, we use REPTILE only to update the initialization for the parameters of ŷ while freezing the pretrained parameters of enc in order to assess the effect of joint-training both network components. These two variants are referred to as Untrain and Frozen. We compare these ablations to our approach by conducting a Wilcoxon signed-rank test (Wilcoxon, 1992) with Holm’s alpha correction (Holm, 1979). The results are displayed in the form of a critical difference diagram (Demšar, 2006; Ismail Fawaz et al., 2019) presented in Figure 4. The diagram shows the ranked performance of each model and whether they are statistically different. The results confirm that our approach leads to statistically significant improvements over the random and REPTILE baselines when pretraining CHAMELEON. Similarly, our approach is also significantly better than jointly training the full architecture without pretraining CHAMELEON (UNTRAIN), confirming that the improvements do not stem from the increased model capacity. Finally, comparing the results to the FROZEN model shows improvements that are not significant, indicating that a near-optimal alignment was already found during pretraining. A detailed overview for all experimental results is given in Appendix C.
Latent Embeddings Experiments Learning to align features is only feasible for unstructured data since this approach would not preserve any structure. However, it is a widespread practice among few-shot classification methods, and computer vision approaches in general, to use a pretrained model to embed image data into a latent space before applying further operations. We can use CHAMELEON to align the latent embeddings of image datasets that are generated with different networks. Thus, it is possible to use latent embeddings for meta-training while evaluating on novel tasks that are not yet embedded in case the embedding network is not available, or the complexity of different datasets requires models with different capacities to extract useful features. We conduct an additional experiment for which we combine two similar image datasets, namely EMNIST-Digits and EMNIST-Letters (Cohen et al., 2017). Similar to the work of Rusu et al. (2019), we train one neural network on each dataset in order to generate similar latent embeddings with different schema, namely 32 and 64 latent features. Afterward, we can sample training tasks from one embedding while
evaluating on tasks sampled from the other one. In the combined experiments, the full training is performed on the EMNIST-Letters dataset, while EMNIST-Digits is used for testing. Splitting the features is not necessary as the train, and test features are coming from different datasets. The results of this experiment are displayed in Figure 5. It shows the accuracy of EMNIST-Digits averaged across 5 runs with 1,600 generated tasks per run during the REPTILE training on EMNIST-Letters for the different model variants. Each test task is evaluated by performing 3 update steps on the training samples and measuring the accuracy of its validation data afterward. One can see that our proposed approach reports a significantly higher accuracy than the REPTILE baseline after performing three update steps on a task (contribution 4). Thus, showing that CHAMELEON is able to transfer knowledge from one dataset to another. Moreover, simply adding CHAMELEON without pretraining it to reorder tasks (Untrain) does not lead to any improvement. This might be sparked by using a CHAMELEON component that has a much lower number of parameters than the base network. Only by adding the reordering-training, the model manages to converge to a suitable initialization. In contrast to our experiments on the OpenML datasets, freezing the weights of CHAMELEON after pretraining also fails to give an improvement, suggesting that the pretraining did not manage to capture the ideal alignment, but enables learning it during joint-training. Our code is available at BLIND-REVIEW.
5 CONCLUSION
In this paper, we presented, to the best of our knowledge, the first approach to tackle few-shot classification for unstructured tasks with different schema. Our model component CHAMELEON is capable of embedding tasks to a common representation by computing a matrix that can reorder the features. For this, we propose a novel pretraining framework that is shown to learn useful permutations across tasks in a supervised fashion without requiring actual labels. In experiments on 23 datasets of the OpenML-CC18 benchmark, our method shows significant improvements even when presented with features not seen during training. Furthermore, by aligning different latent embeddings we demonstrate how a single meta-model can be used to learn across multiple image datasets each embedded with a distinct network.
A APPENDIX - INNER TRAINING
We visualize the inner training for one of the experiments in Figure 7. It shows two exemplary snapshots of the inner test loss when training on a sampled task with the current initialization θinit before meta-learning and after 20,000 meta-epochs. It is compared to the test loss of the model when it is trained on the same task starting with the random initialization. For this experiment, models were trained until convergence. Note that both losses are not identical in meta-epoch 0 because the CHAMELEON component is already pretrained. The snapshots show the expected REPTILE behavior, namely a faster convergence when using the currently learned initialization compared to a random one.
B APPENDIX - EXPERIMENTAL DETAILS
The features of each dataset are normalized between 0 and 1. The Split experiments are limited to the 21 datasets which have more than four features in order to perform a feature split. We sample 10 training and 10 validation instances per label for a new task, and 16 tasks per meta-batch. The number of classes in a task is given by the number of classes of the respective dataset, as shown in Table 1. During the reordering-training phase and the inner updates of REPTILE, specified in line 6 of Algorithm (1), we use the ADAM optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.0001 and 0.001, respectively. The meta-updates of REPTILE are carried out with a learning rate β of 0.01. The reordering-training phase is run for 4000 epochs. All results reported in this work are averaged over 5 runs.
OpenML-CC18 All experiments on the OpenML-CC18 benchmark are conducted with the same model architecture. The base model ŷ is a feed-forward neural network with two dense hidden layers that have 16 neurons each. CHAMELEON consists of two 1D-convolutions with 8 and 16 filters respectively and a final convolution that maps the task to the feature-length K, as shown in Figure 2. We selected datasets that have up to 33 features and a minimum number of 90 instances per class. We limited the number of features and model capacity because this work seeks to establish a proof of concept for learning across data with different schemas. In contrast, very high-dimensional data would require tuning a more complex CHAMELEON architecture. The details for each dataset are summarized in Table 1. When sampling a task in Split, we sample between 40% and 60% of the respective training features. For test tasks in Split experiments, 20% of the features are sampled from the set of test features to evaluate performance on similar tasks with partially novel features. For each
experimental run, the different variants are tested on the same data split, and we sample 1,600 test tasks beforehand, while the training tasks are randomly sampled each epoch. All experiments are repeated five times with different instance splits and, in the case of Split, different feature splits, and the results are averaged.
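The sampling protocol above can be outlined as in the following sketch. It is an assumed NumPy implementation of the stated settings (10 training and 10 validation instances per label, 16 tasks per meta-batch, 40%-60% of the training features per task in arbitrary order); all function and variable names are illustrative.

```python
# Hypothetical sketch of building one meta-batch of Split training tasks.
import numpy as np

def sample_task(X, y, feature_ids, shots=10, rng=None):
    """Stratified sampling: `shots` training and `shots` validation instances
    per label, restricted to a given (already shuffled) feature subset."""
    rng = rng or np.random.default_rng()
    train_idx, val_idx = [], []
    for c in np.unique(y):
        rows = rng.permutation(np.where(y == c)[0])
        train_idx.extend(rows[:shots])
        val_idx.extend(rows[shots:2 * shots])
    return (X[np.ix_(train_idx, feature_ids)], y[train_idx],
            X[np.ix_(val_idx, feature_ids)], y[val_idx])

def sample_meta_batch(X, y, train_features, meta_batch_size=16, rng=None):
    rng = rng or np.random.default_rng()
    batch = []
    for _ in range(meta_batch_size):
        # 40%-60% of the training features, in arbitrary order.
        n_feat = rng.integers(int(0.4 * len(train_features)),
                              int(0.6 * len(train_features)) + 1)
        feats = rng.permutation(train_features)[:n_feat]
        batch.append(sample_task(X, y, feats, rng=rng))
    return batch
```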
Latent Embeddings Both networks used for generating the latent embeddings consist of two convolutional and two dense hidden layers with 64 neurons each, but the number of neurons in the output layer is 32 for EMNIST-Digits and 64 for EMNIST-Letters. For these experiments, the CHAMELEON component still has two convolutional layers with 8 and 16 filters, while we use a larger base network with two feed-forward layers with 64 neurons each. All experimental results are averaged over five runs.
C APPENDIX - TABLES WITH EXPERIMENTS RESULTS
The following tables show the detailed results of our experiments on the OpenML-CC18 datasets for the Split and NoSplit settings. The tables contain the loss and accuracy for the base model ŷ trained from a random initialization and with REPTILE, and for our proposed model ŷ ◦ enc with the additional ablation studies Untrain and Frozen:
D PROBLEM SETTING: GENERAL MULTI-TASK LEARNING.
We describe a classification dataset with vector-shaped predictors (i.e., no images, time series, etc.) by a pair $(X, Y) \in \mathbb{R}^{N \times F} \times \{0, \ldots, C\}^N$, with predictors $X$ and targets $Y$, where $N$ denotes the number of instances, $F$ the number of predictors and $C$ the number of classes. Let $\mathcal{D}_F := \bigcup_{N \in \mathbb{N}} \mathbb{R}^{N \times F} \times \{0, \ldots, C\}^N$ be the space of all such datasets with $F$ predictors and $\mathcal{D} := \bigcup_{F \in \mathbb{N}} \mathcal{D}_F$ be the space of any such dataset. Let us also denote the space of all predictor matrices with $F$ predictors by $\mathcal{X}_F := \bigcup_{N \in \mathbb{N}} \mathbb{R}^{N \times F}$ and all predictor matrices by $\mathcal{X} := \bigcup_{F \in \mathbb{N}} \mathcal{X}_F$. Then a dataset $\tau = (X, Y) \in \mathcal{D}$ equipped with a predefined training/test split, i.e. the quadruplet $\tau = (X^{\text{train}}_\tau, Y^{\text{train}}_\tau, X^{\text{test}}_\tau, Y^{\text{test}}_\tau)$, is called a task. A collection of such tasks $\mathcal{T} \subset \mathcal{D}$ is called a meta-dataset. Similar to splitting a single dataset into a training and test part, one can split a meta-dataset $\mathcal{T} = \mathcal{T}^{\text{train}} \,\dot\cup\, \mathcal{T}^{\text{test}}$. Consider a meta-dataset of correlated tasks $\mathcal{T} \subset \mathcal{D}$, such that the predictor variables $\{p^\tau_1, p^\tau_2, \ldots, p^\tau_F\}$ of any individual task $\tau$ are contained in a common set of predictor variables $\{p_1, p_2, \ldots, p_K\}$. As elucidated in the previous section, our goal is to construct an encoder that learns to match these predictors and map the features of any task $\tau \in \mathcal{T}$ into a shared latent space $\mathbb{R}^K$:

$$\text{enc} \colon \mathcal{X} \longrightarrow \mathcal{X}_K, \quad X \in \mathbb{R}^{N \times F} \longmapsto \tilde{X} \in \mathbb{R}^{N \times K} \qquad (10)$$

This encoder can be combined with a parametric model of fixed input size $\hat{y} \colon \mathbb{R}^K \to \{0, 1\}$ (e.g. a neural network or SVM) such that for the joint model $\hat{y} \circ \text{enc}$ an initialization $\theta^{\text{init}}$ can be learned via MAML or REPTILE across all tasks, even when those may not have the same predictor vector. Just as with MAML, this initialization facilitates rapid convergence of the combined model $\hat{y} \circ \text{enc}$ on any new, previously unseen task $\tau \in \mathcal{T}^{\text{test}}$. More explicitly, the ultimate goal is to minimize the meta test loss

$$\mathcal{L}(\theta^{\text{init}}) := \mathbb{E}_{\tau \sim \mathcal{T}^{\text{test}}} \, \mathcal{L}_\tau\!\left(Y^{\text{test}}_\tau, \; \hat{y} \circ \text{enc}\!\left(X^{\text{test}}_\tau; \theta^{(u)}_\tau\right)\right) \qquad (11)$$

where $\mathcal{L}_\tau$ is the task-specific loss (e.g. misclassification rate) of the model on the test data of $\tau$, using the updated parameters $\theta^{(u)}_\tau$. The latter are the updated parameters of the joint model $\hat{y} \circ \text{enc}$, obtained by minimizing $\mathcal{L}_\tau$ on the training data $(X^{\text{train}}_\tau, Y^{\text{train}}_\tau)$ of $\tau$ via some iterative learning algorithm $\mathcal{A}$ (e.g. gradient descent) for $u$ iterations:

$$\theta^{(u)}_\tau = \mathcal{A}^{(u)}\!\left(X^{\text{train}}_\tau, Y^{\text{train}}_\tau, \mathcal{L}_\tau, \hat{y} \circ \text{enc}; \, \theta^{\text{init}}\right) \qquad (12)$$
MAML and REPTILE solve sub-problems in which the number $F$ of features is fixed and the predictors of all tasks are the same and aligned, i.e., the same predictor always occurs at the same position within the predictor vector, so that the identity can be used as the predictor encoder. This problem can alternatively be described as a supervised learning problem with a multivariate or structured target.

1. What is the main contribution of the paper regarding meta-learning across tasks with different input data types?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to share information across tasks?
3. How does the reviewer assess the clarity and quality of the paper's content, including the problem statement, method description, and experiments?
4. Do you have any questions regarding the paper's definitions and terminology, such as the meaning of "schema"?
5. Are there any concerns about the paper's experimental design and results, especially regarding the choice of datasets and feature sub-sampling?
6. How does the reviewer evaluate the novelty and significance of the paper's contributions in the context of prior works in few-shot learning and meta-learning?
Summary: This paper aims to perform meta-learning across tasks that have different input data types by learning separate task-specific encoders, and then aligning the features produced by these encoders before making predictions.
Pros: Sharing information across tasks with different input types is a relevant problem.
Cons: Precise problem statement and method very unclear. Experiments are only on toy datasets.
Detailed Comments:
It is not clear from the abstract / introduction what is meant by “schema.” From the abstract: “for example, if the number of predictors varies across tasks, while they still share some variables.” Does this refer to the number of classes in a few-shot problem? What variables are shared? Classes, or input features? Later in the intro: “training a single model across different tasks is only feasible if all tasks share the same schema, meaning that all instances share one set of features in identical order.” These definitions of schema do not seem to be the same. Schema also does not seem to be defined in Section 3. At the beginning of that section it says, “every task has to share the same schema of common size K” which seems to indicate “schema” is the number of features and then a few lines later, “ tasks with varying input schema and feature length F” which seems to indicate “schema” is not the number of features.
In the related work section, few-shot learning did not begin in 2017 as might be suggested by the citations. It would be good to recognize the earlier works in this area, such as: Fei-Fei, L. et al., A Bayesian approach to unsupervised one-shot learning of object categories, 2003; and A Bayesian framework for concept learning, PhD thesis, Massachusetts Institute of Technology, 1999. For few-shot learning with deep learning, Matching Networks should arguably be cited: Vinyals, Oriol, et al., Matching networks for one shot learning, 2016. The original MAML paper actually proposed the first-order version of MAML; Nichol et al. were not the first to propose this.
I don’t understand how the method works when the features are learned and not given. For example, the encoder for EMNIST-Digits produces 32 features, while the encoder for EMNIST-Letters produces 64 features. If the meta-training tasks are drawn from only EMNIST-Digits, then how can the “re-ordering” matrix be learned from EMNIST-Digits such that it can re-order features from EMNIST-Letters? At the most basic level, based on Figure 2, the matrix \Pi would have to have different dimensionality for each dataset. Even if they were the same dimensionality, how is the feature ordering supervision performed in this case?
In the “main results”, if you sub-sample features, how do you know that the sub-sampled features have enough information to perform the classification task?
It would be helpful to have an experiment on a less-toy dataset, both to demonstrate that the problem of “mis-aligned features” exists in more complex data, and that the method can address it.
Overall, this paper is extremely confusing. I do not understand the problem statement or how the method is trained in the learned feature case. In my view, the clarity of this paper needs to be significantly improved to consider acceptance. |
ICLR | Title
Chameleon: Learning Model Initializations Across Tasks With Different Schemas
Abstract
Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization. Recent work shows that an initial parameter set can be learned from a population of supervised learning tasks that enables a fast convergence for unseen tasks even when only a handful of instances is available (model-agnostic meta-learning). Currently, methods for learning model initializations are limited to a population of tasks sharing the same schema, i.e., the same number, order, type, and semantics of predictor and target variables. In this paper, we address the problem of meta-learning weight initialization across tasks with different schemas, for example, if the number of predictors varies across tasks, while they still share some variables. We propose Chameleon, a model that learns to align different predictor schemas to a common representation. In experiments on 23 datasets of the OpenML-CC18 benchmark, we show that Chameleon can successfully learn parameter initializations across tasks with different schemas, presenting, to the best of our knowledge, the first cross-dataset few-shot classification approach for unstructured data.
1 INTRODUCTION
Humans require only a few examples to correctly classify new instances of previously unknown objects. For example, it is sufficient to see a handful of images of a specific type of dog before being able to classify dogs of this type consistently. In contrast, deep learning models optimized in a classical supervised setup usually require a vast number of training examples to match human performance. A striking difference is that a human has already learned to classify countless other objects, while parameters of a neural network are typically initialized randomly. Previous approaches improved this starting point for gradient-based optimization by choosing a more robust random initialization (He et al., 2015) or by starting from a pretrained network (Pan & Yang, 2010). Still, models do not learn from only a handful of training examples even when applying these techniques. Moreover, established hyperparameter optimization methods (Schilling et al., 2016) are not capable of optimizing the model initialization due to the high-dimensional parameter space. Few-shot classification aims at correctly classifying unseen instances of a novel task with only a few labeled training instances given. This is typically accomplished by meta-learning across a set of training tasks, which consist of training and validation examples with given labels for a set of classes. The field has gained immense popularity among researchers after recent meta-learning approaches have shown that it is possible to learn a weight initialization across different tasks, which facilitates a faster convergence speed and thus enables classifying novel classes after seeing only a few instances (Finn et al., 2018). However, training a single model across different tasks is only feasible if all tasks share the same schema, meaning that all instances share one set of features in identical order. For that reason, most approaches demonstrate their performance on image data, which can be easily scaled to a fixed shape, whereas transforming unstructured data to a uniform schema is not trivial.
We want to extend popular approaches to operate invariant of schema, i.e., independent of order and shape, making it possible to use meta-learning approaches on unstructured data with varying feature spaces, e.g., learning a model from heart disease data that can accurately classify a few-shot task for diabetes detection that relies on similar features. Thus, we require a schema-invariant encoder that maps heart disease and diabetes data to one feature representation, which then can be used to train a single model via popular meta-learning algorithms like REPTILE (Nichol et al., 2018b).
We propose a set-wise feature transformation model called CHAMELEON, named after a REPTILE capable of adjusting its colors according to the environment in which it is located. CHAMELEON projects different schemas to a fixed input space while keeping features from different tasks but of the same type or distribution in the same position, as illustrated by Figure 1. Our model learns to compute a task-specific reordering matrix that, when multiplied with the original input, aligns the schema of unstructured tasks to a common representation while behaving invariant to the order of input features.
Our main contributions are as follows: (1) We show how our proposed method CHAMELEON can learn to align varying feature spaces to a common representation. (2) We propose the first approach to tackle few-shot classification for tasks with different schemas. (3) In experiments on 23 datasets of the OpenML-CC18 benchmark (Bischl et al., 2017) collection, we demonstrate how current meta-learning approaches can successfully learn a model initialization across tasks with different schemas as long as they share some variables with respect to their type or semantics. (4) Although an alignment makes little sense to be performed on top of structured data such as images which can be easily rescaled, we demonstrate how CHAMELEON can align latent embeddings of two image datasets generated with different neural networks.
2 RELATED WORK
Our goal is to extend recent few-shot classification approaches that make use of optimization-based meta-learning by adding a feature alignment component that casts different inputs to a common schema, presenting the first approach working across tasks with different schema. In this section, we will discuss various works related to our approach.
Research on transfer learning (Pan & Yang, 2010; Sung et al., 2018; Gligic et al., 2020) has shown that training a model on different auxiliary tasks before actually fitting it to the target problem can provide better results if training data is scarce. Motivated by this, few-shot learning approaches try to generalize to novel tasks with unseen classes given only a few instances by first meta-learning across a set of training tasks (Duan et al., 2017; Finn et al., 2017b; Snell et al., 2017). A task τ consists of predictor data Xτ , a target Yτ , a predefined training/test split τ = (X trainτ , Y train τ , X test τ , Y test τ ) and a loss function Lτ . Typically, an N -way K-shot problem refers to a few-shot learning problem where each task consists of N classes with K training samples per class.
Heterogeneous Transfer Learning tries to tackle a similar problem setting as described in this work. In contrast to regular Transfer Learning, the feature spaces of the auxiliary tasks and the actual task differ and are often non-overlapping (Day & Khoshgoftaar, 2017). Many approaches require co-occurence data i.e. instances that can be found in both datasets (Wu et al., 2019; Qi et al., 2011), rely on jointly optimizing separate models for each dataset to propagate information (Zhao & Hoi, 2010; Yan et al., 2016), or utilize meta-features (Feuz & Cook, 2015). Oftentimes, these approaches operate on structured data e.g. images and text with different data distributions for the tasks at hand (Li et al., 2019; He et al., 2019). These datasets can thus be embedded in a shared space with standard models such as convolutional neural networks and transformer-based language models. However, none of these approaches are capable of training a single encoder that operates across a meta-dataset of tasks with different schema for unstructured data.
Early approaches like (Fe-Fei et al., 2003) already investigated the few-shot learning setting by representing prior knowledge as a probability density function. In recent years, various works proposed new model-based meta-learning approaches which rapidly improved the state-of-the-art few-shot learning benchmarks. Most prominently, this includes methods which rely on learning an embedding space for non-parametric metric approaches during inference time (Vinyals et al., 2016; Snell et al., 2017), and approaches which utilize an external memory which stores information about previously seen classes (Santoro et al., 2016; Munkhdalai & Yu, 2017). Several more recent meta-learning approaches have been developed which introduce architectures and parameterization techniques specifically suited for few-shot classification (Mishra et al., 2018; Shi et al., 2019; Wang & Chen, 2020) while others try to extract useful meta-features from datasets to improve hyper-parameter optimization (Jomaa et al., 2019).
In contrast, Finn et al. (2017a) showed that an optimization-based approach, which solely adapts the learning paradigm can be sufficient for learning across tasks. Model Agnostic Meta-Learning (MAML) describes a model initialization algorithm that is capable of training an arbitrary model f across different tasks. Instead of sequentially training the model one task at a time, it uses update steps from different tasks to find a common gradient direction that achieves a fast convergence. In other words, for each meta-learning update, we would need an initial value for the model parameters θ. Then, we sample a batch of tasks T , and for each task τ ∈ T we find an updated version of θ using N examples from the task by performing gradient descent with learning rate α as in: θ′τ ← θ − α∇θLτ (fθ). The final update of θ with step size β will be:
$$\theta \leftarrow \theta - \beta \, \frac{1}{|T|} \nabla_\theta \sum_{\tau} \mathcal{L}_\tau(f_{\theta'_\tau}) \qquad (1)$$
Finn et al. (2017a) state that MAML does not require learning an update rule (Ravi & Larochelle, 2016), or restricting their model architecture (Santoro et al., 2016). They extended their approach by incorporating a probabilistic component such that for a new task, the model is sampled from a distribution of models to guarantee a higher model diversification for ambiguous tasks (Finn et al., 2018). However, MAML requires to compute second-order derivatives, resulting in a computationally heavy approach. Nichol et al. (2018b) extend upon the first-order approximation given as an ablation by Finn et al. (2018), which numerically approximates Equation (1) by replacing the second derivative with the weights difference, s.t. the update rule used in REPTILE is given by:
$$\theta \leftarrow \theta - \beta \, \frac{1}{|T|} \sum_{\tau} (\theta'_\tau - \theta) \qquad (2)$$
which means we can use the difference between the previous and updated version as an approximation of the second-order derivatives to reduce computational cost. The serial version is presented in Algorithm (1).1 All of these approaches rely on a fixed schema, i.e. the same set of features with identical alignment across all tasks. However, many similar datasets only share a subset of their features, while oftentimes having a different order or representation e.g. latent embeddings for two different image datasets generated by training two similar architectures. Most current few-shot classification approaches sample tasks from a single dataset by selecting a random subset of classes; although it is possible to train a single meta-model on two different image datasets as shown by Munkhdalai & Yu (2017) and Tseng et al. (2020) since the images can be scaled to a fixed size. Further research demonstrates that it is possible to learn a single model across different output sizes (Drumond et al., 2020). Recently, a meta-dataset for few-shot classification of image tasks was also published to promote meta-learning across multiple datasets (Triantafillou et al., 2020). Optimizing a single model across various datasets requires a shared feature space. Thus, it is required to align the features which is achieved by simply rescaling all instances in the case of image data which is not trivial for unstructured data. Recent work relies on preprocessing images to a one-dimensional latent embedding with an additional deep neural network. The authors Rusu et al. (2019) train a Wide Residual Network (Zagoruyko & Komodakis, 2016) on the meta-training data of MiniImageNet (Vinyals et al., 2016) to compute latent embeddings of the data which are then used for few-shot classification, demonstrating state-of-the-art results.
1 Note that REPTILE does not require validation instances during meta-learning.

Finding a suitable initialization for deep networks has long been a focus of machine learning research. Especially the initializations of Glorot & Bengio (2010) and later He et al. (2015), which emphasize the importance of a scaled variance that depends on the layer inputs, are widely used. Similar findings are also reported by Cao et al. (2019). Recently, Dauphin & Schoenholz (2019) showed that it is possible to learn a suitable initialization by optimizing the norms of the respective weights. So far, none of these methods has tried to learn a common initialization across tasks with different schema.
We propose a novel feature alignment component named CHAMELEON, which enables state-of-the-art methods to learn how to work on top of tasks whose feature vector differ not only in their length but also their concrete alignment. Our model shares resemblance with scaled dot-product attention popularized by (Vaswani et al., 2017):
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_K}}\right) V \qquad (3)$$
where Q, K and V are matrices describing queries, keys and values, and dK is the dimensionality of the keys such that the softmax computes an attention mask which is then multiplied with the values V . In contrast to this, we pretrain the parametrized model CHAMELEON to compute a soft permutation matrix which can realign features across tasks with varying schema when multiplied with V instead of computing a simple attention mask.
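For comparison, a minimal sketch of scaled dot-product attention as in Equation (3) is given below; the tensor shapes are assumptions for illustration, and this is not the CHAMELEON component itself, which instead produces a soft permutation matrix that multiplies the raw task input.

```python
# Minimal sketch of scaled dot-product attention, for comparison with Equation (3).
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k: [batch, n, d_k], v: [batch, n, d_v]  (illustrative shapes)
    d_k = k.shape[-1]
    weights = torch.softmax(q @ k.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    return weights @ v
```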
Algorithm 1 REPTILE (Nichol et al., 2018b)
Input: Meta-dataset $T = \{(X_1, Y_1, \mathcal{L}_1), \ldots, (X_{|T|}, Y_{|T|}, \mathcal{L}_{|T|})\}$, learning rate $\beta$
1: Randomly initialize parameters $\theta$ of model $f$
2: for iteration = 1, 2, ... do
3:     Sample task $(X_\tau, Y_\tau, \mathcal{L}_\tau) \sim T$
4:     $\theta' \leftarrow \theta$
5:     for k steps = 1, 2, ... do
6:         $\theta' \leftarrow \theta' - \alpha \nabla_{\theta'} \mathcal{L}_\tau(Y_\tau, f(X_\tau; \theta'))$
7:     end for
8:     $\theta \leftarrow \theta - \beta(\theta' - \theta)$
9: end for
10: return parameters $\theta$ of model $f$
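A compact Python sketch of one REPTILE meta-iteration is given below. It is an illustration of Algorithm 1 rather than a reference implementation: it assumes a PyTorch base model and a list of (x, y) batches for the sampled task, and it writes the meta-update as moving the initialization toward the task-adapted weights, following Nichol et al. (2018b).

```python
# Hypothetical sketch of one REPTILE meta-iteration (compare Algorithm 1).
import copy
import torch

def reptile_meta_step(model, task_batches, loss_fn, inner_lr=1e-3,
                      meta_lr=0.01, k_steps=3):
    # Inner loop: adapt a copy of the current initialization to the sampled task.
    adapted = copy.deepcopy(model)
    opt = torch.optim.Adam(adapted.parameters(), lr=inner_lr)
    for (x, y) in task_batches[:k_steps]:
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    # Meta-update: move the initialization toward the adapted weights,
    # theta <- theta + beta * (theta' - theta), as in Nichol et al. (2018b).
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))
```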
3 METHODOLOGY
3.1 PROBLEM SETTING
We describe a classification dataset with vector-shaped predictors (i.e., no images, time series etc.) by a pair $(X, Y) \in \mathbb{R}^{N \times F} \times \{0, \ldots, C\}^N$, with predictors $X$ and targets $Y$, where $N$ denotes the number of instances, $F$ the number of predictors and $C$ the number of classes. Let $\mathcal{D}_F := \bigcup_{N \in \mathbb{N}} \mathbb{R}^{N \times F} \times \{0, \ldots, C\}^N$ be the space of all such datasets with $F$ predictors and $\mathcal{D} := \bigcup_{F \in \mathbb{N}} \mathcal{D}_F$ be the space of any such dataset. Let us also denote the space of all predictor matrices with $F$ predictors by $\mathcal{X}_F := \bigcup_{N \in \mathbb{N}} \mathbb{R}^{N \times F}$ and all predictor matrices by $\mathcal{X} := \bigcup_{F \in \mathbb{N}} \mathcal{X}_F$. Then a dataset $\tau = (X, Y) \in \mathcal{D}$ equipped with a predefined training/test split, i.e. the quadruplet $\tau = (X^{\text{train}}_\tau, Y^{\text{train}}_\tau, X^{\text{test}}_\tau, Y^{\text{test}}_\tau)$, is called a task. A collection of such tasks $\mathcal{T} \subset \mathcal{D}$ is called a meta-dataset. Similar to splitting a single dataset into a training and test part, one can split a meta-dataset $\mathcal{T} = \mathcal{T}^{\text{train}} \,\dot\cup\, \mathcal{T}^{\text{test}}$. The schema of a task $\tau$ then describes not only the number and order, but also the semantics of predictor variables $\{p^\tau_1, p^\tau_2, \ldots, p^\tau_F\}$ in $X^{\text{train}}_\tau$. Consider a meta-dataset of correlated tasks $\mathcal{T} \subset \mathcal{D}$, such that the predictor variables $\{p^\tau_1, p^\tau_2, \ldots, p^\tau_F\}$ of any individual task $\tau$ are contained in a common set of predictor variables $\{p_1, p_2, \ldots, p_K\}$. Methods like REPTILE and MAML try to find the best initialization for a specific model, in this work referred to as $\hat{y}$, to operate on a set $\mathcal{T}$ of similar tasks. However, every task $\tau$ has to share the same schema of common size $K$, where similar features shared across tasks are in the same position. A feature-order invariant encoder is needed to map the data representation $X_\tau$ of tasks with varying input schema and feature length $F_\tau$ to a shared latent representation $\tilde{X}_\tau$ with fixed feature length $K$:

$$\text{enc} \colon \mathcal{X} \longrightarrow \mathcal{X}_K, \quad X_\tau \in \mathbb{R}^{N \times F_\tau} \longmapsto \tilde{X}_\tau \in \mathbb{R}^{N \times K} \qquad (4)$$

where $N$ represents the number of instances in $X_\tau$, $F_\tau$ is the number of features of task $\tau$ which varies across tasks, and $K$ is the size of the desired feature space. By combining this encoder with model $\hat{y}$ that works on a fixed input size $K$ and outputs the predicted target, e.g. binary classification, it is possible to apply the REPTILE algorithm to learn an initialization $\theta^{\text{init}}$ across tasks with different schema. The optimization objective then becomes the meta-loss for the combined network $f = \hat{y} \circ \text{enc}$ over a set of tasks $\mathcal{T}$:

$$\operatorname*{argmin}_{\theta^{\text{init}}} \; \mathbb{E}_{\tau \sim \mathcal{T}} \, \mathcal{L}_\tau\!\left(Y^{\text{test}}_\tau, f\!\left(X^{\text{test}}_\tau; \theta^{(u)}_\tau\right)\right) \quad \text{s.t.} \quad \theta^{(u)}_\tau = \mathcal{A}^{(u)}\!\left(X^{\text{train}}_\tau, Y^{\text{train}}_\tau, \mathcal{L}_\tau, f; \theta^{\text{init}}\right) \qquad (5)$$
where $\theta^{\text{init}}$ is the set of initial weights for the combined network $f$ consisting of enc with parameters $\theta_{\text{enc}}$ and model $\hat{y}$ with parameters $\theta_{\hat{y}}$, and $\theta^{(u)}_\tau$ are the updated weights after applying the learning procedure $\mathcal{A}$ for $u$ iterations on the task $\tau$ as defined in Algorithm 1 for the inner updates of REPTILE. It is important to mention that learning one weight parameterization across any heterogeneous set of tasks is extremely difficult since it is most likely impossible to find one initialization for two tasks with a vastly different number and type of features. By contrast, if two tasks share similar features, one can align the similar features to a common representation so that a model can directly learn across different tasks by transforming the tasks as illustrated in Figure 1.
3.2 CHAMELEON
Consider a set of tasks where a right stochastic matrix Πτ exists for each task that reorders predictor data Xτ into X̃τ having the same schema for every task τ ∈ T :
$$\tilde{X}_\tau = X_\tau \cdot \Pi_\tau, \quad \text{where} \qquad
\underbrace{\begin{pmatrix} \tilde{x}_{1,1} & \cdots & \tilde{x}_{1,K} \\ \vdots & \ddots & \vdots \\ \tilde{x}_{N,1} & \cdots & \tilde{x}_{N,K} \end{pmatrix}}_{\tilde{X}_\tau}
=
\underbrace{\begin{pmatrix} x_{1,1} & \cdots & x_{1,F_\tau} \\ \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,F_\tau} \end{pmatrix}}_{X_\tau}
\cdot
\underbrace{\begin{pmatrix} \pi_{1,1} & \cdots & \pi_{1,K} \\ \vdots & \ddots & \vdots \\ \pi_{F_\tau,1} & \cdots & \pi_{F_\tau,K} \end{pmatrix}}_{\Pi_\tau}
\qquad (6)$$
Every $x_{m,n}$ represents feature $n$ of sample $m$. Every $\pi_{m,n}$ represents how much of feature $m$ (from the samples in $X_\tau$) is shifted to position $n$ in the adapted input $\tilde{X}_\tau$. Finally, every $\tilde{x}_{m,n}$ represents the new feature $n$ of sample $m$ in the adapted shape and size. In order to obtain the same $\tilde{X}_\tau$ when permuting two features of a task $X_\tau$, we must simply permute the corresponding rows in $\Pi_\tau$. Since $\Pi_\tau$ is a right stochastic matrix, every row of $\Pi_\tau$ sums to 1, i.e. $\sum_i \pi_{j,i} = 1$, so that each value in $\Pi_\tau$ simply states how much of a feature is shifted to the corresponding position. For example: consider that task $a$ has features [apples, bananas, melons] and task $b$ features [lemons, bananas, apples]. Both can be transformed to the same representation [apples, lemons, bananas, melons] by replacing missing features with zeros and reordering them. This transformation must have the same result for $a$ and $b$ independent of their feature order. In a real-life scenario, features might come with different names, or their similarity might not be clear to the human eye. Note that a classic autoencoder is not capable of this, as it is not invariant to the order of the features. Our proposed component, denoted by $\Phi$, takes a task as
input and outputs the corresponding reordering matrix:
$$\Phi(X_\tau, \theta_{\text{enc}}) = \hat{\Pi}_\tau \qquad (7)$$
The function Φ is a neural network parameterized by θenc. It consists of three 1D-convolutions, where the last one is the output layer that estimates the alignment matrix via a softmax activation. The input is first transposed to size [Fτ ×N ] (where N is the number of samples) i.e., each feature is represented by a vector of instances. Each convolution has kernel length 1 (as the order of instances is arbitrary and thus needs to be permutation invariant) and a channel output size of 8, 16, and lastly K. The result is a reordering matrix displaying the relation of every original feature to each of the K features in the target space. Each of these vectors passes through a softmax layer, computing the ratio of features in Xτ shifted to each position of X̃τ . Finally, the reordering matrix can be multiplied with the input to compute the aligned task as defined in Equation (6). By using a kernel length of 1 in combination with the final matrix multiplication, the full architecture becomes permutation invariant in the feature dimension. Column-wise permuting the features of an input task leads to the corresponding row-wise permutation of the reordering matrix. Thus, multiplying both matrices results in the same aligned output independent of permutation. The overall architecture can be seen in Figure 2. The encoder necessary for training across tasks with different predictor vectors with REPTILE by optimizing Equation (5) is then given as:
$$\text{enc} \colon X_\tau \longmapsto X_\tau \cdot \Phi(X_\tau, \theta_{\text{enc}}) = X_\tau \cdot \hat{\Pi}_\tau \qquad (8)$$
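The architecture described above can be sketched in PyTorch as follows. This is an illustrative reading of Figure 2 and Equations (7)-(8), not the authors' implementation: the number of instances per task is assumed to be fixed, and any nonlinearities between the hidden convolutions are not specified in the text and are therefore omitted.

```python
# Hypothetical sketch of the CHAMELEON component and the encoder of Equation (8).
import torch
import torch.nn as nn

class Chameleon(nn.Module):
    def __init__(self, n_instances, k_target):
        super().__init__()
        # Kernel length 1: each feature is processed independently from its
        # vector of N instance values, which makes the module permutation
        # invariant in the feature dimension. Channel sizes follow Figure 2.
        self.conv = nn.Sequential(
            nn.Conv1d(n_instances, 8, kernel_size=1),
            nn.Conv1d(8, 16, kernel_size=1),
            nn.Conv1d(16, k_target, kernel_size=1),
        )

    def forward(self, x):                        # x: [N, F_tau]
        h = self.conv(x.unsqueeze(0))            # [1, K, F_tau]
        # Row-wise softmax yields a right stochastic reordering matrix Pi_hat.
        return torch.softmax(h.squeeze(0).t(), dim=1)   # [F_tau, K]

def enc(x, chameleon):
    """Equation (8): align a task to the shared space, X_tilde = X @ Pi_hat."""
    return x @ chameleon(x)                      # [N, K]
```

Permuting the columns (features) of x permutes the rows of the returned matrix in the same way, so the product x @ Π̂ is unchanged, which matches the permutation-invariance argument above.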
3.3 REORDERING TRAINING
Joint-training the network ŷ ◦ enc as described above will not, by itself, teach CHAMELEON, denoted by Φ, how to reorder the features to a shared representation. That is why it is necessary to train Φ specifically with the objective of reordering features (reordering training). In order to do so, we optimize Φ to align novel tasks by training on a set of tasks for which the reordering matrix Πτ that maps τ to the shared representation exists. In other words, we require a meta-dataset that contains not only a set of similar tasks τ ∈ T with different schema, but also the position of each feature in the shared representation, given by a permutation matrix. If Πτ is known beforehand for each τ ∈ T, optimizing CHAMELEON becomes a simple supervised classification task based on predicting the new position of each feature in τ. Thus, we can minimize the expected reordering loss over the meta-dataset:
$$\theta_{\text{enc}} = \operatorname*{argmin}_{\theta_{\text{enc}}} \; \mathbb{E}_{\tau \sim \mathcal{T}} \, \mathcal{L}_\Phi\!\left(\Pi_\tau, \hat{\Pi}_\tau\right) \qquad (9)$$

where $\mathcal{L}_\Phi$ is the softmax cross-entropy loss, $\Pi_\tau$ is the ground truth (one-hot encoding of the new position for each variable), and $\hat{\Pi}_\tau$ is the prediction. This training procedure can be seen in Algorithm (2). The trained CHAMELEON model can then be used to compute the $\Pi_\tau$ for any unseen task $\tau \in \mathcal{T}$.
Algorithm 2 Reordering Training
Input: Meta-dataset $T = \{(X_1, \Pi_1), \ldots, (X_{|T|}, \Pi_{|T|})\}$, latent dimension $K$, learning rate $\gamma$
1: Randomly initialize parameters $\theta_{\text{enc}}$ of the CHAMELEON model
2: for training iteration = 1, 2, ... do
3:     Randomly sample $\tau \sim T$
4:     $\theta_{\text{enc}} \leftarrow \theta_{\text{enc}} - \gamma \nabla \mathcal{L}_\Phi(\Pi_\tau, \Phi(X_\tau, \theta_{\text{enc}}))$
5: end for
6: return Trained parameters $\theta_{\text{enc}}$ of the CHAMELEON model
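A minimal sketch of this reordering training, continuing the hypothetical Chameleon module sketched in Section 3.2, could look as follows; it assumes every training task comes with its ground-truth target positions and treats each feature's new position as a class label for the softmax cross-entropy loss.

```python
# Hypothetical sketch of Algorithm 2: supervised pretraining of Chameleon.
import torch
import torch.nn.functional as F

def reordering_training(chameleon, tasks, epochs=4000, lr=1e-4):
    """`tasks` is a list of (x, target_positions) pairs, where x is [N, F_tau]
    and target_positions[j] is the index (0..K-1) in the shared space that
    feature j of x should be moved to."""
    opt = torch.optim.Adam(chameleon.parameters(), lr=lr)
    for _ in range(epochs):
        x, target_positions = tasks[torch.randint(len(tasks), (1,)).item()]
        pi_hat = chameleon(x)                          # [F_tau, K], rows sum to 1
        # Softmax cross-entropy between predicted and true feature positions.
        loss = F.nll_loss(torch.log(pi_hat + 1e-12), target_positions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return chameleon
```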
After this training procedure, we can use the learned weights as initialization for Φ before optimizing ŷ ◦ enc with REPTILE without further using LΦ. Experiments show that this procedure improves our results significantly compared to only optimizing the joint meta-loss.
Training the CHAMELEON component to reorder similar tasks to a shared representation not only requires a meta-dataset but one where the true reordering matrix Πτ is provided for every task. In application, this means manually matching similar features of different training tasks so that novel tasks can be matched automatically. However, it is possible to sample a broad number of tasks from a
single dataset by sampling smaller sub-tasks from it, selecting a random subset of features in arbitrary order for N random instances. Thus, it is not necessary to manually match the features since all these sub-tasks share the same Π̂τ apart from the respective permutation of the rows as mentioned above.
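A sketch of how such sub-tasks and their ground-truth position targets can be generated from a single dataset is given below; this is an assumed construction consistent with the description above, not the authors' code.

```python
# Hypothetical sketch: sample a sub-task from one dataset together with the
# ground-truth position of every sampled feature in the shared representation.
import numpy as np

def sample_subtask_with_targets(X, n_instances, min_frac=0.4, max_frac=0.6,
                                rng=None):
    rng = rng or np.random.default_rng()
    n_total = X.shape[1]                    # the shared space has one slot per feature
    n_feat = rng.integers(int(min_frac * n_total), int(max_frac * n_total) + 1)
    feature_ids = rng.permutation(n_total)[:n_feat]      # random subset, arbitrary order
    rows = rng.choice(X.shape[0], size=n_instances, replace=False)
    x_sub = X[np.ix_(rows, feature_ids)]    # [N, F_tau] with shuffled columns
    # feature_ids[j] is the true position of column j in the shared space,
    # i.e. the class label used by the reordering loss (Equation 9).
    return x_sub, feature_ids
```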
4 EXPERIMENTAL RESULTS
Baseline and Setup In order to evaluate the proposed method, we investigate the combined model ŷ ◦ enc with the initialization for enc obtained by pretraining CHAMELEON as defined in Equation 9 before using REPTILE to jointly optimize ŷ ◦ enc. We compare the performance with an initialization obtained by running REPTILE on the base model ŷ by training on tasks padded to a fixed size K as ŷ is not schema invariant. Both initializations are then compared to the performance of model ŷ with random Glorot initialization (Glorot & Bengio, 2010) (referred to as Random). In all of our experiments, we measure the performance of a model and its initialization by evaluating the validation data of a task after performing three update steps on the respective training data. All experiments are conducted in two variants: In Split experiments, test tasks contain novel features in addition to features seen during meta-training. In contrast, test tasks in No-Split experiments only consist of features seen during meta-training. While the Split experiments evaluate the performance of the model when faced with novel features during meta-testing, the No-Split experiments can be used to compare against a perfect alignment by repeating the baseline experiment with tasks that are already aligned (referred to as Oracle). A detailed description of the utilized models is found in Appendix B.
Meta-Datasets For our main experiments, we utilize a single dataset as meta-dataset by sampling the training and test tasks from it. This allows us to evaluate our method on different domains without matching related datasets since Π̂τ is naturally given for a subset of permuted features. Novel features can also be introduced during testing by splitting not only the instances but also the features of a dataset in train and test partition (Split). Training tasks are then sampled by selecting a random subset of the training features in arbitrary order forN instances. Stratified sampling guarantees that test tasks contain both features from train and test while sampling the instances from the test set only. For all experiments, 75% of the instances are used for reordering training of CHAMELEON and joint-training of the full architecture, and 25% for sampling test tasks. For Split experiments, we further impose a train-test split on the features (20% of the features are restricted to the test split). Our work is built on top of REPTILE (Nichol et al., 2018b) but can be used in conjunction with any model-agnostic meta-learning method. We opted to use REPTILE since it does not require second-order derivatives, and the code is publicly available (Nichol et al., 2018a) while also being easy to adapt to our problem.
Main Results We evaluate our approach using the OpenML-CC18 benchmark (Bischl et al., 2017) from which we selected 23 datasets for few-shot classification. The details of all datasets utilized in this work are summarized in Appendix B. The results in Figure 3 display the model performance after performing three update steps on a novel test task to illustrate the faster convergence. The graph shows a clear performance lift when using the proposed architecture after pretraining it to reorder tasks. This demonstrates to the best of our knowledge the first few-shot classification approach, which successfully learns across tasks with varying schemas (contribution 2). Furthermore, in the No-Split results one can see that the performance of the proposed method approaches the Oracle performance, which suggests an ideal feature alignment. When adding novel features during test time (Split) CHAMELEON is still able to outperform the other setups although with a lower margin.
Ablations We visualize the result of pretraining CHAMELEON on the Wine dataset (from OpenMLCC18) in Figure 6 to show that the proposed model is capable of learning the correct alignment between tasks. One can see that the component manages to learn the true feature position in almost all cases. Moreover, this illustration does also show that CHAMELEON can be used to compute the similarity between different features by indicating which pairs are confused most often. For example, features two and four are showing a strong correlation, which is very plausible since they depict the free sulfur dioxide and total sulfur dioxide level of the wine. This demonstrates that our proposed architecture is able to learn an alignment between different feature spaces (contribution 1).
1. What is the main contribution of the paper regarding feature space embedding?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to recover information about input space coordinates?
3. How does the reviewer assess the clarity and coherence of the paper's presentation?
4. What are some of the confusing statements or concepts in the paper, and how could they be improved?
5. How does the paper's few-shot learning vocabulary and techniques differ from the existing literature, and what implications does this have for evaluating the results?
6. What design choices are not addressed clearly in the paper, and how might they impact the alignment module's architecture and performance?
7. What is the value or impact of K, the size of the chosen embedding space, and how might it be evaluated?
8. Should the paper be accepted or rejected, and what would be the basis for either decision?
Summary
The paper proposes a trainable way to re-order or recover the ordering of features from sets of examples, and use it as a way to build a common feature space (or embedding) for a neural net, the (initial) parameters of which can be trained by Reptile. Experiments show that such initial parameters enable faster training (inside of an episode) than untrained weights.
Pros
The paper shows it is possible to recover information about the identity of coordinates in the input space, through a learned transformation, on several unstructured datasets. The similarity between such representations of individual coordinates can help identify similar features, either in a given dataset or across datasets.
Cons
The paper is overall really hard to follow, statements are often confusing or misleading. For instance:
The introduction suggests a multi-modal learning paradigm, where different tasks could have access to data in different input spaces, some of them common. However, the paper then seems to consider individual coordinates in the input space only, and focuses on mapping shuffled subsets of these coordinates back to their initial position.
There is confusion about the "tasks", which sometimes correspond to one of the OpenML datasets, and sometimes to individual few-shot episodes from one of these datasets.
Concepts like "schema" and "predictors" are never properly introduced or defined.
The description of the "chameleon" (alignment) component mentions "order-invariant" and "permutation invariant" several times, but it is quite unclear whether it refers to the the order of the examples within the data set (or episode) or the order in which the features are represented.
The paper uses few-shot learning vocabulary and techniques, including Reptile, but the methodology seems completely different from the few-shot learning literature. In particular:
There does not appear to be a split between meta-training and meta-test classes within a dataset, or meta-training datasets and meta-testing ones, except for the EMNIST experiment. Even then, the pre-training of the "chameleon" alignment module seems to involve using examples of the meta-test classes.
The reported evaluation metric is really unusual: they report the improvement (and sometimes accuracy) after 3 steps of gradient descent from within an episode, which is somewhat related to the quality of the meta-learned weights, but no other metric that would be comparable to existing literature, which makes it especially hard to assess the results.
The principle of the alignment module seems similar to (soft) attention mechanisms, in that there is a softmax trained to highlight which parts of an input vector should be emphasized (or selected) at a given point in the processing (here, in the aligned feature space). However, the literature on attention is not reviewed.
Many design choices are not addressed clearly, neither in how they were made nor in their impact, especially regarding the architecture of the alignment module:
It is a linear transformation (before the softmax), though parameterized by 3 matrices. An alternative would have been a 3-layer neural net, similar to attention networks.
The parameterization of the first matrix makes the number of parameters depend on N, the number of examples in a given task. Being restricted to tasks of exactly N examples could be quite limiting, especially if both the support (mini-train) and query (mini-test or valid) parts of an episode need to have exactly N examples.
There is also no discussion of the value or impact of K, the size of the chosen embedding space.
Recommendation
I recommend to reject this submission.
Arguments
The main idea in the paper, learning alignments of various input spaces into a common embedding space through an attention mechanism, has merit and may work reasonably. However, both the algorithm and the experimental set up are described in a quite confused way, and not well justified or grounded. The reported results are not comparable with few-shot learning literature, nor multi-modal training or feature imputation, and do not make a convincing case.
Questions
As I understand it, the "Chameleon" architecture itself simply consists in 3 matrix multiplications (Nx8, 8x16, 16xK), which would be equivalent to the length-1 1D convolutions, is that correct? It may be more straightforward to explain that way, as $enc(X) = X M_1 M_2 M_3 X^T$. Also, should the 2nd and 3rd convolutions be labeled "8x16x1" and "16xKx1" respectively? As far as I can tell, only the first Conv1D should have a dependency on N.
Additional feedback
In Figure 2, the "reshape" operation should be "transpose" instead.
ICLR | Title
Chameleon: Learning Model Initializations Across Tasks With Different Schemas
Abstract
Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization. Recent work shows that an initial parameter set can be learned from a population of supervised learning tasks that enables a fast convergence for unseen tasks even when only a handful of instances is available (model-agnostic meta-learning). Currently, methods for learning model initializations are limited to a population of tasks sharing the same schema, i.e., the same number, order, type, and semantics of predictor and target variables. In this paper, we address the problem of meta-learning weight initialization across tasks with different schemas, for example, if the number of predictors varies across tasks, while they still share some variables. We propose Chameleon, a model that learns to align different predictor schemas to a common representation. In experiments on 23 datasets of the OpenML-CC18 benchmark, we show that Chameleon can successfully learn parameter initializations across tasks with different schemas, presenting, to the best of our knowledge, the first cross-dataset few-shot classification approach for unstructured data.
1 INTRODUCTION
Humans require only a few examples to correctly classify new instances of previously unknown objects. For example, it is sufficient to see a handful of images of a specific type of dog before being able to classify dogs of this type consistently. In contrast, deep learning models optimized in a classical supervised setup usually require a vast number of training examples to match human performance. A striking difference is that a human has already learned to classify countless other objects, while parameters of a neural network are typically initialized randomly. Previous approaches improved this starting point for gradient-based optimization by choosing a more robust random initialization (He et al., 2015) or by starting from a pretrained network (Pan & Yang, 2010). Still, models do not learn from only a handful of training examples even when applying these techniques. Moreover, established hyperparameter optimization methods (Schilling et al., 2016) are not capable of optimizing the model initialization due to the high-dimensional parameter space. Few-shot classification aims at correctly classifying unseen instances of a novel task with only a few labeled training instances given. This is typically accomplished by meta-learning across a set of training tasks, which consist of training and validation examples with given labels for a set of classes. The field has gained immense popularity among researchers after recent meta-learning approaches have shown that it is possible to learn a weight initialization across different tasks, which facilitates a faster convergence speed and thus enables classifying novel classes after seeing only a few instances (Finn et al., 2018). However, training a single model across different tasks is only feasible if all tasks share the same schema, meaning that all instances share one set of features in identical order. For that reason, most approaches demonstrate their performance on image data, which can be easily scaled to a fixed shape, whereas transforming unstructured data to a uniform schema is not trivial.
We want to extend popular approaches to operate invariant of schema, i.e., independent of order and shape, making it possible to use meta-learning approaches on unstructured data with varying feature spaces, e.g., learning a model from heart disease data that can accurately classify a few-shot task for diabetes detection that relies on similar features. Thus, we require a schema-invariant encoder that maps heart disease and diabetes data to one feature representation, which then can be used to train a single model via popular meta-learning algorithms like REPTILE (Nichol et al., 2018b).
We propose a set-wise feature transformation model called CHAMELEON, named after a REPTILE capable of adjusting its colors according to the environment in which it is located. CHAMELEON projects different schemas to a fixed input space while keeping features from different tasks but of the same type or distribution in the same position, as illustrated by Figure 1. Our model learns to compute a task-specific reordering matrix that, when multiplied with the original input, aligns the schema of unstructured tasks to a common representation while behaving invariant to the order of input features.
Our main contributions are as follows: (1) We show how our proposed method CHAMELEON can learn to align varying feature spaces to a common representation. (2) We propose the first approach to tackle few-shot classification for tasks with different schemas. (3) In experiments on 23 datasets of the OpenML-CC18 benchmark (Bischl et al., 2017) collection, we demonstrate how current meta-learning approaches can successfully learn a model initialization across tasks with different schemas as long as they share some variables with respect to their type or semantics. (4) Although an alignment makes little sense to be performed on top of structured data such as images which can be easily rescaled, we demonstrate how CHAMELEON can align latent embeddings of two image datasets generated with different neural networks.
2 RELATED WORK
Our goal is to extend recent few-shot classification approaches that make use of optimization-based meta-learning by adding a feature alignment component that casts different inputs to a common schema, presenting the first approach working across tasks with different schema. In this section, we will discuss various works related to our approach.
Research on transfer learning (Pan & Yang, 2010; Sung et al., 2018; Gligic et al., 2020) has shown that training a model on different auxiliary tasks before actually fitting it to the target problem can provide better results if training data is scarce. Motivated by this, few-shot learning approaches try to generalize to novel tasks with unseen classes given only a few instances by first meta-learning across a set of training tasks (Duan et al., 2017; Finn et al., 2017b; Snell et al., 2017). A task τ consists of predictor data Xτ , a target Yτ , a predefined training/test split τ = (X trainτ , Y train τ , X test τ , Y test τ ) and a loss function Lτ . Typically, an N -way K-shot problem refers to a few-shot learning problem where each task consists of N classes with K training samples per class.
Heterogeneous Transfer Learning tries to tackle a similar problem setting as described in this work. In contrast to regular Transfer Learning, the feature spaces of the auxiliary tasks and the actual task differ and are often non-overlapping (Day & Khoshgoftaar, 2017). Many approaches require co-occurence data i.e. instances that can be found in both datasets (Wu et al., 2019; Qi et al., 2011), rely on jointly optimizing separate models for each dataset to propagate information (Zhao & Hoi, 2010; Yan et al., 2016), or utilize meta-features (Feuz & Cook, 2015). Oftentimes, these approaches operate on structured data e.g. images and text with different data distributions for the tasks at hand (Li et al., 2019; He et al., 2019). These datasets can thus be embedded in a shared space with standard models such as convolutional neural networks and transformer-based language models. However, none of these approaches are capable of training a single encoder that operates across a meta-dataset of tasks with different schema for unstructured data.
Early approaches like (Fe-Fei et al., 2003) already investigated the few-shot learning setting by representing prior knowledge as a probability density function. In recent years, various works proposed new model-based meta-learning approaches which rapidly improved the state-of-the-art few-shot learning benchmarks. Most prominently, this includes methods which rely on learning an embedding space for non-parametric metric approaches during inference time (Vinyals et al., 2016; Snell et al., 2017), and approaches which utilize an external memory which stores information about previously seen classes (Santoro et al., 2016; Munkhdalai & Yu, 2017). Several more recent meta-learning approaches have been developed which introduce architectures and parameterization techniques specifically suited for few-shot classification (Mishra et al., 2018; Shi et al., 2019; Wang & Chen, 2020) while others try to extract useful meta-features from datasets to improve hyper-parameter optimization (Jomaa et al., 2019).
In contrast, Finn et al. (2017a) showed that an optimization-based approach, which solely adapts the learning paradigm can be sufficient for learning across tasks. Model Agnostic Meta-Learning (MAML) describes a model initialization algorithm that is capable of training an arbitrary model f across different tasks. Instead of sequentially training the model one task at a time, it uses update steps from different tasks to find a common gradient direction that achieves a fast convergence. In other words, for each meta-learning update, we would need an initial value for the model parameters θ. Then, we sample a batch of tasks T , and for each task τ ∈ T we find an updated version of θ using N examples from the task by performing gradient descent with learning rate α as in: θ′τ ← θ − α∇θLτ (fθ). The final update of θ with step size β will be:
θ ← θ − β (1/|T|) ∇θ Σ_τ Lτ(f_{θ′_τ})   (1)
Finn et al. (2017a) state that MAML does not require learning an update rule (Ravi & Larochelle, 2016), or restricting their model architecture (Santoro et al., 2016). They extended their approach by incorporating a probabilistic component such that for a new task, the model is sampled from a distribution of models to guarantee a higher model diversification for ambiguous tasks (Finn et al., 2018). However, MAML requires to compute second-order derivatives, resulting in a computationally heavy approach. Nichol et al. (2018b) extend upon the first-order approximation given as an ablation by Finn et al. (2018), which numerically approximates Equation (1) by replacing the second derivative with the weights difference, s.t. the update rule used in REPTILE is given by:
θ ← θ − β (1/|T|) Σ_τ (θ′_τ − θ)   (2)
which means we can use the difference between the previous and updated version as an approximation of the second-order derivatives to reduce computational cost. The serial version is presented in Algorithm (1).1 All of these approaches rely on a fixed schema, i.e. the same set of features with identical alignment across all tasks. However, many similar datasets only share a subset of their features, while oftentimes having a different order or representation e.g. latent embeddings for two different image datasets generated by training two similar architectures. Most current few-shot classification approaches sample tasks from a single dataset by selecting a random subset of classes; although it is possible to train a single meta-model on two different image datasets as shown by Munkhdalai & Yu (2017) and Tseng et al. (2020) since the images can be scaled to a fixed size. Further research demonstrates that it is possible to learn a single model across different output sizes (Drumond et al., 2020). Recently, a meta-dataset for few-shot classification of image tasks was also published to promote meta-learning across multiple datasets (Triantafillou et al., 2020). Optimizing a single model across various datasets requires a shared feature space. Thus, it is required to align the features which is achieved by simply rescaling all instances in the case of image data which is not trivial for unstructured data. Recent work relies on preprocessing images to a one-dimensional latent embedding with an additional deep neural network. The authors Rusu et al. (2019) train a Wide Residual Network (Zagoruyko & Komodakis, 2016) on the meta-training data of MiniImageNet (Vinyals et al., 2016) to compute latent embeddings of the data which are then used for few-shot classification, demonstrating state-of-the-art results.
Finding a suitable initialization for deep networks has long been a focus of machine learning research. Especially the initializations of Glorot & Bengio (2010) and later He et al. (2015), which emphasize the importance of a scaled variance that depends on the layer inputs, are widely used. Similar findings are also reported by Cao et al. (2019). Recently, Dauphin & Schoenholz (2019) showed that it is possible to learn a suitable initialization by optimizing the norms of the respective weights. So far, none of these methods tried to learn a common initialization across tasks with different schema.
1 Note that REPTILE does not require validation instances during meta-learning.
We propose a novel feature alignment component named CHAMELEON, which enables state-of-the-art methods to learn how to work on top of tasks whose feature vector differ not only in their length but also their concrete alignment. Our model shares resemblance with scaled dot-product attention popularized by (Vaswani et al., 2017):
Attention(Q, K, V) = softmax(Q K^T / √d_K) V   (3)
where Q, K and V are matrices describing queries, keys and values, and dK is the dimensionality of the keys such that the softmax computes an attention mask which is then multiplied with the values V . In contrast to this, we pretrain the parametrized model CHAMELEON to compute a soft permutation matrix which can realign features across tasks with varying schema when multiplied with V instead of computing a simple attention mask.
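For concreteness, the following is a minimal NumPy sketch of scaled dot-product attention as in Eq. (3), included only to make the contrast with CHAMELEON's soft permutation matrix tangible; the toy shapes in the usage example are illustrative assumptions, not values from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Eq. (3): softmax(Q K^T / sqrt(d_K)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # query-key similarities
    return softmax(scores, axis=-1) @ V      # attention mask applied to the values

# toy usage with illustrative shapes
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)  # shape (4, 8)
```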
Algorithm 1 REPTILE (Nichol et al., 2018b)
Input: Meta-dataset T = {(X_1, Y_1, L_1), ..., (X_|T|, Y_|T|, L_|T|)}, learning rate β
1: Randomly initialize parameters θ of model f
2: for iteration = 1, 2, ... do
3:   Sample task (Xτ, Yτ, Lτ) ∼ T
4:   θ′ ← θ
5:   for k steps = 1, 2, ... do
6:     θ′ ← θ′ − α ∇θ′ Lτ(Yτ, f(Xτ; θ′))
7:   end for
8:   θ ← θ − β(θ′ − θ)
9: end for
10: return parameters θ of model f
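To make the serial procedure above concrete, here is a minimal NumPy sketch of REPTILE in which the meta-update moves the initialization toward the task-adapted weights, following Nichol et al. (2018b); the linear-regression tasks and the analytic `lin_grad` helper are hypothetical stand-ins for the model f and loss Lτ, not the authors' implementation.

```python
import numpy as np

def reptile(theta, tasks, iterations=100, inner_steps=5, alpha=0.01, beta=0.01, seed=0):
    """Serial REPTILE: adapt to one sampled task with a few gradient steps,
    then move the initialization toward the adapted weights."""
    rng = np.random.default_rng(seed)
    for _ in range(iterations):
        X, Y, loss_grad = tasks[rng.integers(len(tasks))]     # sample a task
        theta_prime = theta.copy()
        for _ in range(inner_steps):                          # inner SGD on the sampled task
            theta_prime -= alpha * loss_grad(theta_prime, X, Y)
        theta = theta + beta * (theta_prime - theta)          # meta-update toward adapted weights
    return theta

# toy tasks: linear regression with an analytic gradient (stand-in for f and L_tau)
def lin_grad(theta, X, Y):
    return 2.0 * X.T @ (X @ theta - Y) / len(Y)

rng = np.random.default_rng(1)
tasks = [(rng.normal(size=(20, 5)), rng.normal(size=20), lin_grad) for _ in range(8)]
theta_init = reptile(np.zeros(5), tasks)
```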
3 METHODOLOGY
3.1 PROBLEM SETTING
We describe a classification dataset with vector-shaped predictors (i.e., no images, time series etc.) by a pair (X,Y ) ∈ RN×F × {0, ..., C}N , with predictors X and targets Y , where N denotes the number of instances, F the number of predictors and C the number of classes. Let DF := ⋃ N∈N RN×F × {0, ..., C}N be the space of all such datasets with F predictors and
D := ⋃F∈NDF be the space of any such dataset. Let us also denote the space of all predictor matrices with F predictors by XF := ⋃ N∈N RN×F and all predictor matrices by X := ⋃ F∈N XF . Then a dataset τ = (X,Y ) ∈ D equipped with a predefined training/test split, i.e. the quadruplet τ = (X trainτ , Y train τ , X test τ , Y test τ ) is called a task. A collection of such tasks T ⊂ D is called a metadataset. Similar to splitting a single data set into a training and test part, one can split a meta-dataset T = T train ∪̇ T test. The schema of a task τ then describes not only the number and order, but also the semantics of predictor variables {pτ1 , pτ2 , . . . , pτF } in Xtrainτ . Consider a meta-dataset of correlated tasks T ⊂ D, such that the predictor variables {pτ1 , pτ2 , . . . , pτF } of any individual task τ are contained in a common set of predictor variables {p1, p2, . . . , pK}. Methods like REPTILE and MAML try to find the best initialization for a specific model, in this work referred to as ŷ, to operate on a set T of similar tasks. However, every task τ has to share the same schema of common size K, where similar features shared across tasks are in the same position. A feature-order invariant encoder is needed to map the data representation Xτ of tasks with varying input schema and feature length Fτ to a shared latent representation X̃τ with fixed feature length K:
enc: X → X_K,   Xτ ∈ R^{N×Fτ} ↦ X̃τ ∈ R^{N×K}   (4)
where N represents the number of instances in Xτ , Fτ is the number of features of task τ which varies across tasks, and K is the size of the desired feature space. By combining this encoder with model ŷ that works on a fixed input size K and outputs the predicted target e.g. binary classification, it is possible to apply the REPTILE algorithm to learn an initialization θinit across tasks with different schema. The optimization objective then becomes the meta-loss for the combined network f = ŷ ◦ enc over a set of tasks T :
argmin_{θ^init}  E_{τ∼T} Lτ(Y^test_τ, f(X^test_τ; θ^(u)_τ))   s.t.   θ^(u)_τ = A^(u)(X^train_τ, Y^train_τ, Lτ, f; θ^init)   (5)
where θinit is the set of initial weights for the combined network f consisting of enc with parameters θenc and model ŷ with parameters θŷ, and θ (u) τ are the updated weights after applying the learning procedure A for u iterations on the task τ as defined in Algorithm 1 for the inner updates of REPTILE. It is important to mention that learning one weight parameterization across any heterogeneous set of tasks is extremely difficult since it is most likely impossible to find one initialization for two tasks with a vastly different number and type of features. By contrast, if two tasks share similar features, one can align the similar features to a common representation so that a model can directly learn across different tasks by transforming the tasks as illustrated in Figure 1.
3.2 CHAMELEON
Consider a set of tasks where a right stochastic matrix Πτ exists for each task that reorders predictor data Xτ into X̃τ having the same schema for every task τ ∈ T :
X̃τ = Xτ · Πτ,  where (6)

⎡ x̃_{1,1}  …  x̃_{1,K} ⎤   ⎡ x_{1,1}  …  x_{1,Fτ} ⎤   ⎡ π_{1,1}   …  π_{1,K}  ⎤
⎢    ⋮     ⋱     ⋮    ⎥ = ⎢    ⋮     ⋱     ⋮     ⎥ · ⎢    ⋮      ⋱     ⋮     ⎥
⎣ x̃_{N,1}  …  x̃_{N,K} ⎦   ⎣ x_{N,1}  …  x_{N,Fτ} ⎦   ⎣ π_{Fτ,1}  …  π_{Fτ,K} ⎦
         X̃τ                        Xτ                         Πτ
Every x_{m,n} represents feature n of sample m. Every π_{m,n} represents how much of feature m (from samples in Xτ) should be shifted to position n in the adapted input X̃τ. Finally, every x̃_{m,n} represents the new feature n of sample m with the adapted shape and size. When two features of a task Xτ are permuted, we simply need to permute the corresponding rows in Πτ to obtain the same X̃τ. Since Πτ is a right stochastic matrix, every row of Πτ sums to 1, i.e. Σ_i π_{j,i} = 1, so that each value in Πτ simply states how much of a feature is shifted to the corresponding position. For example: consider that task a has features [apples, bananas, melons] and task b features [lemons, bananas, apples]. Both can be transformed to the same representation [apples, lemons, bananas, melons] by replacing missing features with zeros and reordering them. This transformation must have the same result for a and b independent of their feature order. In a real-life scenario, features might come with different names, or their similarity may not be clear to the human eye. Note that a classic autoencoder is not capable of this as it is not invariant to the order of the features. Our proposed component, denoted by Φ, takes a task as
input and outputs the corresponding reordering matrix:
Φ(Xτ , θenc) = Π̂τ (7)
The function Φ is a neural network parameterized by θenc. It consists of three 1D-convolutions, where the last one is the output layer that estimates the alignment matrix via a softmax activation. The input is first transposed to size [Fτ ×N ] (where N is the number of samples) i.e., each feature is represented by a vector of instances. Each convolution has kernel length 1 (as the order of instances is arbitrary and thus needs to be permutation invariant) and a channel output size of 8, 16, and lastly K. The result is a reordering matrix displaying the relation of every original feature to each of the K features in the target space. Each of these vectors passes through a softmax layer, computing the ratio of features in Xτ shifted to each position of X̃τ . Finally, the reordering matrix can be multiplied with the input to compute the aligned task as defined in Equation (6). By using a kernel length of 1 in combination with the final matrix multiplication, the full architecture becomes permutation invariant in the feature dimension. Column-wise permuting the features of an input task leads to the corresponding row-wise permutation of the reordering matrix. Thus, multiplying both matrices results in the same aligned output independent of permutation. The overall architecture can be seen in Figure 2. The encoder necessary for training across tasks with different predictor vectors with REPTILE by optimizing Equation (5) is then given as:
enc: Xτ 7−→ Xτ · Φ(Xτ , θenc) = Xτ · Π̂τ (8)
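The following is a minimal NumPy sketch of the alignment computation in Eqs. (6)–(8), writing the kernel-length-1 convolutions as the equivalent matrix products with the shapes stated above (N×8, 8×16, 16×K); any intermediate nonlinearities are omitted here as an assumption of this sketch, and it is not the authors' implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def chameleon_encode(X, M1, M2, M3):
    """Align a task X of shape (N, F_tau) to the common K-dimensional space.

    The kernel-length-1 convolutions act per feature on the transposed input,
    so they reduce to matrix products: M1 is (N, 8), M2 is (8, 16), M3 is (16, K).
    """
    scores = X.T @ M1 @ M2 @ M3          # (F_tau, K) alignment scores per original feature
    Pi_hat = softmax(scores, axis=1)     # right stochastic rows, as in Eq. (7)
    return X @ Pi_hat                    # (N, K) aligned representation, Eq. (8)

# toy usage: permuting the columns (features) of X leaves the output unchanged
rng = np.random.default_rng(0)
N, F, K = 10, 5, 4
X = rng.normal(size=(N, F))
M1, M2, M3 = rng.normal(size=(N, 8)), rng.normal(size=(8, 16)), rng.normal(size=(16, K))
perm = rng.permutation(F)
assert np.allclose(chameleon_encode(X, M1, M2, M3), chameleon_encode(X[:, perm], M1, M2, M3))
```

The final assertion illustrates the permutation invariance argued in the text: permuting the input features only permutes the rows of the reordering matrix, so the product is unchanged.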
3.3 REORDERING TRAINING
Only joint-training the network ŷ ◦ enc as described above, will not teach CHAMELEON denoted by Φ how to reorder the features to a shared representation. That is why it is necessary to train Φ specifically with the objective of reordering features (reordering training). In order to do so, we optimize Φ to align novel tasks by training on a set of tasks for which the reordering matrix Πτ exists such that it maps τ to the shared representation. In other words, we require a meta-dataset that contains not only a set of similar tasks τ ∈ T with different schema, but also the position for each feature in the shared representation given by a permutation matrix. If Πτ is known beforehand for each τ ∈ T , optimizing Chameleon becomes a simple supervised classification task based on predicting the new position of each feature in τ . Thus, we can minimize the expected reordering loss over the meta-dataset:
θ_enc = argmin_{θ_enc}  E_{τ∼T} L_Φ(Πτ, Π̂τ)   (9)
where LΦ is the softmax cross-entropy loss, Πτ is the ground-truth (one-hot encoding of the new position for each variable), and Π̂τ is the prediction. This training procedure can be seen in Algorithm (2). The trained CHAMELEON model can then be used to compute the Πτ for any unseen task τ ∈ T .
Algorithm 2 Reordering Training
Input: Meta-dataset T = {(X_1, Π_1), ..., (X_|T|, Π_|T|)}, latent dimension K, learning rate γ
1: Randomly initialize parameters θ_enc of the CHAMELEON model
2: for training iteration = 1, 2, ... do
3:   Randomly sample τ ∼ T
4:   θ_enc ← θ_enc − γ ∇ L_Φ(Πτ, Φ(Xτ, θ_enc))
5: end for
6: return Trained parameters θ_enc of the CHAMELEON model
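A short sketch of the reordering objective used in Eq. (9) and Algorithm 2 is given below: the softmax cross-entropy between the one-hot ground-truth positions and the predicted reordering matrix, averaged over features. The function name and the epsilon for numerical stability are assumptions of this sketch; the gradient step itself would be taken by any standard optimizer.

```python
import numpy as np

def reordering_loss(Pi_true, Pi_hat, eps=1e-12):
    """Cross-entropy between ground-truth one-hot positions (F_tau x K) and the
    predicted right stochastic reordering matrix Pi_hat (F_tau x K), as in Eq. (9)."""
    return float(-np.mean(np.sum(Pi_true * np.log(Pi_hat + eps), axis=1)))

# toy check: a perfect prediction gives (near-)zero loss
Pi_true = np.eye(3, 4)   # 3 features mapped one-hot into K = 4 target positions
assert reordering_loss(Pi_true, Pi_true) < 1e-6
```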
After this training procedure, we can use the learned weights as initialization for Φ before optimizing ŷ ◦ enc with REPTILE without further using LΦ. Experiments show that this procedure improves our results significantly compared to only optimizing the joint meta-loss.
Training the CHAMELEON component to reorder similar tasks to a shared representation not only requires a meta-dataset but one where the true reordering matrix Πτ is provided for every task. In application, this means manually matching similar features of different training tasks so that novel tasks can be matched automatically. However, it is possible to sample a broad number of tasks from a
single dataset by sampling smaller sub-tasks from it, selecting a random subset of features in arbitrary order for N random instances. Thus, it is not necessary to manually match the features since all these sub-tasks share the same Π̂τ apart from the respective permutation of the rows as mentioned above.
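To illustrate the sub-task sampling described above, the sketch below draws a random feature subset in arbitrary order together with its ground-truth reordering matrix; the function name and the convention that a feature's target position is its column index in the full dataset are assumptions made for this illustration, not the authors' code.

```python
import numpy as np

def sample_subtask(X_full, n_features, n_instances, rng):
    """Sample a sub-task from one dataset: a random feature subset in random order
    for random instances, plus the ground-truth reordering matrix Pi that maps the
    permuted features back to their original column positions (here K = total features)."""
    N, K = X_full.shape
    feat_idx = rng.choice(K, size=n_features, replace=False)
    inst_idx = rng.choice(N, size=n_instances, replace=False)
    X_task = X_full[np.ix_(inst_idx, feat_idx)]
    Pi = np.zeros((n_features, K))
    Pi[np.arange(n_features), feat_idx] = 1.0   # one-hot target position per sampled feature
    return X_task, Pi

# toy usage: aligning the sub-task with its ground-truth Pi recovers the original column order
rng = np.random.default_rng(0)
X_full = rng.normal(size=(100, 6))
X_task, Pi = sample_subtask(X_full, n_features=4, n_instances=10, rng=rng)
X_aligned = X_task @ Pi    # shape (10, 6); unsampled feature columns are zero
```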
4 EXPERIMENTAL RESULTS
Baseline and Setup In order to evaluate the proposed method, we investigate the combined model ŷ ◦ enc with the initialization for enc obtained by pretraining CHAMELEON as defined in Equation 9 before using REPTILE to jointly optimize ŷ ◦ enc. We compare the performance with an initialization obtained by running REPTILE on the base model ŷ by training on tasks padded to a fixed size K as ŷ is not schema invariant. Both initializations are then compared to the performance of model ŷ with random Glorot initialization (Glorot & Bengio, 2010) (referred to as Random). In all of our experiments, we measure the performance of a model and its initialization by evaluating the validation data of a task after performing three update steps on the respective training data. All experiments are conducted in two variants: In Split experiments, test tasks contain novel features in addition to features seen during meta-training. In contrast, test tasks in No-Split experiments only consist of features seen during meta-training. While the Split experiments evaluate the performance of the model when faced with novel features during meta-testing, the No-Split experiments can be used to compare against a perfect alignment by repeating the baseline experiment with tasks that are already aligned (referred to as Oracle). A detailed description of the utilized models is found in Appendix B.
Meta-Datasets For our main experiments, we utilize a single dataset as meta-dataset by sampling the training and test tasks from it. This allows us to evaluate our method on different domains without matching related datasets since Π̂τ is naturally given for a subset of permuted features. Novel features can also be introduced during testing by splitting not only the instances but also the features of a dataset in train and test partition (Split). Training tasks are then sampled by selecting a random subset of the training features in arbitrary order forN instances. Stratified sampling guarantees that test tasks contain both features from train and test while sampling the instances from the test set only. For all experiments, 75% of the instances are used for reordering training of CHAMELEON and joint-training of the full architecture, and 25% for sampling test tasks. For Split experiments, we further impose a train-test split on the features (20% of the features are restricted to the test split). Our work is built on top of REPTILE (Nichol et al., 2018b) but can be used in conjunction with any model-agnostic meta-learning method. We opted to use REPTILE since it does not require second-order derivatives, and the code is publicly available (Nichol et al., 2018a) while also being easy to adapt to our problem.
Main Results We evaluate our approach using the OpenML-CC18 benchmark (Bischl et al., 2017) from which we selected 23 datasets for few-shot classification. The details of all datasets utilized in this work are summarized in Appendix B. The results in Figure 3 display the model performance after performing three update steps on a novel test task to illustrate the faster convergence. The graph shows a clear performance lift when using the proposed architecture after pretraining it to reorder tasks. This demonstrates to the best of our knowledge the first few-shot classification approach, which successfully learns across tasks with varying schemas (contribution 2). Furthermore, in the No-Split results one can see that the performance of the proposed method approaches the Oracle performance, which suggests an ideal feature alignment. When adding novel features during test time (Split) CHAMELEON is still able to outperform the other setups although with a lower margin.
Ablations We visualize the result of pretraining CHAMELEON on the Wine dataset (from OpenMLCC18) in Figure 6 to show that the proposed model is capable of learning the correct alignment between tasks. One can see that the component manages to learn the true feature position in almost all cases. Moreover, this illustration does also show that CHAMELEON can be used to compute the similarity between different features by indicating which pairs are confused most often. For example, features two and four are showing a strong correlation, which is very plausible since they depict the free sulfur dioxide and total sulfur dioxide level of the wine. This demonstrates that our proposed architecture is able to learn an alignment between different feature spaces (contribution 1).
Furthermore, we repeat the experiments on the OpenML-CC18 benchmark in two ablation studies to measure the impact of joint-training and the proposed reordering training (Algorithm 2). First, we do not train CHAMELEON with Equation 9, but only jointly train ŷ ◦ enc with REPTILE to evaluate the influence of adding additional parameters to the network without pretraining it. Secondly, we use REPTILE only to update the initialization for the parameters of ŷ while freezing the pretrained parameters of enc in order to assess the effect of joint-training both network components. These two variants are referred to as Untrain and Frozen. We compare these ablations to our approach by conducting a Wilcoxon signed-rank test (Wilcoxon, 1992) with Holm’s alpha correction (Holm, 1979). The results are displayed in the form of a critical difference diagram (Demšar, 2006; Ismail Fawaz et al., 2019) presented in Figure 4. The diagram shows the ranked performance of each model and whether they are statistically different. The results confirm that our approach leads to statistically significant improvements over the random and REPTILE baselines when pretraining CHAMELEON. Similarly, our approach is also significantly better than jointly training the full architecture without pretraining CHAMELEON (UNTRAIN), confirming that the improvements do not stem from the increased model capacity. Finally, comparing the results to the FROZEN model shows improvements that are not significant, indicating that a near-optimal alignment was already found during pretraining. A detailed overview for all experimental results is given in Appendix C.
Latent Embeddings Experiments Learning to align features is only feasible for unstructured data since this approach would not preserve any structure. However, it is a widespread practice among few-shot classification methods, and computer vision approaches in general, to use a pretrained model to embed image data into a latent space before applying further operations. We can use CHAMELEON to align the latent embeddings of image datasets that are generated with different networks. Thus, it is possible to use latent embeddings for meta-training while evaluating on novel tasks that are not yet embedded in case the embedding network is not available, or the complexity of different datasets requires models with different capacities to extract useful features. We conduct an additional experiment for which we combine two similar image datasets, namely EMNIST-Digits and EMNIST-Letters (Cohen et al., 2017). Similar to the work of Rusu et al. (2019), we train one neural network on each dataset in order to generate similar latent embeddings with different schema, namely 32 and 64 latent features. Afterward, we can sample training tasks from one embedding while
evaluating on tasks sampled from the other one. In the combined experiments, the full training is performed on the EMNIST-Letters dataset, while EMNIST-Digits is used for testing. Splitting the features is not necessary as the train and test features come from different datasets. The results of this experiment are displayed in Figure 5. It shows the accuracy on EMNIST-Digits averaged across 5 runs with 1,600 generated tasks per run during the REPTILE training on EMNIST-Letters for the different model variants. Each test task is evaluated by performing 3 update steps on the training samples and measuring the accuracy on its validation data afterward. One can see that our proposed approach reports a significantly higher accuracy than the REPTILE baseline after performing three update steps on a task (contribution 4), showing that CHAMELEON is able to transfer knowledge from one dataset to another. Moreover, simply adding CHAMELEON without pretraining it to reorder tasks (Untrain) does not lead to any improvement. This might be caused by the CHAMELEON component having a much lower number of parameters than the base network. Only by adding the reordering training does the model manage to converge to a suitable initialization. In contrast to our experiments on the OpenML datasets, freezing the weights of CHAMELEON after pretraining also fails to give an improvement, suggesting that the pretraining did not manage to capture the ideal alignment, but enables learning it during joint-training. Our code is available at BLIND-REVIEW.
5 CONCLUSION
In this paper, we presented, to the best of our knowledge, the first approach to tackle few-shot classification for unstructured tasks with different schema. Our model component CHAMELEON is capable of embedding tasks to a common representation by computing a matrix that can reorder the features. For this, we propose a novel pretraining framework that is shown to learn useful permutations across tasks in a supervised fashion without requiring actual labels. In experiments on 23 datasets of the OpenML-CC18 benchmark, our method shows significant improvements even when presented with features not seen during training. Furthermore, by aligning different latent embeddings we demonstrate how a single meta-model can be used to learn across multiple image datasets each embedded with a distinct network.
A APPENDIX - INNER TRAINING
We visualize the inner training for one of the experiments in Figure 7. It shows two exemplary snapshots of the inner test loss when training on a sampled task with the current initialization θinit before meta-learning and after 20,000 meta-epochs. It is compared to the test loss of the model when it is trained on the same task starting with the random initialization. For this experiment, models were trained until convergence. Note that both losses are not identical in meta-epoch 0 because the CHAMELEON component is already pretrained. The snapshots show the expected REPTILE behavior, namely a faster convergence when using the currently learned initialization compared to a random one.
B APPENDIX - EXPERIMENTAL DETAILS
The features of each dataset are normalized between 0 and 1. The Split experiments are limited to the 21 datasets which have more than four features in order to perform a feature split. We sample 10 training and 10 validation instances per label for a new task, and 16 tasks per meta-batch. The number of classes in a task is given by the number of classes of the respective dataset, as shown in Table 1. During the reordering-training phase and the inner updates of reptile, specified in line 6 of Algorithm (1), we use the ADAM optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.0001 and 0.001 respectively. The meta-updates of REPTILE are carried out with a learning rate β of 0.01. The reordering-training phase is run for 4000 epochs. All results reported in this work are averaged over 5 runs.
OpenML-CC18 All experiments on the OpenML-CC18 benchmark are conducted with the same model architecture. The base model ŷ is a feed-forward neural network with two dense hidden layers that have 16 neurons each. CHAMELEON consists of two 1D-convolutions with 8 and 16 filters respectively and a final convolution that maps the task to the feature-length K, as shown in Figure 2. We selected datasets that have up to 33 features and a minimum number of 90 instances per class. We limited the number of features and model capacity because this work seeks to establish a proof of concept for learning across data with different schemas. In contrast, very high-dimensional data would require tuning a more complex CHAMELEON architecture. The details for each dataset are summarized in Appendix 1. When sampling a task in Split, we sample between 40% and 60% of the respective training features. For test tasks in Split experiments, 20% of the features are sampled from the set of test features to evaluate performance on similar tasks with partially novel features. For each
experimental run, the different variants are tested on the same data split, and we sample 1600 test tasks beforehand, while the training tasks are randomly sampled each epoch. All experiments are repeated five times with different instance and, in the case of Split, different feature splits, and the results are averaged.
Latent Embeddings Both networks used for generating the latent embeddings consist of two convolutional and two dense hidden layers with 64 neurons each, but the number of neurons in the output layer is 32 for EMNIST-Digits and 64 for EMNIST-Letters. For these experiments, the CHAMELEON component still has two convolutional layers with 8 and 16 filters, while we use a larger base network with two feed-forward layers with 64 neurons each. All experimental results are averaged over five runs.
C APPENDIX - TABLES WITH EXPERIMENTS RESULTS
The following tables show the detailed results of our experiments on the OpenML-CC18 datasets for Split and NoSplit settings. The tables contain the loss and accuracy for the base model ŷ trained from a random initialization and with REPTILE, and our proposed model ŷ ◦ enc with the additional ablation studies Untrain and Frozen:
D PROBLEM SETTING: GENERAL MULTI-TASK LEARNING.
We describe a classification dataset with vector-shaped predictors (i.e., no images, time series etc.) by a pair (X,Y ) ∈ RN×F × {0, ..., C}N , with predictors X and targets Y , where N denotes the number of instances, F the number of predictors and C the number of classes. Let DF := ⋃ N∈N RN×F × {0, ..., C}N be the space of all such datasets with F predictors and
D := ⋃F∈NDF be the space of any such dataset. Let us also denote the space of all predictor matrices with F predictors by XF := ⋃ N∈N RN×F and all predictor matrices by X := ⋃ F∈N XF . Then a dataset τ = (X,Y ) ∈ D equipped with a predefined training/test split, i.e. the quadruplet τ = (X trainτ , Y train τ , X test τ , Y test τ ) is called a task. A collection of such tasks T ⊂ D is called a metadataset. Similar to splitting a single data set into a training and test part, one can split a meta-dataset T = T train ∪̇ T test. Consider a meta-dataset of correlated tasks T ⊂ D, such that the predictor variables {pτ1 , pτ2 , . . . , pτF } of any individual task τ are contained in a common set of predictor variables {p1, p2, . . . , pK}. As elucidated in the previous section, our goal is to construct an encoder that learns to match these predictors and map the features of any task τ ∈ T into a shared latent space RK . enc: X −→ XK , X ∈ RN×F 7−→ X̃ ∈ RN×K (10) This encoder can be combined with a parametric model of fixed input size ŷ : RK → {0, 1} (e.g. neural network or SVM) such that for the joint model ŷ ◦ enc an initialization θinit can be learned via MAML or REPTILE across all tasks, even when those may not have the same predictor vector. Just as with MAML, this initialization facilitates rapid convergence of the combined model ŷ ◦ enc on any new, previously unseen task T ∈ T test. More explicitly, the ultimate goal is to minimize the meta test loss
L(θ^init) := E_{Tτ∼T^test} Lτ(Y^test_τ, ŷ ◦ enc(X^test_τ; θ^(u)_τ))   (11)
here Lτ is the task specific loss (e.g. miss-classification rate) of the model on the test data of Tτ , using the updated parameters θ(u)τ . The latter are the updated parameters of the joint model ŷ ◦ enc which are obtained by minimizing Lτ on the training data (X train, Y train) of Tτ via some learning iterative learning algorithm A (e.g. Gradient Descent) for u iterations.
θ^(u)_τ = A^(u)(X^train_τ, Y^train_τ, Lτ, ŷ ◦ enc; θ^init)   (12)
MAML and REPTILE are solving sub-problems when the number F of features is fixed and the predictors of all tasks are the same and aligned, i.e., the same predictor always occurs at the same position within the predictor vector, thus the identity can be used as predictor encoder. This problem alternatively can be described as a supervised learning problem with a multivariate or structured target. | 1. What is the focus of the paper regarding meta-learning tasks?
2. What are the strengths of the proposed approach, particularly in addressing the problem of heterogeneous feature spaces?
3. What are the weaknesses of the paper, especially concerning its technical contribution and empirical results?
4. Do you have any concerns about the feasibility of the proposed method in practical settings?
5. What are some minor issues with the paper's content? | Review | Review
Summary and contributions
In this work, the authors tried to solve the problem of "heterogeneous" meta-learning, where each task resides in a different feature space from the other tasks. They introduced a feature transformation or re-ordering matrix to align the features. While I agree with the authors that this problem is of significance in the meta-learning community, the solution in this work, which depends on the ground-truth re-ordering matrix, is trivial and impractical.
Strengths:
The problem investigated in this paper, i.e., meta-learning tasks in heterogeneous feature spaces, is important to the field of meta-learning.
The paper is well written and easy to follow.
Weaknesses:
The primary concern about this paper is its technical contribution, which is limited and impractical. To align tasks in incommensurable feature spaces, projecting them into a common feature space has been a common practice; please see related work on heterogeneous transfer learning. The major challenge lies in the supervision needed to train the alignment matrix or function. The ground-truth feature alignment matrix is almost impractical to collect if the feature dimension is very large and we have no knowledge of the semantic correspondence between features from two tasks.
The empirical results are also not convincing.
Why is only Glorot initialization compared in Figure 3? Better initialization strategies, such as He initialization, are widely adopted and should also be compared.
From both Figure 3 and Figure 4, and also the results in Appendix C, I see little improvement of the proposed method over Frozen. This means that most benefits of the feature alignment come from the supervised training part, where a ground-truth alignment matrix is required to train Φ, while such a matrix is infeasible to obtain in practical settings.
In Line 6 of the section "Ablations", the authors mentioned that features 2 and 3 are showing a strong correlation, but I cannot see why in Figure 6. Maybe it is features 2 and 4?
Minor: Line 2 in Section 4: Equation (9) does not exist… |
ICLR | Title
Chameleon: Learning Model Initializations Across Tasks With Different Schemas
Abstract
Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization. Recent work shows that an initial parameter set can be learned from a population of supervised learning tasks that enables a fast convergence for unseen tasks even when only a handful of instances is available (model-agnostic meta-learning). Currently, methods for learning model initializations are limited to a population of tasks sharing the same schema, i.e., the same number, order, type, and semantics of predictor and target variables. In this paper, we address the problem of meta-learning weight initialization across tasks with different schemas, for example, if the number of predictors varies across tasks, while they still share some variables. We propose Chameleon, a model that learns to align different predictor schemas to a common representation. In experiments on 23 datasets of the OpenML-CC18 benchmark, we show that Chameleon can successfully learn parameter initializations across tasks with different schemas, presenting, to the best of our knowledge, the first cross-dataset few-shot classification approach for unstructured data.
1 INTRODUCTION
Humans require only a few examples to correctly classify new instances of previously unknown objects. For example, it is sufficient to see a handful of images of a specific type of dog before being able to classify dogs of this type consistently. In contrast, deep learning models optimized in a classical supervised setup usually require a vast number of training examples to match human performance. A striking difference is that a human has already learned to classify countless other objects, while parameters of a neural network are typically initialized randomly. Previous approaches improved this starting point for gradient-based optimization by choosing a more robust random initialization (He et al., 2015) or by starting from a pretrained network (Pan & Yang, 2010). Still, models do not learn from only a handful of training examples even when applying these techniques. Moreover, established hyperparameter optimization methods (Schilling et al., 2016) are not capable of optimizing the model initialization due to the high-dimensional parameter space. Few-shot classification aims at correctly classifying unseen instances of a novel task with only a few labeled training instances given. This is typically accomplished by meta-learning across a set of training tasks, which consist of training and validation examples with given labels for a set of classes. The field has gained immense popularity among researchers after recent meta-learning approaches have shown that it is possible to learn a weight initialization across different tasks, which facilitates a faster convergence speed and thus enables classifying novel classes after seeing only a few instances (Finn et al., 2018). However, training a single model across different tasks is only feasible if all tasks share the same schema, meaning that all instances share one set of features in identical order. For that reason, most approaches demonstrate their performance on image data, which can be easily scaled to a fixed shape, whereas transforming unstructured data to a uniform schema is not trivial.
We want to extend popular approaches to operate invariant of schema, i.e., independent of order and shape, making it possible to use meta-learning approaches on unstructured data with varying feature spaces, e.g., learning a model from heart disease data that can accurately classify a few-shot task for diabetes detection that relies on similar features. Thus, we require a schema-invariant encoder that maps heart disease and diabetes data to one feature representation, which then can be used to train a single model via popular meta-learning algorithms like REPTILE (Nichol et al., 2018b).
We propose a set-wise feature transformation model called CHAMELEON, named after a REPTILE capable of adjusting its colors according to the environment in which it is located. CHAMELEON projects different schemas to a fixed input space while keeping features from different tasks but of the same type or distribution in the same position, as illustrated by Figure 1. Our model learns to compute a task-specific reordering matrix that, when multiplied with the original input, aligns the schema of unstructured tasks to a common representation while behaving invariant to the order of input features.
Our main contributions are as follows: (1) We show how our proposed method CHAMELEON can learn to align varying feature spaces to a common representation. (2) We propose the first approach to tackle few-shot classification for tasks with different schemas. (3) In experiments on 23 datasets of the OpenML-CC18 benchmark (Bischl et al., 2017) collection, we demonstrate how current meta-learning approaches can successfully learn a model initialization across tasks with different schemas as long as they share some variables with respect to their type or semantics. (4) Although an alignment makes little sense to be performed on top of structured data such as images which can be easily rescaled, we demonstrate how CHAMELEON can align latent embeddings of two image datasets generated with different neural networks.
2 RELATED WORK
Our goal is to extend recent few-shot classification approaches that make use of optimization-based meta-learning by adding a feature alignment component that casts different inputs to a common schema, presenting the first approach working across tasks with different schema. In this section, we will discuss various works related to our approach.
Research on transfer learning (Pan & Yang, 2010; Sung et al., 2018; Gligic et al., 2020) has shown that training a model on different auxiliary tasks before actually fitting it to the target problem can provide better results if training data is scarce. Motivated by this, few-shot learning approaches try to generalize to novel tasks with unseen classes given only a few instances by first meta-learning across a set of training tasks (Duan et al., 2017; Finn et al., 2017b; Snell et al., 2017). A task τ consists of predictor data Xτ , a target Yτ , a predefined training/test split τ = (X trainτ , Y train τ , X test τ , Y test τ ) and a loss function Lτ . Typically, an N -way K-shot problem refers to a few-shot learning problem where each task consists of N classes with K training samples per class.
Heterogeneous Transfer Learning tries to tackle a similar problem setting as described in this work. In contrast to regular Transfer Learning, the feature spaces of the auxiliary tasks and the actual task differ and are often non-overlapping (Day & Khoshgoftaar, 2017). Many approaches require co-occurence data i.e. instances that can be found in both datasets (Wu et al., 2019; Qi et al., 2011), rely on jointly optimizing separate models for each dataset to propagate information (Zhao & Hoi, 2010; Yan et al., 2016), or utilize meta-features (Feuz & Cook, 2015). Oftentimes, these approaches operate on structured data e.g. images and text with different data distributions for the tasks at hand (Li et al., 2019; He et al., 2019). These datasets can thus be embedded in a shared space with standard models such as convolutional neural networks and transformer-based language models. However, none of these approaches are capable of training a single encoder that operates across a meta-dataset of tasks with different schema for unstructured data.
Early approaches like (Fe-Fei et al., 2003) already investigated the few-shot learning setting by representing prior knowledge as a probability density function. In recent years, various works proposed new model-based meta-learning approaches which rapidly improved the state-of-the-art few-shot learning benchmarks. Most prominently, this includes methods which rely on learning an embedding space for non-parametric metric approaches during inference time (Vinyals et al., 2016; Snell et al., 2017), and approaches which utilize an external memory which stores information about previously seen classes (Santoro et al., 2016; Munkhdalai & Yu, 2017). Several more recent meta-learning approaches have been developed which introduce architectures and parameterization techniques specifically suited for few-shot classification (Mishra et al., 2018; Shi et al., 2019; Wang & Chen, 2020) while others try to extract useful meta-features from datasets to improve hyper-parameter optimization (Jomaa et al., 2019).
In contrast, Finn et al. (2017a) showed that an optimization-based approach, which solely adapts the learning paradigm can be sufficient for learning across tasks. Model Agnostic Meta-Learning (MAML) describes a model initialization algorithm that is capable of training an arbitrary model f across different tasks. Instead of sequentially training the model one task at a time, it uses update steps from different tasks to find a common gradient direction that achieves a fast convergence. In other words, for each meta-learning update, we would need an initial value for the model parameters θ. Then, we sample a batch of tasks T , and for each task τ ∈ T we find an updated version of θ using N examples from the task by performing gradient descent with learning rate α as in: θ′τ ← θ − α∇θLτ (fθ). The final update of θ with step size β will be:
θ ← θ − β (1/|T|) ∇θ Σ_τ Lτ(f_{θ′_τ})   (1)
Finn et al. (2017a) state that MAML does not require learning an update rule (Ravi & Larochelle, 2016), or restricting their model architecture (Santoro et al., 2016). They extended their approach by incorporating a probabilistic component such that for a new task, the model is sampled from a distribution of models to guarantee a higher model diversification for ambiguous tasks (Finn et al., 2018). However, MAML requires to compute second-order derivatives, resulting in a computationally heavy approach. Nichol et al. (2018b) extend upon the first-order approximation given as an ablation by Finn et al. (2018), which numerically approximates Equation (1) by replacing the second derivative with the weights difference, s.t. the update rule used in REPTILE is given by:
θ ← θ − β (1/|T|) Σ_τ (θ′_τ − θ)   (2)
which means we can use the difference between the previous and updated version as an approximation of the second-order derivatives to reduce computational cost. The serial version is presented in Algorithm (1).1 All of these approaches rely on a fixed schema, i.e. the same set of features with identical alignment across all tasks. However, many similar datasets only share a subset of their features, while oftentimes having a different order or representation e.g. latent embeddings for two different image datasets generated by training two similar architectures. Most current few-shot classification approaches sample tasks from a single dataset by selecting a random subset of classes; although it is possible to train a single meta-model on two different image datasets as shown by Munkhdalai & Yu (2017) and Tseng et al. (2020) since the images can be scaled to a fixed size. Further research demonstrates that it is possible to learn a single model across different output sizes (Drumond et al., 2020). Recently, a meta-dataset for few-shot classification of image tasks was also published to promote meta-learning across multiple datasets (Triantafillou et al., 2020). Optimizing a single model across various datasets requires a shared feature space. Thus, it is required to align the features which is achieved by simply rescaling all instances in the case of image data which is not trivial for unstructured data. Recent work relies on preprocessing images to a one-dimensional latent embedding with an additional deep neural network. The authors Rusu et al. (2019) train a Wide Residual Network (Zagoruyko & Komodakis, 2016) on the meta-training data of MiniImageNet (Vinyals et al., 2016) to compute latent embeddings of the data which are then used for few-shot classification, demonstrating state-of-the-art results.
Finding a suitable initialization for deep networks has long been a focus of machine learning research. Especially the initializations of Glorot & Bengio (2010) and later He et al. (2015), which emphasize the importance of a scaled variance that depends on the layer inputs, are widely used. Similar findings are also reported by Cao et al. (2019). Recently, Dauphin & Schoenholz (2019) showed that it is possible to learn a suitable initialization by optimizing the norms of the respective weights. So far, none of these methods tried to learn a common initialization across tasks with different schema.
1 Note that REPTILE does not require validation instances during meta-learning.
We propose a novel feature alignment component named CHAMELEON, which enables state-of-the-art methods to learn how to work on top of tasks whose feature vector differ not only in their length but also their concrete alignment. Our model shares resemblance with scaled dot-product attention popularized by (Vaswani et al., 2017):
Attention(Q, K, V) = softmax(Q K^T / √d_K) V   (3)
where Q, K and V are matrices describing queries, keys and values, and dK is the dimensionality of the keys such that the softmax computes an attention mask which is then multiplied with the values V . In contrast to this, we pretrain the parametrized model CHAMELEON to compute a soft permutation matrix which can realign features across tasks with varying schema when multiplied with V instead of computing a simple attention mask.
Algorithm 1 REPTILE (Nichol et al., 2018b)
Input: Meta-dataset T = {(X_1, Y_1, L_1), ..., (X_|T|, Y_|T|, L_|T|)}, learning rate β
1: Randomly initialize parameters θ of model f
2: for iteration = 1, 2, ... do
3:   Sample task (Xτ, Yτ, Lτ) ∼ T
4:   θ′ ← θ
5:   for k steps = 1, 2, ... do
6:     θ′ ← θ′ − α ∇θ′ Lτ(Yτ, f(Xτ; θ′))
7:   end for
8:   θ ← θ − β(θ′ − θ)
9: end for
10: return parameters θ of model f
3 METHODOLOGY
3.1 PROBLEM SETTING
We describe a classification dataset with vector-shaped predictors (i.e., no images, time series etc.) by a pair (X,Y ) ∈ RN×F × {0, ..., C}N , with predictors X and targets Y , where N denotes the number of instances, F the number of predictors and C the number of classes. Let DF := ⋃ N∈N RN×F × {0, ..., C}N be the space of all such datasets with F predictors and
D := ⋃F∈NDF be the space of any such dataset. Let us also denote the space of all predictor matrices with F predictors by XF := ⋃ N∈N RN×F and all predictor matrices by X := ⋃ F∈N XF . Then a dataset τ = (X,Y ) ∈ D equipped with a predefined training/test split, i.e. the quadruplet τ = (X trainτ , Y train τ , X test τ , Y test τ ) is called a task. A collection of such tasks T ⊂ D is called a metadataset. Similar to splitting a single data set into a training and test part, one can split a meta-dataset T = T train ∪̇ T test. The schema of a task τ then describes not only the number and order, but also the semantics of predictor variables {pτ1 , pτ2 , . . . , pτF } in Xtrainτ . Consider a meta-dataset of correlated tasks T ⊂ D, such that the predictor variables {pτ1 , pτ2 , . . . , pτF } of any individual task τ are contained in a common set of predictor variables {p1, p2, . . . , pK}. Methods like REPTILE and MAML try to find the best initialization for a specific model, in this work referred to as ŷ, to operate on a set T of similar tasks. However, every task τ has to share the same schema of common size K, where similar features shared across tasks are in the same position. A feature-order invariant encoder is needed to map the data representation Xτ of tasks with varying input schema and feature length Fτ to a shared latent representation X̃τ with fixed feature length K:
enc: X → X_K,   Xτ ∈ R^{N×Fτ} ↦ X̃τ ∈ R^{N×K}   (4)
where N represents the number of instances in Xτ , Fτ is the number of features of task τ which varies across tasks, and K is the size of the desired feature space. By combining this encoder with model ŷ that works on a fixed input size K and outputs the predicted target e.g. binary classification, it is possible to apply the REPTILE algorithm to learn an initialization θinit across tasks with different schema. The optimization objective then becomes the meta-loss for the combined network f = ŷ ◦ enc over a set of tasks T :
\operatorname*{arg\,min}_{\theta^{init}} \; \mathbb{E}_{\tau \sim T}\, \mathcal{L}_\tau\left(Y^{test}_\tau, f\left(X^{test}_\tau; \theta^{(u)}_\tau\right)\right) \quad \text{s.t.} \quad \theta^{(u)}_\tau = A^{(u)}\left(X^{train}_\tau, Y^{train}_\tau, \mathcal{L}_\tau, f; \theta^{init}\right) \qquad (5)
where θinit is the set of initial weights for the combined network f consisting of enc with parameters θenc and model ŷ with parameters θŷ, and θ (u) τ are the updated weights after applying the learning procedure A for u iterations on the task τ as defined in Algorithm 1 for the inner updates of REPTILE. It is important to mention that learning one weight parameterization across any heterogeneous set of tasks is extremely difficult since it is most likely impossible to find one initialization for two tasks with a vastly different number and type of features. By contrast, if two tasks share similar features, one can align the similar features to a common representation so that a model can directly learn across different tasks by transforming the tasks as illustrated in Figure 1.
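To make the composition f = ŷ ◦ enc concrete, the following is a minimal PyTorch sketch; the 16-16 hidden layout of ŷ follows Appendix B, while the encoder interface (any module mapping [N, F_τ] to [N, K], e.g. the CHAMELEON encoder of Section 3.2) and all names are assumptions of this sketch.

```python
import torch.nn as nn

class CombinedModel(nn.Module):
    """Sketch of f = y_hat o enc from Equation (5)."""

    def __init__(self, encoder: nn.Module, k: int, n_classes: int):
        super().__init__()
        self.encoder = encoder                       # schema-invariant encoder: [N, F_tau] -> [N, K]
        self.classifier = nn.Sequential(             # y_hat: fixed input size K
            nn.Linear(k, 16), nn.ReLU(),
            nn.Linear(16, 16), nn.ReLU(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):                            # x: [N, F_tau]
        return self.classifier(self.encoder(x))      # logits: [N, n_classes]
```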
3.2 CHAMELEON
Consider a set of tasks where a right stochastic matrix Πτ exists for each task that reorders predictor data Xτ into X̃τ having the same schema for every task τ ∈ T :
\tilde{X}_\tau = X_\tau \cdot \Pi_\tau, \quad \text{where} \qquad (6)

\underbrace{\begin{pmatrix} \tilde{x}_{1,1} & \cdots & \tilde{x}_{1,K} \\ \vdots & \ddots & \vdots \\ \tilde{x}_{N,1} & \cdots & \tilde{x}_{N,K} \end{pmatrix}}_{\tilde{X}_\tau} = \underbrace{\begin{pmatrix} x_{1,1} & \cdots & x_{1,F_\tau} \\ \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,F_\tau} \end{pmatrix}}_{X_\tau} \cdot \underbrace{\begin{pmatrix} \pi_{1,1} & \cdots & \pi_{1,K} \\ \vdots & \ddots & \vdots \\ \pi_{F_\tau,1} & \cdots & \pi_{F_\tau,K} \end{pmatrix}}_{\Pi_\tau}
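To make Equation (6) concrete, the following is a minimal NumPy sketch of such an alignment; the feature names anticipate the fruit example discussed below, and all numeric values are purely illustrative.

```python
import numpy as np

# Task a has features [apples, bananas, melons]; the shared space is
# [apples, lemons, bananas, melons].
X_a = np.array([[1.0, 2.0, 3.0],
                [4.0, 5.0, 6.0]])            # [N=2, F_tau=3]

Pi_a = np.array([[1.0, 0.0, 0.0, 0.0],       # apples  -> position 0
                 [0.0, 0.0, 1.0, 0.0],       # bananas -> position 2
                 [0.0, 0.0, 0.0, 1.0]])      # melons  -> position 3

X_a_aligned = X_a @ Pi_a                     # [N=2, K=4]; the missing lemons column stays zero
print(X_a_aligned)
# [[1. 0. 2. 3.]
#  [4. 0. 5. 6.]]
```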
Every x_{m,n} represents feature n of sample m. Every π_{m,n} represents how much of feature m (from the samples in X_τ) is shifted to position n in the adapted input X̃_τ. Finally, every x̃_{m,n} represents the new feature n of sample m with the adapted shape and size. When two features of a task X_τ are permuted, permuting the corresponding rows of Π_τ yields the same X̃_τ. Since Π_τ is a right stochastic matrix, every row of Π_τ sums to 1, i.e., Σ_i π_{j,i} = 1, so that each value in Π_τ simply states how much a feature is shifted to a corresponding position. For example, consider that task a has the features [apples, bananas, melons] and task b the features [lemons, bananas, apples]. Both can be transformed to the same representation [apples, lemons, bananas, melons] by replacing missing features with zeros and reordering them. This transformation must yield the same result for a and b independent of their feature order. In a real-life scenario, features might come with different names, or their similarity may not be obvious to the human eye. Note that a classic autoencoder is not capable of this, as it is not invariant to the order of the features. Our proposed component, denoted by Φ, takes a task as
input and outputs the corresponding reordering matrix:
\Phi(X_\tau, \theta_{enc}) = \hat{\Pi}_\tau \qquad (7)
The function Φ is a neural network parameterized by θenc. It consists of three 1D-convolutions, where the last one is the output layer that estimates the alignment matrix via a softmax activation. The input is first transposed to size [Fτ ×N ] (where N is the number of samples) i.e., each feature is represented by a vector of instances. Each convolution has kernel length 1 (as the order of instances is arbitrary and thus needs to be permutation invariant) and a channel output size of 8, 16, and lastly K. The result is a reordering matrix displaying the relation of every original feature to each of the K features in the target space. Each of these vectors passes through a softmax layer, computing the ratio of features in Xτ shifted to each position of X̃τ . Finally, the reordering matrix can be multiplied with the input to compute the aligned task as defined in Equation (6). By using a kernel length of 1 in combination with the final matrix multiplication, the full architecture becomes permutation invariant in the feature dimension. Column-wise permuting the features of an input task leads to the corresponding row-wise permutation of the reordering matrix. Thus, multiplying both matrices results in the same aligned output independent of permutation. The overall architecture can be seen in Figure 2. The encoder necessary for training across tasks with different predictor vectors with REPTILE by optimizing Equation (5) is then given as:
\mathrm{enc}: X_\tau \longmapsto X_\tau \cdot \Phi(X_\tau, \theta_{enc}) = X_\tau \cdot \hat{\Pi}_\tau \qquad (8)
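To make the architecture concrete, here is a minimal PyTorch sketch of Φ and the resulting encoder; the 8/16/K channel sizes, the kernel length of 1, and the final softmax follow the description above, while the fixed number of instances per task, the ReLU activations, and all names are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class Chameleon(nn.Module):
    """Sketch of the alignment network Phi: maps a task X of shape [N, F_tau]
    to a soft reordering matrix Pi_hat of shape [F_tau, K]."""

    def __init__(self, n_instances: int, k: int):
        super().__init__()
        # Kernel length 1 processes every feature independently, which makes the
        # network permutation invariant in the feature dimension.
        self.conv1 = nn.Conv1d(n_instances, 8, kernel_size=1)
        self.conv2 = nn.Conv1d(8, 16, kernel_size=1)
        self.conv3 = nn.Conv1d(16, k, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [N, F_tau]; Conv1d expects [batch, channels, length] = [1, N, F_tau],
        # i.e. every feature is represented by its vector of N instances.
        h = x.unsqueeze(0)
        h = torch.relu(self.conv1(h))                    # [1, 8,  F_tau]
        h = torch.relu(self.conv2(h))                    # [1, 16, F_tau]
        h = self.conv3(h)                                # [1, K,  F_tau]
        # Softmax over K for every original feature -> rows of Pi_hat sum to 1.
        return torch.softmax(h.squeeze(0).t(), dim=1)    # [F_tau, K]

def enc(x: torch.Tensor, phi: Chameleon) -> torch.Tensor:
    """Equation (8): align the task by multiplying it with Pi_hat."""
    return x @ phi(x)                                    # [N, K]
```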
3.3 REORDERING TRAINING
Joint-training the network ŷ ◦ enc as described above will not by itself teach CHAMELEON, denoted by Φ, how to reorder the features to a shared representation. That is why it is necessary to train Φ specifically with the objective of reordering features (reordering training). In order to do so, we optimize Φ to align novel tasks by training on a set of tasks for which the reordering matrix Πτ exists such that it maps τ to the shared representation. In other words, we require a meta-dataset that contains not only a set of similar tasks τ ∈ T with different schema, but also the position of each feature in the shared representation, given by a permutation matrix. If Πτ is known beforehand for each τ ∈ T , optimizing CHAMELEON becomes a simple supervised classification task based on predicting the new position of each feature in τ. Thus, we can minimize the expected reordering loss over the meta-dataset:
\theta_{enc} = \operatorname*{arg\,min}_{\theta_{enc}} \; \mathbb{E}_{\tau \sim T}\, \mathcal{L}_\Phi\left(\Pi_\tau, \hat{\Pi}_\tau\right) \qquad (9)
where LΦ is the softmax cross-entropy loss, Πτ is the ground truth (a one-hot encoding of the new position of each variable), and Π̂τ is the prediction. This training procedure is shown in Algorithm 2. The trained CHAMELEON model can then be used to compute Π̂τ for any unseen task τ ∈ T.
Algorithm 2 Reordering Training
Input: Meta-dataset T = {(X_1, Π_1), ..., (X_{|T|}, Π_{|T|})}, latent dimension K, learning rate γ
1: Randomly initialize parameters θ_enc of the CHAMELEON model
2: for training iteration = 1, 2, ... do
3:   Randomly sample τ ∼ T
4:   θ_enc ← θ_enc − γ ∇ L_Φ(Π_τ, Φ(X_τ, θ_enc))
5: end for
6: return trained parameters θ_enc of the CHAMELEON model
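A minimal sketch of Algorithm 2 follows, assuming a hypothetical sample_reordering_task helper that returns a task together with the shared-space position index of each of its features (i.e., Π_τ in index form); the Adam learning rate mirrors Appendix B, everything else is an assumption.

```python
import torch
import torch.nn.functional as F

def reordering_training(phi, sample_reordering_task, n_iters=4000, lr=1e-4):
    """Sketch of Algorithm 2: supervised pretraining of Chameleon.
    sample_reordering_task() is assumed to return (x, target) where x has shape
    [N, F_tau] and target[f] is the integer position of feature f in the shared space."""
    opt = torch.optim.Adam(phi.parameters(), lr=lr)
    for _ in range(n_iters):
        x, target = sample_reordering_task()
        pi_hat = phi(x)                                   # [F_tau, K], rows sum to 1
        # Softmax cross-entropy between predicted and true feature positions.
        loss = F.nll_loss(torch.log(pi_hat + 1e-8), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phi
```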
After this training procedure, we can use the learned weights as initialization for Φ before optimizing ŷ ◦ enc with REPTILE without further using LΦ. Experiments show that this procedure improves our results significantly compared to only optimizing the joint meta-loss.
Training the CHAMELEON component to reorder similar tasks to a shared representation requires not only a meta-dataset but one where the true reordering matrix Πτ is provided for every task. In application, this means manually matching similar features of different training tasks so that novel tasks can be matched automatically. However, it is possible to sample a broad number of tasks from a single dataset by sampling smaller sub-tasks from it, selecting a random subset of features in arbitrary order for N random instances. Thus, it is not necessary to manually match the features, since all these sub-tasks share the same Πτ apart from the respective permutation of the rows, as mentioned above.
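The following sketch illustrates this sub-task sampling from a single dataset; the 40-60% feature fraction and the 10 instances per class follow Appendix B, while the function itself and its interface are assumptions of this sketch.

```python
import numpy as np

def sample_subtask(X, y, feature_frac=(0.4, 0.6), n_per_class=10, rng=None):
    """Sketch: sample a sub-task (random feature subset in arbitrary order,
    stratified instances) plus the ground-truth shared-space position of each
    sampled feature. Assumes every class has at least n_per_class instances."""
    rng = rng or np.random.default_rng()
    n_total = X.shape[1]
    lo, hi = feature_frac
    n_feat = max(1, rng.integers(int(lo * n_total), int(hi * n_total) + 1))
    feats = rng.permutation(n_total)[:n_feat]         # random subset, arbitrary order
    rows = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_per_class, replace=False)
        for c in np.unique(y)
    ])
    # feats[i] is the position of sub-task feature i in the full schema,
    # i.e. the classification target used for the reordering training.
    return X[rows][:, feats], y[rows], feats
```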
4 EXPERIMENTAL RESULTS
Baseline and Setup In order to evaluate the proposed method, we investigate the combined model ŷ ◦ enc with the initialization for enc obtained by pretraining CHAMELEON as defined in Equation 9 before using REPTILE to jointly optimize ŷ ◦ enc. We compare the performance with an initialization obtained by running REPTILE on the base model ŷ by training on tasks padded to a fixed size K as ŷ is not schema invariant. Both initializations are then compared to the performance of model ŷ with random Glorot initialization (Glorot & Bengio, 2010) (referred to as Random). In all of our experiments, we measure the performance of a model and its initialization by evaluating the validation data of a task after performing three update steps on the respective training data. All experiments are conducted in two variants: In Split experiments, test tasks contain novel features in addition to features seen during meta-training. In contrast, test tasks in No-Split experiments only consist of features seen during meta-training. While the Split experiments evaluate the performance of the model when faced with novel features during meta-testing, the No-Split experiments can be used to compare against a perfect alignment by repeating the baseline experiment with tasks that are already aligned (referred to as Oracle). A detailed description of the utilized models is found in Appendix B.
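As a concrete illustration of this evaluation protocol, the following is a minimal sketch; the three adaptation steps and the Adam learning rate follow the text and Appendix B, while the argument layout and all names are assumptions of this sketch.

```python
import copy
import torch

def evaluate_after_adaptation(model, x_train, y_train, x_val, y_val, n_steps=3, lr=1e-3):
    """Sketch: adapt a copy of the current model for three gradient steps on the
    task's training split, then report validation accuracy."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(n_steps):
        opt.zero_grad()
        loss_fn(adapted(x_train), y_train).backward()
        opt.step()
    with torch.no_grad():
        preds = adapted(x_val).argmax(dim=1)
    return (preds == y_val).float().mean().item()
```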
Meta-Datasets For our main experiments, we utilize a single dataset as meta-dataset by sampling the training and test tasks from it. This allows us to evaluate our method on different domains without matching related datasets since Π̂τ is naturally given for a subset of permuted features. Novel features can also be introduced during testing by splitting not only the instances but also the features of a dataset in train and test partition (Split). Training tasks are then sampled by selecting a random subset of the training features in arbitrary order for N instances. Stratified sampling guarantees that test tasks contain both features from train and test while sampling the instances from the test set only. For all experiments, 75% of the instances are used for reordering training of CHAMELEON and joint-training of the full architecture, and 25% for sampling test tasks. For Split experiments, we further impose a train-test split on the features (20% of the features are restricted to the test split). Our work is built on top of REPTILE (Nichol et al., 2018b) but can be used in conjunction with any model-agnostic meta-learning method. We opted to use REPTILE since it does not require second-order derivatives, and the code is publicly available (Nichol et al., 2018a) while also being easy to adapt to our problem.
Main Results We evaluate our approach using the OpenML-CC18 benchmark (Bischl et al., 2017), from which we selected 23 datasets for few-shot classification. The details of all datasets utilized in this work are summarized in Appendix B. The results in Figure 3 display the model performance after performing three update steps on a novel test task to illustrate the faster convergence. The graph shows a clear performance lift when using the proposed architecture after pretraining it to reorder tasks. This demonstrates, to the best of our knowledge, the first few-shot classification approach that successfully learns across tasks with varying schemas (contribution 2). Furthermore, in the No-Split results one can see that the performance of the proposed method approaches the Oracle performance, which suggests an ideal feature alignment. When adding novel features during test time (Split), CHAMELEON is still able to outperform the other setups, although with a lower margin.
Ablations We visualize the result of pretraining CHAMELEON on the Wine dataset (from OpenML-CC18) in Figure 6 to show that the proposed model is capable of learning the correct alignment between tasks. One can see that the component manages to learn the true feature position in almost all cases. Moreover, this illustration also shows that CHAMELEON can be used to compute the similarity between different features by indicating which pairs are confused most often. For example, features two and four show a strong correlation, which is very plausible since they depict the free sulfur dioxide and total sulfur dioxide levels of the wine. This demonstrates that our proposed architecture is able to learn an alignment between different feature spaces (contribution 1).
Furthermore, we repeat the experiments on the OpenML-CC18 benchmark in two ablation studies to measure the impact of joint-training and the proposed reordering training (Algorithm 2). First, we do not train CHAMELEON with Equation 9, but only jointly train ŷ ◦ enc with REPTILE to evaluate the influence of adding additional parameters to the network without pretraining it. Secondly, we use REPTILE only to update the initialization for the parameters of ŷ while freezing the pretrained parameters of enc in order to assess the effect of joint-training both network components. These two variants are referred to as Untrain and Frozen. We compare these ablations to our approach by conducting a Wilcoxon signed-rank test (Wilcoxon, 1992) with Holm’s alpha correction (Holm, 1979). The results are displayed in the form of a critical difference diagram (Demšar, 2006; Ismail Fawaz et al., 2019) presented in Figure 4. The diagram shows the ranked performance of each model and whether they are statistically different. The results confirm that our approach leads to statistically significant improvements over the random and REPTILE baselines when pretraining CHAMELEON. Similarly, our approach is also significantly better than jointly training the full architecture without pretraining CHAMELEON (UNTRAIN), confirming that the improvements do not stem from the increased model capacity. Finally, comparing the results to the FROZEN model shows improvements that are not significant, indicating that a near-optimal alignment was already found during pretraining. A detailed overview for all experimental results is given in Appendix C.
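For reference, a minimal sketch of the underlying paired test, assuming per-dataset accuracy arrays for two variants; the Holm correction applied across all pairwise p-values is omitted here.

```python
from scipy.stats import wilcoxon

def paired_test(acc_variant_a, acc_variant_b):
    """Paired Wilcoxon signed-rank test over the per-dataset accuracies of two
    model variants (e.g. ours vs. Untrain across the 23 OpenML-CC18 datasets)."""
    statistic, p_value = wilcoxon(acc_variant_a, acc_variant_b)
    return statistic, p_value
```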
Latent Embeddings Experiments Learning to align features is only feasible for unstructured data, since this approach would not preserve any structure. However, it is a widespread practice among few-shot classification methods, and computer vision approaches in general, to use a pretrained model to embed image data into a latent space before applying further operations. We can use CHAMELEON to align the latent embeddings of image datasets that are generated with different networks. Thus, latent embeddings can be used for meta-training while evaluating on novel tasks that are not yet embedded, for example when the embedding network is not available or when the complexity of different datasets requires models with different capacities to extract useful features. We conduct an additional experiment for which we combine two similar image datasets, namely EMNIST-Digits and EMNIST-Letters (Cohen et al., 2017). Similar to the work of Rusu et al. (2019), we train one neural network on each dataset in order to generate similar latent embeddings with different schema, namely 32 and 64 latent features. Afterward, we can sample training tasks from one embedding while
evaluating on tasks sampled from the other one. In the combined experiments, the full training is performed on the EMNIST-Letters dataset, while EMNIST-Digits is used for testing. Splitting the features is not necessary as the train and test features come from different datasets. The results of this experiment are displayed in Figure 5. It shows the accuracy on EMNIST-Digits averaged across 5 runs with 1,600 generated tasks per run during the REPTILE training on EMNIST-Letters for the different model variants. Each test task is evaluated by performing 3 update steps on the training samples and measuring the accuracy on its validation data afterward. One can see that our proposed approach reports a significantly higher accuracy than the REPTILE baseline after performing three update steps on a task (contribution 4), showing that CHAMELEON is able to transfer knowledge from one dataset to another. Moreover, simply adding CHAMELEON without pretraining it to reorder tasks (Untrain) does not lead to any improvement. This might be caused by the CHAMELEON component having far fewer parameters than the base network. Only by adding the reordering training does the model manage to converge to a suitable initialization. In contrast to our experiments on the OpenML datasets, freezing the weights of CHAMELEON after pretraining also fails to give an improvement, suggesting that the pretraining did not manage to capture the ideal alignment, but enables learning it during joint-training. Our code is available at BLIND-REVIEW.
5 CONCLUSION
In this paper, we presented, to the best of our knowledge, the first approach to tackle few-shot classification for unstructured tasks with different schema. Our model component CHAMELEON is capable of embedding tasks to a common representation by computing a matrix that can reorder the features. For this, we propose a novel pretraining framework that is shown to learn useful permutations across tasks in a supervised fashion without requiring actual labels. In experiments on 23 datasets of the OpenML-CC18 benchmark, our method shows significant improvements even when presented with features not seen during training. Furthermore, by aligning different latent embeddings we demonstrate how a single meta-model can be used to learn across multiple image datasets each embedded with a distinct network.
A APPENDIX - INNER TRAINING
We visualize the inner training for one of the experiments in Figure 7. It shows two exemplary snapshots of the inner test loss when training on a sampled task with the current initialization θinit before meta-learning and after 20,000 meta-epochs. It is compared to the test loss of the model when it is trained on the same task starting with the random initialization. For this experiment, models were trained until convergence. Note that both losses are not identical in meta-epoch 0 because the CHAMELEON component is already pretrained. The snapshots show the expected REPTILE behavior, namely a faster convergence when using the currently learned initialization compared to a random one.
B APPENDIX - EXPERIMENTAL DETAILS
The features of each dataset are normalized between 0 and 1. The Split experiments are limited to the 21 datasets which have more than four features in order to perform a feature split. We sample 10 training and 10 validation instances per label for a new task, and 16 tasks per meta-batch. The number of classes in a task is given by the number of classes of the respective dataset, as shown in Table 1. During the reordering-training phase and the inner updates of REPTILE, specified in line 6 of Algorithm 1, we use the ADAM optimizer (Kingma & Ba, 2014) with initial learning rates of 0.0001 and 0.001, respectively. The meta-updates of REPTILE are carried out with a learning rate β of 0.01. The reordering-training phase is run for 4000 epochs. All results reported in this work are averaged over 5 runs.
OpenML-CC18 All experiments on the OpenML-CC18 benchmark are conducted with the same model architecture. The base model ŷ is a feed-forward neural network with two dense hidden layers that have 16 neurons each. CHAMELEON consists of two 1D-convolutions with 8 and 16 filters respectively and a final convolution that maps the task to the feature length K, as shown in Figure 2. We selected datasets that have up to 33 features and a minimum number of 90 instances per class. We limited the number of features and model capacity because this work seeks to establish a proof of concept for learning across data with different schemas. In contrast, very high-dimensional data would require tuning a more complex CHAMELEON architecture. The details for each dataset are summarized in Table 1. When sampling a task in Split, we sample between 40% and 60% of the respective training features. For test tasks in Split experiments, 20% of the features are sampled from the set of test features to evaluate performance on similar tasks with partially novel features. For each
experimental run, the different variants are tested on the same data split, and we sample 1600 test tasks beforehand, while the training tasks are randomly sampled each epoch. All experiments are repeated five times with different instance splits and, in the case of Split, different feature splits, and the results are averaged.
Latent Embeddings Both networks used for generating the latent embeddings consist of two convolutional and two dense hidden layers with 64 neurons each, but the number of neurons in the output layer is 32 for EMNIST-Digits and 64 for EMNIST-Letters. For these experiments, the CHAMELEON component still has two convolutional layers with 8 and 16 filters, while we use a larger base network with two feed-forward layers with 64 neurons each. All experimental results are averaged over five runs.
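A minimal sketch of such an embedding network is given below; the two convolutional and two dense hidden layers with 64 units and the variable output size follow the description above, while the kernel sizes and the 28x28 single-channel input resolution are assumptions of this sketch.

```python
import torch.nn as nn

def embedding_network(out_dim: int) -> nn.Module:
    """Sketch of an embedding network: out_dim is 64 for EMNIST-Letters and
    32 for EMNIST-Digits."""
    return nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(),                                  # assumes 28x28 inputs -> 64*28*28 features
        nn.Linear(64 * 28 * 28, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, out_dim),
    )
```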
C APPENDIX - TABLES WITH EXPERIMENTS RESULTS
The following tables show the detailed results of our experiments on the OpenML-CC18 datasets for the Split and No-Split settings. The tables contain the loss and accuracy for the base model ŷ trained from a random initialization and with REPTILE, and for our proposed model ŷ ◦ enc together with the additional ablation studies Untrain and Frozen:
D PROBLEM SETTING: GENERAL MULTI-TASK LEARNING.
We describe a classification dataset with vector-shaped predictors (i.e., no images, time series, etc.) by a pair (X, Y) ∈ ℝ^{N×F} × {0, ..., C}^N, with predictors X and targets Y, where N denotes the number of instances, F the number of predictors and C the number of classes. Let D_F := ⋃_{N∈ℕ} ℝ^{N×F} × {0, ..., C}^N be the space of all such datasets with F predictors and D := ⋃_{F∈ℕ} D_F be the space of any such dataset. Let us also denote the space of all predictor matrices with F predictors by 𝒳_F := ⋃_{N∈ℕ} ℝ^{N×F} and the space of all predictor matrices by 𝒳 := ⋃_{F∈ℕ} 𝒳_F. Then a dataset τ = (X, Y) ∈ D equipped with a predefined training/test split, i.e. the quadruplet τ = (X^train_τ, Y^train_τ, X^test_τ, Y^test_τ), is called a task. A collection of such tasks T ⊂ D is called a meta-dataset. Similar to splitting a single dataset into a training and test part, one can split a meta-dataset T = T^train ∪̇ T^test. Consider a meta-dataset of correlated tasks T ⊂ D, such that the predictor variables {p^τ_1, p^τ_2, ..., p^τ_F} of any individual task τ are contained in a common set of predictor variables {p_1, p_2, ..., p_K}. As elucidated in the previous section, our goal is to construct an encoder that learns to match these predictors and map the features of any task τ ∈ T into a shared latent space ℝ^K:

\mathrm{enc}: \mathcal{X} \longrightarrow \mathcal{X}_K, \quad X \in \mathbb{R}^{N \times F} \longmapsto \tilde{X} \in \mathbb{R}^{N \times K} \qquad (10)

This encoder can be combined with a parametric model of fixed input size ŷ: ℝ^K → {0, 1} (e.g., a neural network or an SVM) such that for the joint model ŷ ◦ enc an initialization θ^init can be learned via MAML or REPTILE across all tasks, even when those may not have the same predictor vector. Just as with MAML, this initialization facilitates rapid convergence of the combined model ŷ ◦ enc on any new, previously unseen task T ∈ T^test. More explicitly, the ultimate goal is to minimize the meta test loss
\mathcal{L}(\theta^{init}) := \mathbb{E}_{T_\tau \sim T^{test}}\, \mathcal{L}_\tau\left(Y^{test}_\tau, \; \hat{y} \circ \mathrm{enc}\left(X^{test}_\tau; \theta^{(u)}_\tau\right)\right) \qquad (11)
where L_τ is the task-specific loss (e.g., misclassification rate) of the model on the test data of T_τ, using the updated parameters θ^(u)_τ. The latter are the parameters of the joint model ŷ ◦ enc obtained by minimizing L_τ on the training data (X^train_τ, Y^train_τ) of T_τ via some iterative learning algorithm A (e.g., gradient descent) for u iterations.
\theta^{(u)}_\tau = A^{(u)}\left(X^{train}_\tau, Y^{train}_\tau, \mathcal{L}_\tau, \hat{y} \circ \mathrm{enc}; \theta^{init}\right) \qquad (12)
MAML and REPTILE solve sub-problems in which the number F of features is fixed and the predictors of all tasks are the same and aligned, i.e., the same predictor always occurs at the same position within the predictor vector, so that the identity can be used as the predictor encoder. This problem can alternatively be described as a supervised learning problem with a multivariate or structured target. | 1. What is the main contribution of the paper in the field of few-shot classification?
2. How does the proposed approach handle different predictor schemas?
3. What are the strengths of the paper regarding its experimental results and analysis?
4. Are there any limitations or potential improvements regarding the approach's adaptability to multi-label tasks?
5. Are there any additional analyses or discussions that could enhance the paper's findings? | Review | Review
Chameleon: Learning Model Initializations Across Tasks With Different Schemas
The paper provides an interesting direction in the few-shot classification field. In particular, it proposes a model that learns to align different predictor schemas to a common representation. The paper also demonstrates how current meta-learning approaches can successfully learn a model initialisation across tasks with different schemas as long as they share some variables with respect to their type or semantics.
The paper takes on an interesting facet of few-shot classification: an encoder model that aligns different predictor schemas to a common representation. It tackles the problem by using 1D convolutions (three of them) to transform the input features to the K-feature target space and learning the alignment from the data itself. Comprehensive experiments have been done with quantitative results and analysis to show the effectiveness of the proposed approach; the results are convincing, and the code is provided to support the reproducibility of the results.
Overall performance is quite good; however, it would be a good study to have an analysis of the different datasets as to how balanced/unbalanced they are, how this affects the performance, the nature of the features, etc. Also, I would like the authors to discuss how suitable/adaptable this approach will be for multi-label tasks and what kind of modifications (if any) are to be made.
The idea of encoding different predictor schemas to a common representation is quite interesting, and comprehensive experiments and a supporting ablation study have been conducted.
We propose a novel feature alignment component named CHAMELEON, which enables state-of-the-art methods to learn how to work on top of tasks whose feature vector differ not only in their length but also their concrete alignment. Our model shares resemblance with scaled dot-product attention popularized by (Vaswani et al., 2017):
Attention(Q,K, V ) = softmax( QKT√ dK )V (3)
where Q, K and V are matrices describing queries, keys and values, and dK is the dimensionality of the keys such that the softmax computes an attention mask which is then multiplied with the values V . In contrast to this, we pretrain the parametrized model CHAMELEON to compute a soft permutation matrix which can realign features across tasks with varying schema when multiplied with V instead of computing a simple attention mask.
Algorithm 1 REPTILE Nichol et al. (2018b) Input: Meta-dataset T = {(X1, Y1,L1), ..., (X|T |, Y|T |,L|T |)}, learning rate β
1: Randomly initialize parameters θ of model f 2: for iteration = 1, 2, ... do 3: Sample task (Xτ , Yτ ,Lτ ) ∼ T 4: θ′ ← θ 5: for k steps = 1,2,... do 6: θ′ ← θ′ − α∇θ′Lτ (Yτ , f(Xτ ; θ′)) 7: end for 8: θ ← θ − β(θ′ − θ) 9: end for
10: return parameters θ of model f
3 METHODOLOGY
3.1 PROBLEM SETTING
We describe a classification dataset with vector-shaped predictors (i.e., no images, time series etc.) by a pair (X,Y ) ∈ RN×F × {0, ..., C}N , with predictors X and targets Y , where N denotes the number of instances, F the number of predictors and C the number of classes. Let DF := ⋃ N∈N RN×F × {0, ..., C}N be the space of all such datasets with F predictors and
D := ⋃F∈NDF be the space of any such dataset. Let us also denote the space of all predictor matrices with F predictors by XF := ⋃ N∈N RN×F and all predictor matrices by X := ⋃ F∈N XF . Then a dataset τ = (X,Y ) ∈ D equipped with a predefined training/test split, i.e. the quadruplet τ = (X trainτ , Y train τ , X test τ , Y test τ ) is called a task. A collection of such tasks T ⊂ D is called a metadataset. Similar to splitting a single data set into a training and test part, one can split a meta-dataset T = T train ∪̇ T test. The schema of a task τ then describes not only the number and order, but also the semantics of predictor variables {pτ1 , pτ2 , . . . , pτF } in Xtrainτ . Consider a meta-dataset of correlated tasks T ⊂ D, such that the predictor variables {pτ1 , pτ2 , . . . , pτF } of any individual task τ are contained in a common set of predictor variables {p1, p2, . . . , pK}. Methods like REPTILE and MAML try to find the best initialization for a specific model, in this work referred to as ŷ, to operate on a set T of similar tasks. However, every task τ has to share the same schema of common size K, where similar features shared across tasks are in the same position. A feature-order invariant encoder is needed to map the data representation Xτ of tasks with varying input schema and feature length Fτ to a shared latent representation X̃τ with fixed feature length K:
enc: X −→ XK , Xτ ∈ RN×Fτ 7−→ X̃τ ∈ RN×K (4)
where N represents the number of instances in Xτ , Fτ is the number of features of task τ which varies across tasks, and K is the size of the desired feature space. By combining this encoder with model ŷ that works on a fixed input size K and outputs the predicted target e.g. binary classification, it is possible to apply the REPTILE algorithm to learn an initialization θinit across tasks with different schema. The optimization objective then becomes the meta-loss for the combined network f = ŷ ◦ enc over a set of tasks T :
argmin θinit
Eτ∼T Lτ ( Y testτ , f ( X testτ ; θ (u) τ )) s.t. θ(u)τ = A(u) ( X trainτ , Y train τ , Lτ , f ; θ init ) (5)
where θinit is the set of initial weights for the combined network f consisting of enc with parameters θenc and model ŷ with parameters θŷ, and θ (u) τ are the updated weights after applying the learning procedure A for u iterations on the task τ as defined in Algorithm 1 for the inner updates of REPTILE. It is important to mention that learning one weight parameterization across any heterogeneous set of tasks is extremely difficult since it is most likely impossible to find one initialization for two tasks with a vastly different number and type of features. By contrast, if two tasks share similar features, one can align the similar features to a common representation so that a model can directly learn across different tasks by transforming the tasks as illustrated in Figure 1.
3.2 CHAMELEON
Consider a set of tasks where a right stochastic matrix Πτ exists for each task that reorders predictor data Xτ into X̃τ having the same schema for every task τ ∈ T :
X̃τ = Xτ ·Πτ ,where (6) x̃1,1 . . . x̃1,K... . . . ... x̃N,1 . . . x̃N,K ︸ ︷︷ ︸
X̃τ
= x1,1 . . . x1,Fτ... . . . ... xN,1 . . . xN,Fτ ︸ ︷︷ ︸
Xτ
· π1,1 . . . π1,K... . . . ... πFτ ,1 . . . πFτ ,K ︸ ︷︷ ︸
Πτ
Every xm,n represents the feature n of sample m. Every πm,n represent how much of feature m (from samples in Xτ ) should be shifted to position n in the adapted input X̃τ . Finally, every x̃m,n represent the new feature n of sample m in Xτ with the adpated shape and size. In order to achieve the same X̃τ when permuting two features of a task Xτ , we must simply permute the corresponding rows in Πτ to achieve the same X̃τ . Since Πτ is a right stochastic matrix, the summation for every row of Πτ is set to be equal to 1 as in ∑ i πj,i = 1, so that each value in Πτ simply states how much a feature is shifted to a corresponding position. For example: Consider that task a has features [apples, bananas, melons] and task b features [lemons, bananas, apples]. Both can be transformed to the same representation [apples, lemons, bananas, melons] by replacing missing features with zeros and reordering them. This transformation must have the same result for a and b independent of their feature order. In a real life scenario, features might come with different names or sometimes their similarity is not clear to the human eye. Note that a classic autoencoder is not capable of this as it is not invariant to the order of the features. Our proposed component, denoted by Φ, takes a task as
input and outputs the corresponding reordering matrix:
Φ(Xτ , θenc) = Π̂τ (7)
The function Φ is a neural network parameterized by θenc. It consists of three 1D-convolutions, where the last one is the output layer that estimates the alignment matrix via a softmax activation. The input is first transposed to size [Fτ ×N ] (where N is the number of samples) i.e., each feature is represented by a vector of instances. Each convolution has kernel length 1 (as the order of instances is arbitrary and thus needs to be permutation invariant) and a channel output size of 8, 16, and lastly K. The result is a reordering matrix displaying the relation of every original feature to each of the K features in the target space. Each of these vectors passes through a softmax layer, computing the ratio of features in Xτ shifted to each position of X̃τ . Finally, the reordering matrix can be multiplied with the input to compute the aligned task as defined in Equation (6). By using a kernel length of 1 in combination with the final matrix multiplication, the full architecture becomes permutation invariant in the feature dimension. Column-wise permuting the features of an input task leads to the corresponding row-wise permutation of the reordering matrix. Thus, multiplying both matrices results in the same aligned output independent of permutation. The overall architecture can be seen in Figure 2. The encoder necessary for training across tasks with different predictor vectors with REPTILE by optimizing Equation (5) is then given as:
enc: Xτ 7−→ Xτ · Φ(Xτ , θenc) = Xτ · Π̂τ (8)
3.3 REORDERING TRAINING
Only joint-training the network ŷ ◦ enc as described above, will not teach CHAMELEON denoted by Φ how to reorder the features to a shared representation. That is why it is necessary to train Φ specifically with the objective of reordering features (reordering training). In order to do so, we optimize Φ to align novel tasks by training on a set of tasks for which the reordering matrix Πτ exists such that it maps τ to the shared representation. In other words, we require a meta-dataset that contains not only a set of similar tasks τ ∈ T with different schema, but also the position for each feature in the shared representation given by a permutation matrix. If Πτ is known beforehand for each τ ∈ T , optimizing Chameleon becomes a simple supervised classification task based on predicting the new position of each feature in τ . Thus, we can minimize the expected reordering loss over the meta-dataset:
θenc = argmin θenc
Eτ∼T LΦ ( Πτ , Π̂τ ) (9)
where LΦ is the softmax cross-entropy loss, Πτ is the ground-truth (one-hot encoding of the new position for each variable), and Π̂τ is the prediction. This training procedure can be seen in Algorithm (2). The trained CHAMELEON model can then be used to compute the Πτ for any unseen task τ ∈ T .
Algorithm 2 Reordering Training Input: Meta-dataset T = {(X1,Π1), ..., (X|T |,Π|T |)}, latent dimension K, learning rate γ
1: Randomly initialize parameters θenc of the CHAMELEON model 2: for training iteration = 1, 2, ... do 3: randomly sample τ ∼ T 4: θenc ←− θenc − γ∇LΦ(Πτ ,Φ(Xτ , θenc)) 5: end for 6: return Trained parameters θenc of the CHAMELEON model
After this training procedure, we can use the learned weights as initialization for Φ before optimizing ŷ ◦ enc with REPTILE without further using LΦ. Experiments show that this procedure improves our results significantly compared to only optimizing the joint meta-loss.
Training the CHAMELEON component to reorder similar tasks to a shared representation not only requires a meta-dataset but one where the true reordering matrix Πτ is provided for every task. In application, this means manually matching similar features of different training tasks so that novel tasks can be matched automatically. However, it is possible to sample a broad number of tasks from a
single dataset by sampling smaller sub-tasks from it, selecting a random subset of features in arbitrary order for N random instances. Thus, it is not necessary to manually match the features since all these sub-tasks share the same Π̂τ apart from the respective permutation of the rows as mentioned above.
4 EXPERIMENTAL RESULTS
Baseline and Setup In order to evaluate the proposed method, we investigate the combined model ŷ ◦ enc with the initialization for enc obtained by pretraining CHAMELEON as defined in Equation 9 before using REPTILE to jointly optimize ŷ ◦ enc. We compare the performance with an initialization obtained by running REPTILE on the base model ŷ by training on tasks padded to a fixed size K as ŷ is not schema invariant. Both initializations are then compared to the performance of model ŷ with random Glorot initialization (Glorot & Bengio, 2010) (referred to as Random). In all of our experiments, we measure the performance of a model and its initialization by evaluating the validation data of a task after performing three update steps on the respective training data. All experiments are conducted in two variants: In Split experiments, test tasks contain novel features in addition to features seen during meta-training. In contrast, test tasks in No-Split experiments only consist of features seen during meta-training. While the Split experiments evaluate the performance of the model when faced with novel features during meta-testing, the No-Split experiments can be used to compare against a perfect alignment by repeating the baseline experiment with tasks that are already aligned (referred to as Oracle). A detailed description of the utilized models is found in Appendix B.
Meta-Datasets For our main experiments, we utilize a single dataset as meta-dataset by sampling the training and test tasks from it. This allows us to evaluate our method on different domains without matching related datasets since Π̂τ is naturally given for a subset of permuted features. Novel features can also be introduced during testing by splitting not only the instances but also the features of a dataset in train and test partition (Split). Training tasks are then sampled by selecting a random subset of the training features in arbitrary order forN instances. Stratified sampling guarantees that test tasks contain both features from train and test while sampling the instances from the test set only. For all experiments, 75% of the instances are used for reordering training of CHAMELEON and joint-training of the full architecture, and 25% for sampling test tasks. For Split experiments, we further impose a train-test split on the features (20% of the features are restricted to the test split). Our work is built on top of REPTILE (Nichol et al., 2018b) but can be used in conjunction with any model-agnostic meta-learning method. We opted to use REPTILE since it does not require second-order derivatives, and the code is publicly available (Nichol et al., 2018a) while also being easy to adapt to our problem.
Main Results We evaluate our approach using the OpenML-CC18 benchmark (Bischl et al., 2017) from which we selected 23 datasets for few-shot classification. The details of all datasets utilized in this work are summarized in Appendix B. The results in Figure 3 display the model performance after performing three update steps on a novel test task to illustrate the faster convergence. The graph shows a clear performance lift when using the proposed architecture after pretraining it to reorder tasks. This demonstrates to the best of our knowledge the first few-shot classification approach, which successfully learns across tasks with varying schemas (contribution 2). Furthermore, in the No-Split results one can see that the performance of the proposed method approaches the Oracle performance, which suggests an ideal feature alignment. When adding novel features during test time (Split) CHAMELEON is still able to outperform the other setups although with a lower margin.
Ablations We visualize the result of pretraining CHAMELEON on the Wine dataset (from OpenMLCC18) in Figure 6 to show that the proposed model is capable of learning the correct alignment between tasks. One can see that the component manages to learn the true feature position in almost all cases. Moreover, this illustration does also show that CHAMELEON can be used to compute the similarity between different features by indicating which pairs are confused most often. For example, features two and four are showing a strong correlation, which is very plausible since they depict the free sulfur dioxide and total sulfur dioxide level of the wine. This demonstrates that our proposed architecture is able to learn an alignment between different feature spaces (contribution 1).
Furthermore, we repeat the experiments on the OpenML-CC18 benchmark in two ablation studies to measure the impact of joint-training and the proposed reordering training (Algorithm 2). First, we do not train CHAMELEON with Equation 9, but only jointly train ŷ ◦ enc with REPTILE to evaluate the influence of adding additional parameters to the network without pretraining it. Secondly, we use REPTILE only to update the initialization for the parameters of ŷ while freezing the pretrained parameters of enc in order to assess the effect of joint-training both network components. These two variants are referred to as Untrain and Frozen. We compare these ablations to our approach by conducting a Wilcoxon signed-rank test (Wilcoxon, 1992) with Holm’s alpha correction (Holm, 1979). The results are displayed in the form of a critical difference diagram (Demšar, 2006; Ismail Fawaz et al., 2019) presented in Figure 4. The diagram shows the ranked performance of each model and whether they are statistically different. The results confirm that our approach leads to statistically significant improvements over the random and REPTILE baselines when pretraining CHAMELEON. Similarly, our approach is also significantly better than jointly training the full architecture without pretraining CHAMELEON (UNTRAIN), confirming that the improvements do not stem from the increased model capacity. Finally, comparing the results to the FROZEN model shows improvements that are not significant, indicating that a near-optimal alignment was already found during pretraining. A detailed overview for all experimental results is given in Appendix C.
Latent Embeddings Experiments Learning to align features is only feasible for unstructured data since this approach would not preserve any structure. However, it is a widespread practice among few-shot classification methods, and computer vision approaches in general, to use a pretrained model to embed image data into a latent space before applying further operations. We can use CHAMELEON to align the latent embeddings of image datasets that are generated with different networks. Thus, it is possible to use latent embeddings for meta-training while evaluating on novel tasks that are not yet embedded in case the embedding network is not available, or the complexity of different datasets requires models with different capacities to extract useful features. We conduct an additional experiment for which we combine two similar image datasets, namely EMNIST-Digits and EMNIST-Letters (Cohen et al., 2017). Similar to the work of Rusu et al. (2019), we train one neural network on each dataset in order to generate similar latent embeddings with different schema, namely 32 and 64 latent features. Afterward, we can sample training tasks from one embedding while
evaluating on tasks sampled from the other one. In the combined experiments, the full training is performed on the EMNIST-Letters dataset, while EMNIST-Digits is used for testing. Splitting the features is not necessary as the train, and test features are coming from different datasets. The results of this experiment are displayed in Figure 5. It shows the accuracy of EMNIST-Digits averaged across 5 runs with 1,600 generated tasks per run during the REPTILE training on EMNIST-Letters for the different model variants. Each test task is evaluated by performing 3 update steps on the training samples and measuring the accuracy of its validation data afterward. One can see that our proposed approach reports a significantly higher accuracy than the REPTILE baseline after performing three update steps on a task (contribution 4). Thus, showing that CHAMELEON is able to transfer knowledge from one dataset to another. Moreover, simply adding CHAMELEON without pretraining it to reorder tasks (Untrain) does not lead to any improvement. This might be sparked by using a CHAMELEON component that has a much lower number of parameters than the base network. Only by adding the reordering-training, the model manages to converge to a suitable initialization. In contrast to our experiments on the OpenML datasets, freezing the weights of CHAMELEON after pretraining also fails to give an improvement, suggesting that the pretraining did not manage to capture the ideal alignment, but enables learning it during joint-training. Our code is available at BLIND-REVIEW.
5 CONCLUSION
In this paper, we presented, to the best of our knowledge, the first approach to tackle few-shot classification for unstructured tasks with different schema. Our model component CHAMELEON is capable of embedding tasks to a common representation by computing a matrix that can reorder the features. For this, we propose a novel pretraining framework that is shown to learn useful permutations across tasks in a supervised fashion without requiring actual labels. In experiments on 23 datasets of the OpenML-CC18 benchmark, our method shows significant improvements even when presented with features not seen during training. Furthermore, by aligning different latent embeddings we demonstrate how a single meta-model can be used to learn across multiple image datasets each embedded with a distinct network.
A APPENDIX - INNER TRAINING
We visualize the inner training for one of the experiments in Figure 7. It shows two exemplary snapshots of the inner test loss when training on a sampled task with the current initialization θinit before meta-learning and after 20,000 meta-epochs. It is compared to the test loss of the model when it is trained on the same task starting with the random initialization. For this experiment, models were trained until convergence. Note that both losses are not identical in meta-epoch 0 because the CHAMELEON component is already pretrained. The snapshots show the expected REPTILE behavior, namely a faster convergence when using the currently learned initialization compared to a random one.
B APPENDIX - EXPERIMENTAL DETAILS
The features of each dataset are normalized between 0 and 1. The Split experiments are limited to the 21 datasets which have more than four features in order to perform a feature split. We sample 10 training and 10 validation instances per label for a new task, and 16 tasks per meta-batch. The number of classes in a task is given by the number of classes of the respective dataset, as shown in Table 1. During the reordering-training phase and the inner updates of REPTILE, specified in line 6 of Algorithm (1), we use the ADAM optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.0001 and 0.001, respectively. The meta-updates of REPTILE are carried out with a learning rate β of 0.01. The reordering-training phase is run for 4000 epochs. All results reported in this work are averaged over 5 runs.
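For concreteness, sampling such a task can be done in a few lines of NumPy; the following is only an illustrative sketch (the helper name and defaults are ours, not the paper's code):

```python
import numpy as np

def sample_task(X, y, n_train=10, n_valid=10, rng=None):
    """Sample one task: n_train training and n_valid validation instances per label."""
    rng = rng or np.random.default_rng()
    train_idx, valid_idx = [], []
    for label in np.unique(y):
        idx = rng.permutation(np.where(y == label)[0])[: n_train + n_valid]
        train_idx.extend(idx[:n_train])
        valid_idx.extend(idx[n_train:])
    return (X[train_idx], y[train_idx]), (X[valid_idx], y[valid_idx])

# A meta-batch then simply consists of 16 such tasks: [sample_task(X, y) for _ in range(16)]
```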
OpenML-CC18 All experiments on the OpenML-CC18 benchmark are conducted with the same model architecture. The base model ŷ is a feed-forward neural network with two dense hidden layers that have 16 neurons each. CHAMELEON consists of two 1D-convolutions with 8 and 16 filters respectively and a final convolution that maps the task to the feature-length K, as shown in Figure 2. We selected datasets that have up to 33 features and a minimum number of 90 instances per class. We limited the number of features and model capacity because this work seeks to establish a proof of concept for learning across data with different schemas. In contrast, very high-dimensional data would require tuning a more complex CHAMELEON architecture. The details for each dataset are summarized in Table 1. When sampling a task in Split, we sample between 40% and 60% of the respective training features. For test tasks in Split experiments, 20% of the features are sampled from the set of test features to evaluate performance on similar tasks with partially novel features. For each
experimental run, the different variants are tested on the same data split, and we sample 1600 test tasks beforehand, while the training tasks are randomly sampled each epoch. All experiments are repeated five times with different instance splits and, in the case of Split, different feature splits, and the results are averaged.
Latent Embeddings Both networks used for generating the latent embeddings consist of two convolutional and two dense hidden layers with 64 neurons each, but the number of neurons in the output layer is 32 for EMNIST-Digits and 64 for EMNIST-Letters. For these experiments, the CHAMELEON component still has two convolutional layers with 8 and 16 filters, while we use a larger base network with two feed-forward layers with 64 neurons each. All experimental results are averaged over five runs.
C APPENDIX - TABLES WITH EXPERIMENTAL RESULTS
The following tables show the detailed results of our experiments on the OpenML-CC18 datasets for Split and NoSplit settings. The tables contain the loss and accuracy for the base model ŷ trained from a random initialization and with REPTILE, and for our proposed model ŷ ◦ enc with the additional ablation studies Untrain and Frozen:
D PROBLEM SETTING: GENERAL MULTI-TASK LEARNING.
We describe a classification dataset with vector-shaped predictors (i.e., no images, time series etc.) by a pair $(X, Y) \in \mathbb{R}^{N \times F} \times \{0, \dots, C\}^N$, with predictors $X$ and targets $Y$, where $N$ denotes the number of instances, $F$ the number of predictors and $C$ the number of classes. Let $\mathcal{D}_F := \bigcup_{N \in \mathbb{N}} \mathbb{R}^{N \times F} \times \{0, \dots, C\}^N$ be the space of all such datasets with $F$ predictors and $\mathcal{D} := \bigcup_{F \in \mathbb{N}} \mathcal{D}_F$ be the space of any such dataset. Let us also denote the space of all predictor matrices with $F$ predictors by $\mathcal{X}_F := \bigcup_{N \in \mathbb{N}} \mathbb{R}^{N \times F}$ and all predictor matrices by $\mathcal{X} := \bigcup_{F \in \mathbb{N}} \mathcal{X}_F$. Then a dataset $\tau = (X, Y) \in \mathcal{D}$ equipped with a predefined training/test split, i.e. the quadruplet $\tau = (X^{\text{train}}_\tau, Y^{\text{train}}_\tau, X^{\text{test}}_\tau, Y^{\text{test}}_\tau)$, is called a task. A collection of such tasks $\mathcal{T} \subset \mathcal{D}$ is called a meta-dataset. Similar to splitting a single dataset into a training and test part, one can split a meta-dataset $\mathcal{T} = \mathcal{T}^{\text{train}} \,\dot{\cup}\, \mathcal{T}^{\text{test}}$. Consider a meta-dataset of correlated tasks $\mathcal{T} \subset \mathcal{D}$, such that the predictor variables $\{p^\tau_1, p^\tau_2, \dots, p^\tau_F\}$ of any individual task $\tau$ are contained in a common set of predictor variables $\{p_1, p_2, \dots, p_K\}$. As elucidated in the previous section, our goal is to construct an encoder that learns to match these predictors and map the features of any task $\tau \in \mathcal{T}$ into a shared latent space $\mathbb{R}^K$:

$\text{enc}: \mathcal{X} \longrightarrow \mathcal{X}_K, \quad X \in \mathbb{R}^{N \times F} \longmapsto \tilde{X} \in \mathbb{R}^{N \times K}$ (10)

This encoder can be combined with a parametric model of fixed input size $\hat{y}: \mathbb{R}^K \to \{0, 1\}$ (e.g. a neural network or SVM) such that for the joint model $\hat{y} \circ \text{enc}$ an initialization $\theta_{\text{init}}$ can be learned via MAML or REPTILE across all tasks, even when those may not have the same predictor vector. Just as with MAML, this initialization facilitates rapid convergence of the combined model $\hat{y} \circ \text{enc}$ on any new, previously unseen task $T \in \mathcal{T}^{\text{test}}$. More explicitly, the ultimate goal is to minimize the meta test loss
$\mathcal{L}(\theta_{\text{init}}) := \mathbb{E}_{T_\tau \sim \mathcal{T}^{\text{test}}}\, L_\tau\!\left( Y^{\text{test}}_\tau,\; \hat{y} \circ \text{enc}\left( X^{\text{test}}_\tau; \theta^{(u)}_\tau \right) \right)$ (11)
where $L_\tau$ is the task-specific loss (e.g. misclassification rate) of the model on the test data of $T_\tau$, using the updated parameters $\theta^{(u)}_\tau$. The latter are the parameters of the joint model $\hat{y} \circ \text{enc}$ obtained by minimizing $L_\tau$ on the training data $(X^{\text{train}}_\tau, Y^{\text{train}}_\tau)$ of $T_\tau$ via some iterative learning algorithm $A$ (e.g. gradient descent) for $u$ iterations.
$\theta^{(u)}_\tau = A^{(u)}\!\left( X^{\text{train}}_\tau, Y^{\text{train}}_\tau, L_\tau, \hat{y} \circ \text{enc};\; \theta_{\text{init}} \right)$ (12)
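To make Equations (11) and (12) concrete, the following PyTorch sketch shows one REPTILE meta-update for the joint model ŷ ∘ enc; it is a simplified illustration under the assumption that all model parameters are float tensors, and all names and hyperparameters are placeholders rather than the authors' implementation:

```python
import copy
import torch
import torch.nn.functional as F

def reptile_meta_step(model, tasks, inner_steps=3, inner_lr=1e-3, beta=0.01):
    """One REPTILE meta-update: adapt a copy of theta_init on each task (Eq. 12),
    then move theta_init toward the averaged adapted parameters."""
    theta_init = copy.deepcopy(model.state_dict())
    delta = {k: torch.zeros_like(v) for k, v in theta_init.items()}
    for X_train, y_train in tasks:
        model.load_state_dict(theta_init)
        opt = torch.optim.Adam(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                       # inner algorithm A^(u)
            loss = F.cross_entropy(model(X_train), y_train)
            opt.zero_grad(); loss.backward(); opt.step()
        for k, v in model.state_dict().items():
            delta[k] += v - theta_init[k]
    model.load_state_dict({k: theta_init[k] + beta * delta[k] / len(tasks)
                           for k in theta_init})
```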
MAML and REPTILE solve sub-problems in which the number $F$ of features is fixed and the predictors of all tasks are the same and aligned, i.e., the same predictor always occurs at the same position within the predictor vector, so that the identity can be used as predictor encoder. Alternatively, this problem can be described as a supervised learning problem with a multivariate or structured target. | 1. What is the focus of the paper regarding meta-learning?
2. What are the strengths of the proposed approach, particularly in addressing a new problem?
3. What are the weaknesses of the paper, especially regarding experimental details and necessity of supervised training?
4. Do you have any questions about the construction of the reordering training procedure or the determination of the target permutation matrix?
5. Would additional experiments on other datasets strengthen the results?
6. How many features are used in the study, and how might the performance change with more or fewer features?
7. Are there any typos or errors in the paper that should be addressed? | Review | Review
Previous meta-learning approaches typically focus on tasks that share the same input types, e.g. images. This paper addresses the problem of meta-learning weight initialization across tasks with different types of input features. It proposes Chameleon model that learns to align input features from different tasks by learning a permutation matrix for each task, and shows that Chameleon can successfully learn good initialization.
Strength:
It identifies and tackles a new important problem in meta-learning: meta-learning on tasks with different input features.
The proposed approach is simple but shows improvements over the baseline method.
Weaknesses:
Supervised training for the permutation matrix is necessary for the model to perform well.
The experimental results section could be more detailed. Given that Algorithm 2 is the major part of the method, how is the reordering training procedure constructed? How is the target permutation matrix determined? Are there shared features between different tasks, and what are they?
Would be great if experiments are done on one or two more datasets to strengthen the result.
Additional Comments:
How many features are used? How would the performance change if there are more/fewer features?
typo: Equation (9) is mentioned several times
I believe this paper proposed a new interesting problem in meta-learning and provided a simple effective model to address the problem. |
ICLR | Title
Revisiting Virtual Nodes in Graph Neural Networks for Link Prediction
Abstract
It is well known that the graph classification performance of graph neural networks often improves by adding an artificial virtual node to the graphs, which is connected to all nodes in the graph. Intuitively, the virtual node provides a shortcut for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes on other problems is still an open research question. In this paper, we adapt the concept of virtual nodes to the link prediction scenario, where we usually have much larger, often dense, and more heterogeneous graphs. In particular, we use multiple virtual nodes per graph and graph-based clustering to determine the connections to the graph nodes. We also investigate alternative clustering approaches (e.g., random or more advanced) and compare to the original model with a single virtual node. We conducted extensive experiments over different datasets of the Open Graph Benchmark (OGB) and analyze the results in detail. We show that our virtual node extensions yield rather stable performance increases and allow standard graph neural networks to compete with complex state-of-the-art models, as well as with the models leading the OGB leaderboards.
1 INTRODUCTION
Link prediction is an important task to complete graphs that are missing edges in various domains: citation networks (Kipf & Welling, 2016), social networks (Adamic & Adar, 2003), medical drug interaction graphs (Abbas et al., 2021), or knowledge graphs (KGs) (Ji et al., 2021). Numerous kinds of models have been proposed to solve the link prediction problem, ranging from KG-specific predictors (Ji et al., 2021) to graph neural networks (GNNs) (Kipf & Welling, 2016; Zhang & Chen, 2018). Over dense biomedical networks, GNNs turned out to work especially well (Hu et al., 2020).
In this work, we focus on graph neural networks for link prediction. Many of the popular GNNs are based on the message-passing scheme, which computes node embeddings based on iteratively aggregating the features of (usually direct/one-hop) neighbor nodes along the graph edges (Gilmer et al., 2017). Interestingly, best performance is usually obtained by only considering two to three hops of neighbors (i.e., 2-3 layers in the GNN). One main reason identified for this is over-smoothing, the problem that node representations become indistinguishable when the number of layers increases (Li et al., 2018). The exponentially-growing amount of information has also been suggested as one issue connected to capturing long-range dependencies (Alon & Yahav, 2021). While it is likely that link prediction most often depends on the local node neighborhood, it is not beyond imagination that there are critical long-range dependencies (e.g., complex chains of drug-drug or drug-protein interactions). Hence, using a small number of layers to overcome the above problems results in under-reaching.
There have been several recent proposals to overcome under-reaching. On the one hand, several works propose techniques that allow for larger numbers of GNN layers (Xu et al., 2018; Wu et al., 2019; Liu et al., 2020; Chen et al., 2020; Sun et al., 2021; Zhou et al., 2020; Li et al., 2020a). However, although (Chen et al., 2020) show that over-smoothing happens particularly in dense graphs, the link prediction experiments in these works consider citation or recommendation networks, but not the especially dense biomedical ones. And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data. On the other hand, there are approaches that adapt the message-passing scheme to consider neighbors beyond the one-hop neighborhood: based on graph diffusion (Atwood & Towsley, 2016; Klicpera et al., 2019a; Abu-El-Haija et al., 2019; Xu et al., 2019a; Ma et al., 2020; Klicpera et al., 2019b) and other theories (Morris et al., 2019; You
et al., 2019). However, most of these models are relatively complex and, in fact, in our experiments over the challenging graphs from the Open Graph Benchmark (OGB) (Hu et al., 2020), several ran out of memory. Moreover, the majority has not considered link prediction, while this problem was recently shown to be more difficult than node classification (Zhang et al., 2020).
In this paper, we propose a simple but elegant solution to under-reaching based on the concept of virtual nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Virtual nodes are well known to often improve the graph classification performance of graph neural networks, where an artificial virtual node is added to every graph and connected to all nodes in the graph. While the virtual nodes were originally thought as representations of the entire graph, they also provide shortcuts for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes for the link prediction problem has not been investigated yet. The reason for this might be that the often very large and heterogeneous “network” graphs in link prediction are of very different nature and require novel/adapted solutions (e.g., a protein interaction network may easily contain millions of nodes, whereas a molecule to be classified contains usually less than fifty).
We explore application and effects of virtual nodes in link prediction theoretically and empirically:
• We propose to use multiple virtual nodes in the link prediction scenario and describe a graph-based technique to connect them to the graph nodes. Consider Figure 1. In a nutshell, we use a graph clustering algorithm to determine groups of nodes in the graph that belong together and then connect these nodes to a common virtual node. In this way, under-reaching is decreased because clustered nodes can share information easily; at the same time, the nodes are spared of unnecessary information from unrelated nodes (i.e., in contrast to the single virtual node model). • We also investigate alternative methods to determine the virtual node connections (e.g., randomization in clustering) and compare to the original model with a single virtual node. • We theoretically investigate the benefit of using (multiple) virtual nodes in terms of two aspects: influence score and the expressiveness in learning a structural link representation. • We conducted extensive experiments over challenging datasets of different type, provide ablation studies that confirm the superiority of our proposed techniques, analyze the results in detail, and provide first guidelines about how to use virtual nodes with different types of data and GNNs. • Most importantly, we show that our virtual node extensions most often yield rather stable performance increases and allow standard GNNs to compete with complex state-of-the-art models that also try to improve message passing, as well as with the models leading the OGB leaderboards.
2 RELATED WORK
We give an overview on approaches that are similar from a technical perspective; for a more detailed summary, see Appendix A. For a more general overview of the large and diverse field of link prediction, we refer to good summaries in recent works (Martínez et al., 2016; Zhang et al., 2020).
Deeper GNNs. Several techniques address over-smoothing and hence allow for constructing deeper GNNs to solve under-reaching. These models range from the simple but efficient message propagation in SGC (Wu et al., 2019; Liu et al., 2020) and APPNP (Klicpera et al., 2019a) and connections in JKNet (Xu et al., 2018), to more advanced proposals (Chen et al., 2020; Sun et al., 2021; Zhou et al., 2020; Li et al., 2020a) such as the differentiable aggregation functions in DeeperGCN (Li et al., 2020a). However, although (Chen et al., 2020) show that over-smoothing happens particularly in dense graphs, the experiments in most of these works consider citation or recommendation networks, but not the especially dense and important biomedical ones. And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data.
Beyond One-Hop Neighbors. Recently, graph diffusion methods are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood. Atwood & Towsley (2016) use k-hop random walks to extend the node features. APPNP (Klicpera et al., 2019a) applies personalized PageRank to propagate the node predictions generated by a neural network. Other models concatenate (Abu-El-Haija et al., 2019) or aggregate (Xu et al., 2019a; Ma et al., 2020) node embeddings in every layer using a diffusion-based transition matrix. The diffusion-based graph neural network (GDC) (Klicpera et al., 2019b) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion. Subsequent works use diffusion methods on multiple scales (Liao et al., 2019; Luan et al., 2019; Xhonneux et al., 2020) and attention (Wang et al., 2020). Morris et al. (2019) take higher-order graph structures at multiple scales into account during message passing based on the k-dimensional Weisfeiler and Leman graph algorithm. All the above approaches are relatively complex, many terminated with memory errors in our experiments, and few have been evaluated for link prediction.
Virtual Nodes. To the best of our knowledge, virtual nodes have only been considered in the context of graph classification so far, where a single virtual node (also called supernode) is added to the graph to be classified and connected to all graph nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Note that the original idea was to compute a graph embedding in parallel with the node embeddings and even connected the virtual node only in one direction (i.e, via edges from the graph nodes) instead of bidirectionally (Li et al., 2017).
There are some GNNs which point out special nodes that we could consider as “virtual”. Fey et al. (2020) propose a GNN for molecule graph classification which clusters certain nodes within a molecule using a structure-based, molecule-specific algorithm and then applies message passing within and between these clusters. The graph-partition based message passing from Liao et al. (2018) also used clustering, but it just divides the original messages into inter- and intra-cluster messages. Our approach creates new “paths” in the graph and we theoretically demonstrate its expressiveness. P-GNN (You et al., 2019) assigns nodes to random clusters (“anchor-sets”) and then creates a message for each node for every anchor-set, while ignoring the message passing from the original direct neighbors. Our virtual nodes represent an alternative means to aggregate messages from multiple graph nodes which are not necessarily direct neighbors. We also explore the idea of similar random assignments in our context, but show that more elaborate techniques generally work better. Most importantly, we do not propose a specific, new GNN but a new technique for augmenting existing graph neural networks.
Although it is a well-known trick, the advantage of using virtual nodes has never been theoretically investigated nor fully understood. We focus on link prediction and considerably extend the virtual node technique. There are commonalities in the advantages of using virtual nodes for graph classification and link prediction, but their role in link prediction is to improve the representation of the link instead of the graph (nodes). We analyze theoretically and empirically how they improve GNN performance.
3 PRELIMINARIES
Link Prediction. We consider an undirected graph $G = (V, E)$ with nodes $V$ and edges $E \subseteq V \times V$. Note that this basic choice is only for ease of presentation. All our techniques work for directed graphs and, with simple adaptation, also for graphs with labelled edges. We assume $V$ to be ordered and may refer to a node by its index in $V$. For a node $v \in V$, $N_v$ denotes the set of its neighbors. Given two nodes, the link prediction task is to predict whether there is a link between them.
Message-Passing Graph Neural Networks. In this paper, we usually use the term graph neural networks (GNNs) to denote GNNs that use message passing as described by Gilmer et al. (2017). These networks compute for every $v \in V$ a node representation $h^\ell_v$ at layer $\ell \in [1, 2, \dots, k]$, by aggregating its neighbor nodes based on a generic aggregation function and then combining the obtained vector with $h^{\ell-1}_v$ as below; $h^0_v$ are the initial node features.

$h^\ell_v = \text{COMBINE}^\ell\!\left( h^{\ell-1}_v, \text{AGGREGATE}^\ell\!\left( \{h^{\ell-1}_u \mid u \in N_v\} \right) \right)$ (1)

Link prediction with GNNs is usually done by combining (e.g., concatenating) the final representations $h^L_u, h^L_v$ of the nodes $u, v$ under consideration and passing them through several feed-forward layers with a final sigmoid function for scoring. We follow this approach.
We further use $[1, n]$ to denote the interval $[1, 2, \dots, n]$.
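As a concrete illustration of the scoring approach just described, a minimal PyTorch sketch of such a predictor head could look as follows (our own illustrative code, independent of the particular GNN producing the embeddings):

```python
import torch
import torch.nn as nn

class LinkPredictor(nn.Module):
    """Scores a candidate link (u, v) from the final node representations."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, h_u, h_v):
        # concatenate the two node embeddings and map them to a link probability
        return self.mlp(torch.cat([h_u, h_v], dim=-1)).squeeze(-1)
```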
4 VIRTUAL NODES IN GRAPH NEURAL NETWORKS FOR LINK PREDICTION
So far, virtual nodes have been only used for graph classification. Link prediction scenarios are different in that the graphs are usually very large, heterogeneous, sometimes dense, and the task is to predict a relationship that might strongly be influenced depending on surrounding relations. In the following, we propose approaches that fit these scenarios.
4.1 MULTIPLE VIRTUAL NODES
Our main goal of using virtual nodes is to provide a shortcut for sharing information between the graph nodes. However, the amount of information in a graph with possibly millions of nodes is enormous, and likely too much to be captured in a single virtual node embedding. Further, not all information is equally relevant to all nodes. Therefore we suggest to use multiple virtual nodes $S = \{s_1, s_2, \dots, s_n\}$,¹ each connected to a subset of graph nodes, as determined by an assignment $\sigma : V \to [1, n]$; $n$ is treated as a hyperparameter. We propose different methods to obtain $\sigma$:
Random (GNN-RM). Most simple, we can determine a fixed σ randomly once with initialization.
Increased Randomness (GNN-RMF ). Similarly a random assignment, but initialized with every forward pass. In this way, a single bad assignment does not determine the overall performance.
Clustering (GNN-CM). Many types of graph data incorporate a certain cluster structure (e.g., collaboration or social networks) that reflects which nodes belong closely together. We propose to connect such nodes in a cluster to a common virtual node, such that the structure inherent to the given graph is reflected in our virtual node assignment σ. More precisely, during initialization, we use a generic clustering algorithm which, given a number m, creates a set C = {C1, C2 . . . , Cm} of clusters (i.e., sets of graph nodes) by computing an assignment ρ : V → [1,m], assigning each graph node to a cluster. We then obtain σ by choosing m = n and σ = ρ.
In this work, we decided for the METIS clustering (Karypis & Kumar, 1998) which turned out to provide a good trade off between quality and efficiency. Nevertheless, our idea is generic and can be applied with arbitrary algorithms. We will show ablation experiments for alternatives (e.g., Graclus (Dhillon et al., 2007) and Diffpool (Ying et al., 2018b)).
Advanced Clustering (GNN-CM+). Not every type of graph data contains an inherent cluster structure or one that is sufficiently expressed. Furthermore, using a fixed clustering, we obtain a deterministic algorithm, again taking the risk that we completely rely on a single, possibly not ideal, virtual node assignment – there may be critical long-range dependencies that go beyond clusters. For these cases, we propose an alternative approach, which breaks up the determinism by extending the above clustering as follows. We choose a relatively large $m$, with $m \gg n$, and apply the above clustering algorithm during initialization. Then, in each epoch, we randomly guess an assignment $\sigma' : [1, m] \to [1, n]$ of clusters to virtual nodes and define $\sigma(v) := \sigma'(\rho(v))$. Note that this approach is inspired by Chiang et al. (2019), who apply a similar technique to create batches based on clusters. Further note that we determine $\sigma'$ with every epoch instead of every forward pass since the computation takes quite some time on large datasets and we observed that this yields good results.
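To illustrate how the assignment σ can be computed in practice, the following sketch covers GNN-CM and the two-level GNN-CM+ variant; it assumes the pymetis bindings for METIS are available, and the helper names are ours:

```python
import numpy as np
import pymetis  # Python bindings for METIS (assumed available)

def metis_assignment(adjacency_list, n_virtual_nodes):
    """GNN-CM: one METIS cluster per virtual node, i.e. sigma = rho."""
    _, parts = pymetis.part_graph(n_virtual_nodes, adjacency=adjacency_list)
    return np.asarray(parts)                      # sigma[v] in [0, n)

def advanced_assignment(rho, n_virtual_nodes, rng=None):
    """GNN-CM+: randomly merge the m fine-grained clusters rho into n virtual
    nodes; re-drawn every epoch."""
    rng = rng or np.random.default_rng()
    sigma_prime = rng.integers(n_virtual_nodes, size=rho.max() + 1)  # sigma'
    return sigma_prime[rho]                       # sigma(v) = sigma'(rho(v))
```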
4.2 THE MODEL
We integrate the multiple virtual nodes into a generic message-passing graph neural network by extending the approach from Hu et al. (2020) to the setting with multiple virtual nodes, by computing node representations h`v for a node v ∈ V at layer ` as follows:
$h^\ell_{s_i} = \text{COMBINE}^\ell_{\text{VN}}\!\left( h^{\ell-1}_{s_i}, \text{AGGREGATE}^\ell_{\text{VN}}\!\left( \{h^{\ell-1}_u \mid u \in V, \sigma(u) = i\} \right) \right)$ (2)

$h^\ell_v = \text{COMBINE}^\ell\!\left( h^{\ell-1}_v + h^\ell_{s_{\sigma(v)}}, \text{AGGREGATE}^\ell\!\left( \{h^{\ell-1}_u \mid u \in N_v\} \right) \right)$ (3)
Note that the highlighted adaptation of the standard GNN processing from Equation (1) is only minor – but powerful. In our implementation, $\text{COMBINE}^\ell_{\text{VN}}$ is addition combined with linear layers and layer normalization, and we use sum for $\text{AGGREGATE}^\ell_{\text{VN}}$.
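The following self-contained PyTorch sketch shows one such layer; a simple mean-aggregation GNN serves as the base model, and the details are illustrative rather than our exact implementation:

```python
import torch
import torch.nn as nn

class VirtualNodeLayer(nn.Module):
    """One message-passing layer with multiple virtual nodes, cf. Equations (2)-(3)."""
    def __init__(self, dim):
        super().__init__()
        self.combine_vn = nn.Sequential(nn.Linear(dim, dim), nn.LayerNorm(dim), nn.ReLU())
        self.combine = nn.Linear(2 * dim, dim)

    def forward(self, h, h_vn, edge_index, sigma):
        # Eq. (2): sum-aggregate all graph nodes assigned to each virtual node
        agg_vn = torch.zeros_like(h_vn).index_add_(0, sigma, h)
        h_vn = self.combine_vn(h_vn + agg_vn)
        # Eq. (3): add the assigned virtual node embedding before neighbor aggregation
        h = h + h_vn[sigma]
        src, dst = edge_index                      # edge list as two index tensors
        neigh = torch.zeros_like(h).index_add_(0, dst, h[src])
        deg = torch.zeros(h.size(0), 1, device=h.device).index_add_(
            0, dst, torch.ones(dst.size(0), 1, device=h.device)).clamp(min=1)
        return torch.relu(self.combine(torch.cat([h, neigh / deg], dim=-1))), h_vn
```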
1Since notation V is standard for nodes, we use S for the set of virtual nodes. Think of “supernodes”.
4.3 ANALYSIS: VIRTUAL NODES CHANGE INFLUENCE
Influence Score. Following (Xu et al., 2018; Klicpera et al., 2019a), we measure the sensitivity (also, influence) of a node $x$ to a node $y$ by the influence score $I(x, y) = e^T \frac{\partial h^k_x}{\partial h^0_y}$; $e$ is a vector of all ones, $h^k_x$ is the embedding of $x$ at the $k$-th layer, see Equations (1) and (3). For a $k$-layer GNN, the influence score is known to be proportional in expectation to the $k$-step random walk distribution from $x$ to $y$:²
$\mathbb{E}[I(x, y)] \propto P_{rw}(x \to y, k) = \sum_{r \in R^k} \prod_{\ell=1}^{k} \frac{1}{\deg(v^\ell_r)}$, (4)

where $(v^0_r, v^1_r, \dots, v^k_r)$ are the nodes in the path $r$ from $x := v^0_r$ to $y := v^k_r$, and $R^k$ is the set of paths of length $k$. In what follows, we will exploit this relationship and argue in terms of the probability $P_{rw}$.
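In practice, the influence score can be estimated directly with automatic differentiation; a small hedged sketch (assuming a GNN that maps node features and an edge index to final node embeddings):

```python
import torch

def influence_score(gnn, X, edge_index, x, y):
    """I(x, y) = e^T dh_x^k / dh_y^0, summed over embedding and feature dimensions."""
    X = X.clone().detach().requires_grad_(True)
    H = gnn(X, edge_index)                         # final node embeddings h^k
    grad = torch.autograd.grad(H[x].sum(), X)[0]   # gradient w.r.t. all input features
    return grad[y].sum().item()                    # restrict to the features of node y
```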
Virtual Nodes. For simplicity, consider the influence score in an $m$-regular graph; there we have $P_{rw}(x \to y, k) = \frac{|R^k|}{m^k}$. We hypothesize that we can come to similar conclusions in a general graph with average degree $m$. Consider the message passing between two distant nodes $x$ and $y$. (I) In case the shortest path from $x$ to $y$ is of length $> k$, a $k$-layer GNN cannot capture it, and the probability $P_{rw}(x \to y, k)$ is obviously zero. If we then consider virtual nodes in the GNN layer (even with only one), we can pass messages from $x$ to $y$ through the virtual nodes and obtain a nonzero probability. (II) Consider the case where there is a shortest path of length $\leq k$ between $x$ and $y$. By adding a virtual node $s$ in one GNN layer, the probability changes to:
$P^s_{rw}(x \to y, k) = P_{rw}(x \to y, k) + P_{rw}(x \to s, s \to y) = \frac{|R^k|}{(m+1)^k} + \frac{1}{(m+1)\,|V|}$. (5)
Compared to the original probability, we get the following impact ratio for using virtual nodes:
$ir = \frac{m^k}{(m+1)^k} + \frac{m^k}{(m+1)\,|V|\,|R^k|}$. (6)
When $m$ is large enough, $ir$ can be approximated by $ir \simeq \left( 1 + \frac{m^{k-1}}{|V|\,|R^k|} \right)$. Here, we see that the impact of virtual nodes grows when $m$ increases. Our experiments confirm this theoretical observation.
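A quick numerical illustration of Equation (6), with toy values of our own choosing:

```python
# Impact ratio (Eq. 6) of a single virtual node in an m-regular graph.
def impact_ratio(m, k, n_nodes, n_paths):
    return (m / (m + 1)) ** k + m ** k / ((m + 1) * n_nodes * n_paths)

# The denser the graph (larger m), the larger the gain from adding the virtual node.
for m in (4, 50, 500):
    print(f"m={m}: ir={impact_ratio(m, k=3, n_nodes=4000, n_paths=10):.3f}")
```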
Multiple Virtual Nodes. In view of multiple virtual nodes, the above analysis gets even more appealing. We continue along these lines and assume there is a shortest path of length ≤ k between x and y. If x and y connect to the same virtual node s, then Equation (5) changes as follows:
$P^s_{rw}(x \to y, k) = \frac{|R^k|}{(m+1)^k} + \frac{1}{(m+1)\,|C_s|}$. (7)
Since the set $C_s$ of nodes connecting to $s$ is much smaller than $V$, the impact of multiple virtual nodes is greater than that of a single virtual node. On the other hand, if $x$ and $y$ do not connect to the same virtual node, the probability just slightly decreases from $\frac{|R^k|}{m^k}$ to $\frac{|R^k|}{(m+1)^k}$.
In Appendix B, we further show that using multiple virtual nodes is related to (but not equal to) the labeling trick (Zhang et al., 2020) and distance encoding (Li et al., 2020b), and it can theoretically improve the expressiveness in learning structural link representations (see Theorem 1 and Figure 4(b)).
5 EVALUATION
We conducted extensive experiments and ablation studies to empirically investigate:
• How does the existing approach with one virtual node perform in link prediction? • Do multiple virtual nodes improve performance, how do our proposed approaches compare? • In particular, are approaches based on the graph structure better? • How exactly do virtual nodes support link prediction? When do they help particularly?
2See Theorem 1 in (Xu et al., 2018). Note that the theorem makes some simplifying assumptions (e.g., on the shape of GNN).
Table 2: Results for different GNN and virtual node types/amounts; (second) best results are (light) gray, overall best bold, second best underlined.
         ddi (Hits@20)     ppa10 (Hits@100)  collab (Hits@50)  pubmed (Hits@20)
GCN      0.4076 ± 0.1073   0.1313 ± 0.0084   0.4955 ± 0.0064   0.9675 ± 0.0143
- VN     0.6217 ± 0.1241   0.1258 ± 0.0082   0.5049 ± 0.0088   0.9579 ± 0.0214
- RM     0.5532 ± 0.1262   0.1205 ± 0.0059   0.5083 ± 0.0109   0.9522 ± 0.0110
- RMF    0.5830 ± 0.0855   0.1116 ± 0.0094   0.5046 ± 0.0049   0.8100 ± 0.0781
- CM     0.6105 ± 0.1563   0.1299 ± 0.0050   0.5181 ± 0.0076   0.9575 ± 0.0230
- CM+    0.6033 ± 0.1759   0.1399 ± 0.0071   0.5128 ± 0.0129   0.9189 ± 0.0514
SAGE     0.6173 ± 0.1068   0.1024 ± 0.0050   0.5662 ± 0.0149   0.9779 ± 0.0105
- VN     0.6491 ± 0.1360   0.0853 ± 0.0154   0.5875 ± 0.0091   0.9659 ± 0.0333
- RM     0.7068 ± 0.1174   0.1131 ± 0.0039   0.5830 ± 0.0087   0.9433 ± 0.0208
- RMF    0.7564 ± 0.1055   0.1105 ± 0.0023   0.6067 ± 0.0063   0.9800 ± 0.0087
- CM     0.7621 ± 0.1157   0.1077 ± 0.0150   0.6056 ± 0.0105   0.9834 ± 0.0068
- CM+    0.8251 ± 0.0678   0.0963 ± 0.0099   0.5940 ± 0.0262   0.9754 ± 0.0139
GIN      0.4321 ± 0.1353   0.1139 ± 0.0058   0.5768 ± 0.0179   0.9234 ± 0.0166
- VN     0.5260 ± 0.1227   0.1316 ± 0.0049   0.5863 ± 0.0254   0.9790 ± 0.0070
- RM     0.5084 ± 0.1324   0.1337 ± 0.0045   0.5412 ± 0.0174   0.9604 ± 0.0158
- RMF    0.5310 ± 0.1453   0.1269 ± 0.0026   0.5335 ± 0.0087   0.7986 ± 0.0993
- CM     0.5664 ± 0.0860   0.1349 ± 0.0034   0.5821 ± 0.0081   0.9125 ± 0.0378
- CM+    0.4339 ± 0.1855   0.1591 ± 0.0069   0.5557 ± 0.0026   0.9037 ± 0.0262
Datasets. We focused on challenging data from the OGB: ddi, a drug-drug interaction network; ppa10, a subset of the protein-protein association network ppa containing only 10% of the train edges (but full valid/test); and collab, an author collaboration network. To learn more about smaller data of similar type, we also tested on the citation network pubmed (Yang et al., 2016). Since the datasets are not only very different in type but also in various other critical graph parameters and this is reflected in the performance of the models, we show relevant statistics in Table 1.³ The datasets vary strongly in size with ddi being smallest among the biomedical; on the other hand, ddi is very dense. The clustering coefficient intuitively reflects the “cliquishness” of the graph’s subgraphs. The large diameters suggest that the data suits testing under-reaching. Appendix C gives further details and describes datasets we consider in additional experiments in the appendix.
Baselines For a competitive comparison, we considered important baselines (described in Section 2):
• The deep GNNs SGC, APPNP, DeeperGCN, and two variants of JKNet. • Approaches extending message passing: P-GNN, APPNP, GCN-GDC, SAGE-GDC, GIN-GDC. • The popular GNNs GCN (Kipf & Welling, 2017), SAGE (Hamilton et al., 2017), and GIN (Xu
et al., 2019b), which we then extend with (multiple) virtual nodes.
3See Tables 2 and 3 in (Hu et al., 2020). We computed the numbers for ppa10 (which we focus on due to a lack of resources), and pubmed using the same techniques.
5.1 RESULTS
Overall Impact of Virtual Nodes, Tables 2, 7, 8 (Appendix). We compare to GCN, SAGE, and GIN. The common approach of using a single virtual node (GNN-VN) yields good improvements over ddi, slight improvements over collab, but no definitive ones over ppa10; over pubmed, it works very well for GIN. The numbers for GNN-RM and GNN-RMF reflect the randomness of their connections to the virtual nodes, there is no clear trend. Nevertheless, they clearly outperform the original models, with only few exceptions. The increased randomness by re-assigning the virtual nodes with every forward pass (GNN-RMF ) seemingly suits SAGE but not the others. As expected, over the small pubmed/cora, which also have no cluster structure, the results are not consistent or convincing overall; virtual nodes only yield improvement sometimes, and none for GCN. Yet, on the more challenging datasets, multiple virtual nodes turn out to be an efficient means to boost the link prediction performance of GNNs if they are applied correctly. Our virtual node connections based on the graph structure (GNN-CM) yield consistently good improvements over ddi and collab, and mostly help on the challenging ppa10 dataset. On collab, we did further experiments using GAT (Veličković et al., 2017) and also observe a clear performance gain: 0.4745 vs. 0.5876 (GAT-CM). GNN-CM and GNN-CM+ are not always the best ones, but yield reliably good results, in contrast to the other models with virtual nodes (see variability of gray shades). Interestingly, the advanced clustering yields especially good performance over ppa10/ppa, while its results on the other datasets are not convincing. Generally, the improvements of the virtual node models are strongest on ddi. For an in-depth result analysis see Section 5.2.
Comparison to Related Works and SOTA, Table 3. Most deep GNNs as well as the models that use complex message-passing techniques perform disappointingly and, overall, much worse than the standard GNNs. We did thorough hyperparameter tuning for these models, so this is hard to explain. However, most of the original evaluations focus on node or graph classification and consider very different types of data – often the standard citation networks (Lu & Getoor, 2003) – and, in fact, on collab we see the best numbers. For a more detailed discussion of P-GNN see Appendix H. Even if we assume that these numbers can be improved, the models do not seem apt for link prediction; in particular, the complex ones: many do not run at all on realistic link prediction data but yield memory errors. Further, our virtual node extensions make standard GNNs competitive with the models on the leaderboard. In particular, their performance is much more stable. The results of the best models from the leaderboard vary strongly with the different datasets, or have not been reported at all. None of these models can be called “good” overall, given the numbers in the - sometimes even missing - rest of the table; in fact, SEAL and Adamic Adar perform rather badly on the very dense ddi.
Impact of Virtual Nodes on Number of GNN Layers and Efficiency, Figure 2. For the virtual nodes models, the scores increase with the number of layers for a longer time, GCN drops earlier. On ddi, GCN-VN and -CM reach their best scores at 6 and 8 layers, respectively, which is remarkable for that very dense dataset, which is prone to over-smoothing. On collab it is the other way around. The figure also gives an idea about the runtime increase with using virtual nodes. It compares the 6-layer models, and shows the 4-layer GCN-CM which obtains performance similar to the 6-layer GCN-VN.
Impact of Virtual Node Number, Figure 3. First, consider the configurations of the best models for the overall results in Table 2, which are provided in Table 6 in the appendix. Here, we see that the chosen numbers of virtual nodes are indeed random for the “random” models, but GNN-CM consistently uses a high number of virtual nodes, which also suits it better according to our theoretical analysis in Section 4.3. In line with this, the more detailed analysis varying the numbers of virtual nodes yields best results (also in terms of standard deviations) for SAGE-CM at rather high values. For GCN, we do not see a clear trend, but (second) best performance with 64 virtual nodes. Note that there is a trade-off between the number of virtual nodes and intra-cluster test edges, discussed in Section 5.2.
Using Virtual Nodes Only at the Last GNN Layer, Table 4. Alon & Yahav (2021) show that using a fully connected adjacency matrix at the last layer of a standard GNN helps to better capture information over long ranges. We therefore investigated if it is a better architectural choice to use virtual nodes only at the last layer. However, we see that this can lead to extreme performance drops.
Impact of Clustering Algorithm, Table 9 (Appendix). Our architecture is generic in the clustering algorithm, and we investigated the effects of varying that. Graclus is similar in nature to METIS in that it also creates partitions based on the adjacency matrix, but it took much longer to run. Diffpool considers the node features and yields improvements for GCN, but does not scale to larger datasets. Over ddi, there is no clear winner and, given its efficiency, METIS turns out to be a good solution.
5.2 DISCUSSION AND CONCLUSIONS
The results show that our approach with multiple virtual nodes based on graph-based clustering yields performance increases for various GNNs and types of data, but there are clear differences.
Dense Graphs with Medium/High Clustering Coefficient. Over ddi, we see strongest improvements for all virtual-node models. This can be explained by our proposed theory, showing that a very large node degree m increases the impact of the virtual node(s), and thus decreases the negative impact of the (too) many other neighbors (see Equation (6)). Furthermore, the empirical results confirm our proposed theory regarding multiple virtual nodes (see Equation (7)). We see particularly good numbers for GNN-CM, which exploits the clustering inherent in the given graph. GNN-CM+, which considers this given clustering only on a lower level, is shown to perform worse than GNN-CM overall. In fact, we computed the percentage of test edges that occur in the “virtual node cluster” (see Table 11 in the appendix) and it shows that the numbers for the advanced clustering are very similar to the random one, meaning the randomly merged smaller clusters break the data’s structure too much. Interestingly, the experiments show that, even with the dense data that is prone to over-smoothing, virtual nodes make the GNNs score higher with more than the standard 2-3 layers; hence virtual nodes seem to alleviate over-smoothing to some extent, an interesting question for future work.
Graphs with Large Problem Radius and Low Clustering Coefficient. Over ppa10, all GNNs use an unusually high number of layers, which hints at a large problem radius (e.g., GCN, which performs especially well, uses 7 layers). Given the very low clustering of the data in addition, ppa10 represents a special challenge. With the multiple virtual nodes, GNN-CM again performs better than GNN-VN. On the other hand, it does not perform much better than the random models on data without cluster structure. This can be explained by its choice of number of virtual nodes, which is consistently high, but then there are fewer test edges within a virtual node cluster (see appendix Table 11). We hence see here that the positive effect of having many virtual nodes (recall Equation (7)) cancels out the benefits of clustering. Our advanced clustering, which merges some local clustering with randomness, is able to achieve best results with GCN and GIN (with SAGE, all models perform rather badly over ppa10). This can be explained by the fact that it randomly merges some local clusters – with each epoch anew – and hence allows more messages to pass across “virtual node clusters”. We also did some experiments over the very large ppa, which is denser than ppa10, and see a similar trend.
Sparse Graphs with Low to High Clustering Coefficient. We tested on three citation/collaboration networks of different sizes. Note that, over this data, the problem radius is usually assumed to be rather small (Alon & Yahav, 2021), although the graph diameters are large. We investigated virtual nodes to augment link prediction in large and complex graphs; but we also want to provide insight into the behavior on smaller data. Over pubmed (similarly on cora as shown in the appendix), virtual nodes do not provide any improvement for GCN. For GIN, a single virtual node yields good increases; overall, it usually outperforms the settings with multiple virtual nodes. We hypothesize that this is mainly due to the small graph size and sparsity. In fact, on the larger and denser collab, GNN-CM performs very well for all GNNs. The trends in the models’ performance and the corresponding explanations are similar to those for ddi but much less pronounced, probably due to the much smaller node degrees. Yet, the performance is much more stable, possibly because the graph is larger and not as dense.
Conclusions. We summarize our main findings to provide first guidelines for applying virtual nodes:
• Small + Sparse Graphs: A single virtual node is likely to boost performance of GIN, and virtual nodes should help with SAGE, but probably not with GCN. • Large + Sparse Graphs: If there is cluster structure, GNN-CM should yield stable performance increases. If the problem radius is large or there is few cluster structure, GNN-CM+ is worth a try. • Dense Graphs + Clustering: Multiple virtual nodes (i.e., GNN-CM) likely increase performance.
6 CONCLUSIONS
We propose a simple but elegant graph neural network extension using multiple virtual nodes that may considerably increase link prediction performance. We also advance research by providing theoretical justifications - the very first about applying virtual nodes at all - and by showing their positive impact in various experiments. Future work includes the design of more advanced and scalable architectures, and it would be interesting to further investigate the huge performance increases on dense graphs.
A ADDITIONAL DETAILS ON RELATED WORKS
Deeper GNNs. We mention simpler approaches in the Section 2. More advanced proposals are, for example, based on special features and connections (Chen et al., 2020), community-based normalization of node representations using random clustering (Zhou et al., 2020), boosting techniques (Sun et al., 2021), or differentiable aggregation functions in DeeperGCN (Li et al., 2020a).
Beyond One-Hop Neighbors. Graph diffusion methods (i.e., in graph theory, techniques for spreading information between nodes) are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood. (Atwood & Towsley, 2016) use k-hop random walks to aggregate node features and extend the latter by the aggregated ones. APPNP (Klicpera et al., 2019a) applies personalized PageRank to propagate the node predictions generated by a neural network. Other models aggregate node embeddings in every layer, GraphHeat (Xu et al., 2019a) using the heat kernel, PAN (Ma et al., 2020) the transition matrix of maximal entropy random walks, and PinSage (Ying et al., 2018a) using random walks. (Abu-El-Haija et al., 2019) propose to concatenate embeddings aggregated using the transition matrices of k-hop random walks before applying one-hop neighbor aggregation. The diffusion-based graph neural network (GDC) (Klicpera et al., 2019b) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion. Subsequent works use diffusion methods on multiple scales (Liao et al., 2019; Luan et al., 2019; Xhonneux et al., 2020). Recently, (Wang et al., 2020) integrated attention with diffusion-based message propagation.
Position Encodings. Our approach provides a kind of positional embedding (Srinivasan & Ribeiro, 2019) and hence has some commonalities with models extending nodes with positional encodings, e.g., (Li et al.).
B ADDITIONAL THEORETICAL RESULTS: STRUCTURAL LINK REPRESENTATION
Adding structure-related features such as a distance encoding (Li et al., 2020b) has been demonstrated to make graph representation learning more powerful. For link prediction, (Zhang et al., 2020) propose the labeling trick extending distance encoding and making GNNs learn better link representations.
We first recall the definitions from (Zhang et al., 2020) introducing the concept of the labeling trick. Consider an undirected graph $G$ as described in Section 3. In addition, the tensor $A \in \mathbb{R}^{n \times n \times k}$ contains all node and edge features (if available). The diagonal components $A_{v,v,:}$ denote the node features, while the off-diagonal components $A_{u,v,:}$ denote the edge features of edge $(u, v)$. The labeling trick uses a target node set $S \subseteq V$ and a labeling function to label all nodes in the node set $V$ and stack the labels with $A$. A valid labeling trick must meet two conditions: (1) the nodes in $S$ have different labels from the rest of the nodes, (2) the labeling function must be permutation invariant.
Let us recall our method using multiple virtual nodes. Assume we have multiple virtual nodes $S = \{s_1, \dots, s_m\}$. For all $u \in V$, we add the features $l(u|S) = (h(s_1), \dots, h(s_m))^T (\gamma(u|s_1), \dots, \gamma(u|s_m))$, where $\gamma(u|s_i) = 1$ if $u$ is connected to the virtual node $s_i$, and $\gamma(u|s_i) = 0$ otherwise. $h(s_i)$ is the node representation of virtual node $s_i$, and is initialized by one-hot vectors so that each virtual node has a different label.
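Since the h(s_i) are initialized as one-hot vectors, these additional features essentially reduce to a one-hot encoding of each node's virtual node membership; an illustrative sketch:

```python
import torch.nn.functional as F

def virtual_node_label_features(sigma, num_virtual_nodes):
    """l(u|S) at initialization: one-hot indicator of the virtual node assigned to each node u."""
    return F.one_hot(sigma, num_classes=num_virtual_nodes).float()
```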
Our labeling strategy is not a valid labeling trick by the definition of (Zhang et al., 2020). First, $S$ is not a subset of $V$, and we use addition instead of concatenation. Even if we extend $V$ to $V \cup S$, our labeling strategy still does not fit the permutation-invariant requirement. Nevertheless, it can achieve similar effects in learning structural link representations.

Theorem 1. In any non-attributed graph with $n$ nodes, if the degree of each node in the graph is between 1 and $O(\log^{\frac{1-\epsilon}{2h}}(n))$ for any constant $\epsilon > 0$, given $m$ virtual nodes which evenly divide the node set into $m$ clusters, then there exist $\omega\!\left( (m-1)^2 \left( \frac{n^{\epsilon}}{m} - 1 \right)^3 \right)$ many pairs of non-isomorphic links $(u, w), (v, w)$, such that an $h$-layer 1-WL-GNN (see definitions in (Li et al., 2020b) and (Zhang et al., 2020); one well-known example is GIN (Xu et al., 2019b)) gives $u, v$ the same representation, while using $m$ virtual nodes can give $u, v$ different representations.
Proof. The proof can be separated into two steps. The first step is to prove that there exist $n / o(n^{1-\epsilon}) = \omega(n^{\epsilon})$ many nodes that are locally $h$-isomorphic. This step is the same as the proof of Theorem 2 in (Zhang et al., 2020), so we omit the details here. After getting these locally isomorphic nodes, we denote the set of these nodes as $V_{iso}$. The second step is to find the non-isomorphic links.
Step 2. Let us partition $V_{iso} = \cup_{i=1}^{m} V_i$ where $V_i$ is the subset of nodes connected to virtual node $s_i$. For simplicity, we call each $V_i$ a cluster, and the sizes of different clusters are assumed to be the same, $|V_i| = |V_{iso}|/m$. Consider two nodes $u \in V_i$ and $v \in V_j$ from different clusters. Since both of them are in $V_{iso}$, they have identical $h$-hop neighborhood structures, and an $h$-layer 1-WL-GNN will give them the same representations. Then let us select another node $w$ in $V_i$; an $h$-layer 1-WL-GNN will also make $(u, w)$ and $(v, w)$ have the same representation.
However, if we use virtual nodes to label nodes and give them additional features, then because $u, w$ are in the same cluster while $v, w$ belong to different clusters, $(u, w)$ will have a different representation from $(v, w)$. Now let us count the number of such non-isomorphic link pairs $Y$:
$Y \geq \frac{1}{2} \sum_{i,j=1,\, j \neq i}^{m} |V_i|\,(|V_i| - 1)\,|V_j| = \frac{1}{2}\, m(m-1) \left( \left( \frac{|V_{iso}|}{m} - 1 \right) \left( \frac{|V_{iso}|}{m} \right)^2 \right)$
Taking $|V_{iso}| = \omega(n^{\epsilon})$ into the above inequality, we get

$Y \geq \frac{1}{2}\, m(m-1)\, \omega\!\left( \left( \frac{n^{\epsilon}}{m} - 1 \right)^3 \right) = \omega\!\left( (m-1)^2 \left( \frac{n^{\epsilon}}{m} - 1 \right)^3 \right)$

Example (Power of Using Multiple Virtual Nodes). In Figure 4, we show two cases with and without virtual nodes. Consider the nodes $v_2, v_3$ with the same local structure, which means they get the same node representations when using a 1-WL-GNN. So we cannot discriminate the links $(v_1, v_2)$ and $(v_1, v_3)$ if we just use a 1-WL-GNN and concatenate the node representations for link prediction. However, if we add 2 virtual nodes and add extra features to each node, $v_1$ and $v_2$ get a new feature $(1, 0)$ and $v_3$ gets the new feature $(0, 1)$. So it is easy to see that $(v_1, v_2)$ and $(v_1, v_3)$ now have different representations.
C ADDITIONAL DETAILS ON THE DATA
See Table 5 for the datasets we consider additionally in the appendix.
D MODEL CONFIGURATIONS AND TRAINING
We trained all models for 80 runs using the Bayesian optimization provided by wandb4 and the following hyperparameters.
hidden dimension: 32, 64, 128, 256
learning rate: 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001
dropout: 0, 0.3, 0.6
# of layers: 1-7
# of virtual nodes (random): 1-10
# of virtual nodes: 1, 2, 4, 8, 16, 32, 64
SGC - K: 2-7
APPNP - α: 0.05, 0.1, 0.2, 0.3
GNN-GDC - k: 64, 128
GNN-GDC - α: 0.05, 0.1, 0.2, 0.3
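A hypothetical wandb sweep configuration mirroring this search space could look as follows (the metric name, project, and training function are placeholders, not our actual setup):

```python
sweep_config = {
    "method": "bayes",
    "metric": {"name": "valid_hits", "goal": "maximize"},
    "parameters": {
        "hidden_dim":        {"values": [32, 64, 128, 256]},
        "lr":                {"values": [0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001]},
        "dropout":           {"values": [0.0, 0.3, 0.6]},
        "num_layers":        {"values": list(range(1, 8))},
        "num_virtual_nodes": {"values": [1, 2, 4, 8, 16, 32, 64]},
    },
}
# sweep_id = wandb.sweep(sweep_config, project="vn-link-prediction")
# wandb.agent(sweep_id, function=train_one_config, count=80)
```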
Please note that we considered the wide ranges of values only in order to find a good general setting. For practical usage a hidden dimension of 256, learning rate of 0.0001, and dropout of 0.3 should work well; only on the small graphs a dropout of 0 might work better. As usual, the number of layers depends on the type of data; however, note that the virtual nodes make it possible to use more than the usual 2-3 layers. Generally, higher numbers of virtual nodes work better, in line with our theoretical results.
Also note that we used fewer virtual nodes in the selection for the models (-RM, -RMF ) since especially -RMF was very slow and preliminary results showed that larger numbers did not change the results greatly – probably due to the randomness. We used maximally 64 virtual nodes due to memory issues with larger numbers (e.g., 128), especially on the larger datasets. We report the specific numbers of GNN layers and virtual nodes used by the trained models from Tables 2, 8, and 3 in Table 6. For the first clustering in GNN-CM+, we created 150 clusters on cora and pubmed, 200 clusters on ddi and collab, and 1000 on ppa10.
We tuned all models for 80 runs, and thereafter ran the models with the best 3 configurations for 3 runs and chose the best of these model as the final model (configuration).
We trained as suggested by the OGB (e.g., the splits, negative sampling) but used a batch size of $2^{12}$ and sometimes adapted the number of runs due to lack of resources; we used 3 for the experiments on collab and ppa10 in Table 2. However, we ran several of our models for 10 runs as required for results on the OGB leaderboards and the numbers are comparable (see Table 10).
4https://wandb.ai/site
We used 500 epochs with a patience of 30. Furthermore, for collab, we used the validation edges during testing (OGB contains both settings, with and without them).
E ADDITIONAL EXPERIMENTAL RESULTS
E.1 RESULTS ON ppa
The ppa dataset is challenging in both its size and density. Since we lacked the resources to run experiments for all baselines on this dataset, we compare our best models (trained only on ppa10, without additional hyperparameter tuning) to the OGB leaderboard in Table 7. For GCN, we see that our virtual node approach is able to improve the results considerably – even if only trained on 10% of the data.
E.2 RESULTS ON cora
We also ran the models on the small cora data; the results confirm our expectation that virtual nodes for link prediction pay off mainly in challenging graphs. In contrast, for cora, we already get good scores with a regular GCN. See Table 8.
E.3 RUNTIME
We show the runtimes on ddi in Figure 5. Here we see that a single virtual node can have a positive impact at the same time on both prediction scores and efficiency, while the clustering takes more time.
E.4 COMPARISON OF CLUSTERING ALGORITHMS
See Table 9 and analysis in the main paper.
E.5 ADDITIONAL RUNS FOR collab
Table 10 compares several 10-run averages over collab to the 3-run averages. The numbers are stable.
F CLUSTER ANALYSIS
We computed additional statistics about our “virtual node clusters” (i.e., a cluster represents a set of nodes connected to the same virtual node). Our hypothesis was that our proposed clustering based on
the graph structure better reflects the distribution of test edges than, for example, random clustering. We report the results in Table 11. For the -RMF and -CM+ models we report two numbers. The upper one shows the average number of intra-cluster test edges over 10 runs. The numbers in the lower part distinguish the actual edges and reflect how many different test edges occur in a cluster over the 10 runs. These numbers hence represent lower and upper bounds respectively.
As expected, the numbers for -CM are in between those bounds. For ddi, we see that the -CM+ and -RMF numbers are very similar, while the ones for -CM+ are much better over collab and ppa10.
G INVESTIGATION OF NODE EMBEDDINGS
We also investigated the embeddings of the virtual nodes and compared them to the ones of the regular graph nodes, but we could not derive many conclusions. The main finding is that the virtual node embeddings are much more diverse than the mean of the embeddings in the corresponding cluster – we would have expected them to be similar.
H DETAILS ABOUT P-GNN
The model closest to our approach is the position-aware graph neural network (P-GNN) (You et al., 2019). It assigns nodes to random subsets of nodes called “anchor-sets”, and then learns a non-linear aggregation scheme that combines node feature information from each anchor-set and weighs it by the distance between the node and the anchor-set. That is, it creates a message for each node for every anchor-set, instead of for each direct neighbor.
We ran experiments with P-GNN but did not obtain conclusive results. It did not run on the larger datasets. For ddi, we considered the number of anchor nodes as hyperparameter since the fixed choice of 64 from the experiments of (You et al., 2019) did not yield good results. However, larger numbers such as 128 or 512 resulted in very large runtimes (9 hrs / epoch). The result in Table 3 is an intermediate best value after 50 runs. We contacted the authors and they indeed mentioned that the model is not very scalable and suggested to use just the anchor-set distance as additional features, instead of overtaking the adapted message passing as well. We did not do this extra experiment since the SAGE +dist model, whose numbers we report, follows a similar approach. | 1. What is the focus of the paper regarding virtual nodes in link prediction?
2. What are the strengths of the paper, particularly in terms of experimental evaluation and theoretical analysis?
3. Do you have any concerns or questions about the paper, such as deciding the number of virtual nodes for practical use? | Summary Of The Paper
Review | Summary Of The Paper
This paper analyses the roles of virtual nodes in the link prediction problem. Extensive experiments are conducted to support the claims and show that virtual nodes can improve the link prediction performance of GNN.
Review
Strengths:
I agree with the paper that virtual nodes lack a better understanding. It is good to see that extensive experiments are conducted to evaluate several virtual node strategies and GNN-CM can improve performance in many cases.
The paper theoretically analyses the effect of virtual nodes on influence distributions
Concerns:
I'm particularly interested in how to decide the number of virtual nodes because it is important for practical use. In Table 6 in the appendix, GNN-CM uses a high number of virtual nodes, but in Figure 3 the trend seems different. In Figure 3, on ddi, GNN-CM with 2 virtual nodes works the best. Is there any practical guidance?
By the way, some sentences need to be checked. For example, at the beginning of section 4, "......a relationship that might strongly influenced depend on surrounding relations" |
ICLR | Title
Revisiting Virtual Nodes in Graph Neural Networks for Link Prediction
Abstract
It is well known that the graph classification performance of graph neural networks often improves by adding an artificial virtual node to the graphs, which is connected to all nodes in the graph. Intuitively, the virtual node provides a shortcut for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes on other problems is still an open research question. In this paper, we adapt the concept of virtual nodes to the link prediction scenario, where we usually have much larger, often dense, and more heterogeneous graphs. In particular, we use multiple virtual nodes per graph and graph-based clustering to determine the connections to the graph nodes. We also investigate alternative clustering approaches (e.g., random or more advanced) and compare to the original model with a single virtual node. We conducted extensive experiments over different datasets of the Open Graph Benchmark (OGB) and analyze the results in detail. We show that our virtual node extensions yield rather stable performance increases and allow standard graph neural networks to compete with complex state-of-the-art models, as well as with the models leading the OGB leaderboards.
1 INTRODUCTION
Link prediction is an important task to complete graphs that are missing edges in various domains: citation networks (Kipf & Welling, 2016), social networks (Adamic & Adar, 2003), medical drug interaction graphs (Abbas et al., 2021), or knowledge graphs (KGs) (Ji et al., 2021). Numerous kinds of models have been proposed to solve the link prediction problem, ranging from KG-specific predictors (Ji et al., 2021) to graph neural networks (GNNs) (Kipf & Welling, 2016; Zhang & Chen, 2018). Over dense biomedical networks, GNNs turned out to work especially well (Hu et al., 2020).
In this work, we focus on graph neural networks for link prediction. Many of the popular GNNs are based on the message-passing scheme, which computes node embeddings based on iteratively aggregating the features of (usually direct/one-hop) neighbor nodes along the graph edges (Gilmer et al., 2017). Interestingly, best performance is usually obtained by only considering two to three hops of neighbors (i.e., 2-3 layers in the GNN). One main reason identified for this is over-smoothing, the problem that node representations become indistinguishable when the number of layers increases (Li et al., 2018). The exponentially-growing amount of information has also been suggested as one issue connected to capturing long-range dependencies (Alon & Yahav, 2021). While it is likely that link prediction most often depends on the local node neighborhood, it is not beyond imagination that there are critical long-range dependencies (e.g., complex chains of drug-drug or drug-protein interactions). Hence, using a small number of layers to overcome the above problems results in under-reaching.
There have been several recent proposals to overcome under-reaching. On the one hand, several works propose techniques that allow for larger numbers of GNN layers (Xu et al., 2018; Wu et al., 2019; Liu et al., 2020; Chen et al., 2020; Sun et al., 2021; Zhou et al., 2020; Li et al., 2020a). However, although (Chen et al., 2020) show that over-smoothing happens particularly in dense graphs, the link prediction experiments in these works consider citation or recommendation networks, but not the especially dense biomedical ones. And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data. On the other hand, there are approaches that adapt the message-passing scheme to consider neighbors beyond the one-hop neighborhood: based on graph diffusion (Atwood & Towsley, 2016; Klicpera et al., 2019a; Abu-El-Haija et al., 2019; Xu et al., 2019a; Ma et al., 2020; Klicpera et al., 2019b) and other theories (Morris et al., 2019; You
et al., 2019). However, most of these models are relatively complex and, in fact, in our experiments over the challenging graphs from the Open Graph Benchmark (OGB) (Hu et al., 2020), several ran out of memory. Moreover, the majority has not considered link prediction, while this problem was recently shown to be more difficult than node classification (Zhang et al., 2020).
In this paper, we propose a simple but elegant solution to under-reaching based on the concept of virtual nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Virtual nodes are well known to often improve the graph classification performance of graph neural networks, where an artificial virtual node is added to every graph and connected to all nodes in the graph. While the virtual nodes were originally thought as representations of the entire graph, they also provide shortcuts for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes for the link prediction problem has not been investigated yet. The reason for this might be that the often very large and heterogeneous “network” graphs in link prediction are of very different nature and require novel/adapted solutions (e.g., a protein interaction network may easily contain millions of nodes, whereas a molecule to be classified contains usually less than fifty).
We explore application and effects of virtual nodes in link prediction theoretically and empirically:
• We propose to use multiple virtual nodes in the link prediction scenario and describe a graph-based technique to connect them to the graph nodes. Consider Figure 1. In a nutshell, we use a graph clustering algorithm to determine groups of nodes in the graph that belong together and then connect these nodes to a common virtual node. In this way, under-reaching is decreased because clustered nodes can share information easily; at the same time, the nodes are spared of unnecessary information from unrelated nodes (i.e., in contrast to the single virtual node model).
• We also investigate alternative methods to determine the virtual node connections (e.g., randomization in clustering) and compare to the original model with a single virtual node.
• We theoretically investigate the benefit of using (multiple) virtual nodes in terms of two aspects: influence score and the expressiveness in learning a structural link representation.
• We conducted extensive experiments over challenging datasets of different type, provide ablation studies that confirm the superiority of our proposed techniques, analyze the results in detail, and provide first guidelines about how to use virtual nodes with different types of data and GNNs.
• Most importantly, we show that our virtual node extensions most often yield rather stable performance increases and allow standard GNNs to compete with complex state-of-the-art models that also try to improve message passing, as well as with the models leading the OGB leaderboards.
2 RELATED WORK
We give an overview on approaches that are similar from a technical perspective; for a more detailed summary, see Appendix A. For a more general overview of the large and diverse field of link prediction, we refer to good summaries in recent works (Martínez et al., 2016; Zhang et al., 2020).
Deeper GNNs. Several techniques address over-smoothing and hence allow for constructing deeper GNNs to solve under-reaching. These models range from the simple but efficient message propagation in SGC (Wu et al., 2019; Liu et al., 2020) and APPNP (Klicpera et al., 2019a) and connections in JKNet (Xu et al., 2018), to more advanced proposals (Chen et al., 2020; Sun et al., 2021; Zhou et al., 2020; Li et al., 2020a) such as the differentiable aggregation functions in DeeperGCN (Li et al., 2020a). However, although (Chen et al., 2020) show that over-smoothing happens particularly in dense graphs, the experiments in most of these works consider citation or recommendation networks, but not the especially dense and important biomedical ones. And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data.
Beyond One-Hop Neighbors. Recently, graph diffusion methods are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood. Atwood & Towsley (2016) use k-hop random walks to extend the node features. APPNP (Klicpera et al., 2019a) applies personalized PageRank to propagate the node predictions generated by a neural network. Other models concatenate (Abu-El-Haija et al., 2019) or aggregate (Xu et al., 2019a; Ma et al., 2020) node embeddings in every layer using a diffusion-based transition matrix. The diffusion-based graph neural network (GDC) (Klicpera et al., 2019b) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion. Subsequent works use diffusion methods on multiple scales (Liao et al., 2019; Luan et al., 2019; Xhonneux et al., 2020) and attention (Wang et al., 2020). Morris et al. (2019) take higher-order graph structures at multiple scales into account during message passing based on the k-dimensional Weisfeiler and Leman graph algorithm. All the above approaches are relatively complex, many terminated with memory errors in our experiments, and few have been evaluated for link prediction.
Virtual Nodes. To the best of our knowledge, virtual nodes have only been considered in the context of graph classification so far, where a single virtual node (also called supernode) is added to the graph to be classified and connected to all graph nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Note that the original idea was to compute a graph embedding in parallel with the node embeddings and even connected the virtual node only in one direction (i.e., via edges from the graph nodes) instead of bidirectionally (Li et al., 2017).
There are some GNNs which point out special nodes that we could consider as “virtual”. Fey et al. (2020) propose a GNN for molecule graph classification which clusters certain nodes within a molecule using a structure-based, molecule-specific algorithm and then applies message passing within and between these clusters. The graph-partition based message passing from Liao et al. (2018) also used clustering, but it just divides the original messages into inter- and intra-cluster. Our approach creates new “paths” in the graph and we theoretically demonstrate its expressiveness. P-GNN (You et al., 2019) assigns nodes to random clusters (“anchor-sets”) and then creates a message for each node for every anchor-set, while ignoring the message passing from original direct neighbors. Our virtual nodes represent an alternative means to aggregate messages from multiple graph nodes which are not necessarily direct neighbors. We also explore the idea of similar random assignments in our context, but show that more elaborate techniques generally work better. Most importantly, we do not propose a specific, new GNN but a new technique for augmenting existing graph neural networks.
Although it is a well-known trick, the advantage of using virtual nodes has never been theoretically investigated nor fully understood. We focus on link prediction and considerably extend the virtual node technique. There are commonalities in the advantages of using virtual nodes for graph classification and link prediction, but their role in link prediction is to improve the representation of the link instead of the graph (nodes). We analyze theoretically and empirically how they improve GNN performance.
3 PRELIMINARIES
Link Prediction. We consider an undirected graph G = (V, E) with nodes V and edges E ⊆ V × V. Note that this basic choice is only for ease of presentation. All our techniques work for directed graphs and, with simple adaptation, also for graphs with labelled edges. We assume V to be ordered and may refer to a node by its index in V. For a node v ∈ V, $N_v$ denotes the set of its neighbors. Given two nodes, the link prediction task is to predict whether there is a link between them.
Message-Passing Graph Neural Networks. In this paper, we usually use the term graph neural networks (GNNs) to denote GNNs that use message passing as described by Gilmer et al. (2017). These networks compute for every v ∈ V a node representation $h^{\ell}_v$ at layer $\ell \in [1, 2, \ldots, k]$, by aggregating its neighbor nodes based on a generic aggregation function and then combine the obtained vector with $h^{\ell-1}_v$ as below; $h^0_v$ are the initial node features.
$$h^{\ell}_v = \mathrm{COMBINE}^{\ell}\big(h^{\ell-1}_v,\ \mathrm{AGGREGATE}^{\ell}(\{h^{\ell-1}_u \mid u \in N_v\})\big) \quad (1)$$
Link prediction with GNNs is usually done by combining (e.g., concatenating) the final representations $h^L_u$, $h^L_v$ of the nodes u, v under consideration and passing them through several feed-forward layers with a final sigmoid function for scoring. We follow this approach.
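To make this scoring step concrete, the following is a minimal PyTorch-style sketch of such a link predictor head; the module structure, dimensions, and number of layers are illustrative assumptions, not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class LinkPredictor(nn.Module):
    """Scores a candidate edge (u, v) from the final GNN node embeddings."""

    def __init__(self, hidden_dim: int, num_layers: int = 2):
        super().__init__()
        layers = []
        in_dim = 2 * hidden_dim  # concatenation of h_u and h_v
        for _ in range(num_layers - 1):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        layers.append(nn.Linear(in_dim, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, h_u: torch.Tensor, h_v: torch.Tensor) -> torch.Tensor:
        # h_u, h_v: [batch, hidden_dim] final-layer embeddings of the two endpoints
        score = self.mlp(torch.cat([h_u, h_v], dim=-1))
        return torch.sigmoid(score).squeeze(-1)  # probability of a link
```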
We further use [1, n] to denote an interval [1,2,. . . , n].
4 VIRTUAL NODES IN GRAPH NEURAL NETWORKS FOR LINK PREDICTION
So far, virtual nodes have only been used for graph classification. Link prediction scenarios are different in that the graphs are usually very large, heterogeneous, sometimes dense, and the task is to predict a relationship that might be strongly influenced by surrounding relations. In the following, we propose approaches that fit these scenarios.
4.1 MULTIPLE VIRTUAL NODES
Our main goal of using virtual nodes is to provide a shortcut for sharing information between the graph nodes. However, the amount of information in a graph with possibly millions of nodes is enormous, and likely too much to be captured in a single virtual node embedding. Further, not all information is equally relevant to all nodes. Therefore we suggest using multiple virtual nodes $S = \{s_1, s_2, \ldots, s_n\}$¹, each being connected to a subset of graph nodes, as determined by an assignment σ : V → [1, n]; n is considered a hyperparameter. We propose different methods to obtain σ:
Random (GNN-RM). Most simple, we can determine a fixed σ randomly once with initialization.
Increased Randomness (GNN-RMF). Similarly a random assignment, but re-initialized with every forward pass. In this way, a single bad assignment does not determine the overall performance.
Clustering (GNN-CM). Many types of graph data incorporate a certain cluster structure (e.g., collaboration or social networks) that reflects which nodes belong closely together. We propose to connect such nodes in a cluster to a common virtual node, such that the structure inherent to the given graph is reflected in our virtual node assignment σ. More precisely, during initialization, we use a generic clustering algorithm which, given a number m, creates a set C = {C1, C2 . . . , Cm} of clusters (i.e., sets of graph nodes) by computing an assignment ρ : V → [1,m], assigning each graph node to a cluster. We then obtain σ by choosing m = n and σ = ρ.
In this work, we decided for the METIS clustering (Karypis & Kumar, 1998) which turned out to provide a good trade off between quality and efficiency. Nevertheless, our idea is generic and can be applied with arbitrary algorithms. We will show ablation experiments for alternatives (e.g., Graclus (Dhillon et al., 2007) and Diffpool (Ying et al., 2018b)).
Advanced Clustering (GNN-CM+). Not every type of graph data contains an inherent cluster structure or one that is sufficiently expressed. Furthermore, using a fixed clustering, we obtain a deterministic algorithm again, taking the risk that we completely rely on a single, possibly not ideal, virtual node assignment – there may be critical long-range dependencies that go beyond clusters. For these cases, we propose an alternative approach, which breaks up the determinism by extending the above clustering as follows. We choose a relatively large m, with m ≫ n, and apply the above clustering algorithm during initialization. Then, in each epoch, we randomly guess an assignment σ′ : [1, m] → [1, n] of clusters to virtual nodes and define σ(u) := σ′(ρ(u)) for every node u ∈ V. Note that this approach is inspired by Chiang et al. (2019), who apply a similar technique to create batches based on clusters. Further note that we determine σ′ with every epoch instead of every forward pass, since the computation takes quite some time on large datasets and we observed that this yields good results.
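A minimal sketch of how such assignments σ could be constructed is given below; it assumes the pymetis bindings for METIS and an edge-list graph representation, so the library call and helper names are assumptions rather than the authors' exact implementation.

```python
import random
from collections import defaultdict

import numpy as np
import pymetis  # assumed METIS Python bindings; any graph partitioner works


def random_assignment(num_nodes: int, n_virtual: int) -> np.ndarray:
    """GNN-RM / GNN-RMF: assign each node to a random virtual node."""
    return np.random.randint(0, n_virtual, size=num_nodes)


def metis_assignment(num_nodes: int, edges, n_parts: int) -> np.ndarray:
    """GNN-CM: cluster the graph with METIS and map cluster i -> virtual node i."""
    adjacency = defaultdict(list)
    for u, v in edges:                      # undirected edge list
        adjacency[u].append(v)
        adjacency[v].append(u)
    adj_list = [adjacency[i] for i in range(num_nodes)]
    _, membership = pymetis.part_graph(n_parts, adjacency=adj_list)
    return np.array(membership)


def advanced_assignment(rho: np.ndarray, m: int, n_virtual: int) -> np.ndarray:
    """GNN-CM+: randomly merge the m fine-grained clusters into n virtual nodes,
    re-drawn every epoch."""
    sigma_prime = np.array([random.randrange(n_virtual) for _ in range(m)])
    return sigma_prime[rho]                 # sigma(u) = sigma'(rho(u))
```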
4.2 THE MODEL
We integrate the multiple virtual nodes into a generic message-passing graph neural network by extending the approach from Hu et al. (2020) to the setting with multiple virtual nodes, by computing node representations $h^{\ell}_v$ for a node v ∈ V at layer $\ell$ as follows:
$$h^{\ell}_{s_i} = \mathrm{COMBINE}^{\ell}_{VN}\big(h^{\ell-1}_{s_i},\ \mathrm{AGGREGATE}^{\ell}_{VN}(\{h^{\ell-1}_u \mid u \in V,\ \sigma(u) = i\})\big) \quad (2)$$
$$h^{\ell}_v = \mathrm{COMBINE}^{\ell}\big(h^{\ell-1}_v + h^{\ell}_{s_{\sigma(v)}},\ \mathrm{AGGREGATE}^{\ell}(\{h^{\ell-1}_u \mid u \in N_v\})\big) \quad (3)$$
Note that the highlighted adaptation of the standard GNN processing from Equation (1) is only minor – but powerful. In our implementation, $\mathrm{COMBINE}^{\ell}_{VN}$ is addition combined with linear layers and layer normalization, and we use sum for $\mathrm{AGGREGATE}^{\ell}_{VN}$.
1Since notation V is standard for nodes, we use S for the set of virtual nodes. Think of “supernodes”.
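As a rough illustration of Equations (2) and (3), the snippet below sketches one GNN layer augmented with multiple virtual nodes in PyTorch; the mean aggregator over graph neighbors and the exact normalization are assumptions made for readability, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class VirtualNodeGNNLayer(nn.Module):
    """One message-passing layer with multiple virtual nodes (cf. Eqs. (2) and (3))."""

    def __init__(self, dim: int, n_virtual: int):
        super().__init__()
        self.vn_emb = nn.Parameter(torch.zeros(n_virtual, dim))   # initial virtual-node states
        self.combine_vn = nn.Sequential(nn.Linear(dim, dim), nn.LayerNorm(dim), nn.ReLU())
        self.combine = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, edge_index, sigma, h_vn=None):
        # h: [num_nodes, dim], edge_index: [2, num_edges], sigma: [num_nodes] virtual-node ids
        if h_vn is None:
            h_vn = self.vn_emb
        # Eq. (2): sum all nodes assigned to each virtual node, then combine with its old state.
        agg_vn = torch.zeros_like(h_vn).index_add_(0, sigma, h)
        h_vn = self.combine_vn(h_vn + agg_vn)
        # Eq. (3): mean-aggregate graph neighbors (assumption: mean aggregator).
        src, dst = edge_index
        neigh_sum = torch.zeros_like(h).index_add_(0, dst, h[src])
        deg = torch.zeros(h.size(0), device=h.device).index_add_(
            0, dst, torch.ones_like(dst, dtype=h.dtype)).clamp(min=1).unsqueeze(-1)
        neigh_mean = neigh_sum / deg
        # add the current virtual-node state to the node's own state before combining
        h = self.combine(torch.cat([h + h_vn[sigma], neigh_mean], dim=-1))
        return h, h_vn
```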
4.3 ANALYSIS: VIRTUAL NODES CHANGE INFLUENCE
Influence Score. Following (Xu et al., 2018; Klicpera et al., 2019a), we measure the sensitivity (also, influence) of a node x to a node y by the influence score $I(x, y) = e^T \frac{\partial h^k_x}{\partial h^0_y}$; e is a vector of all ones, $h^k_x$ is the embedding of x at the k-th layer, see Equations (1) and (3). For a k-layer GNN, the influence score is known to be proportional in expectation to the k-step random walk distribution from x to y:²
$$E[I(x, y)] \propto P_{rw}(x \to y, k) = \sum_{r \in R^k} \prod_{\ell=1}^{k} \frac{1}{\deg(v^{\ell}_r)}, \quad (4)$$
where $(v^0_r, v^1_r, \ldots, v^k_r)$ are the nodes in the path r from $x := v^0_r$ to $y := v^k_r$, and $R^k$ is the set of paths of length k. In what follows, we will exploit this relationship and argue in terms of the probability $P_{rw}$.
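For intuition, the k-step random walk distribution in Equation (4) can be computed explicitly on small graphs. The following numpy sketch does this with a row-normalized adjacency matrix on an assumed toy graph; depending on the exact degree convention in Eq. (4), the result may differ by which endpoint's degree enters the product.

```python
import numpy as np

def k_step_random_walk(edges, num_nodes: int, x: int, k: int) -> np.ndarray:
    """Returns the k-step random walk distribution starting from node x."""
    A = np.zeros((num_nodes, num_nodes))
    for u, v in edges:                      # undirected toy graph
        A[u, v] = A[v, u] = 1.0
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1.0)            # row-stochastic transition matrix
    dist = np.zeros(num_nodes)
    dist[x] = 1.0
    for _ in range(k):
        dist = dist @ P                     # one random-walk step
    return dist

# toy example (assumed): a path graph 0-1-2-3 plus a chord 1-3
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
print(k_step_random_walk(edges, num_nodes=4, x=0, k=2))
```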
Virtual Nodes. For simplicity, consider the influence score in an m-regular graph; there we have $P_{rw}(x \to y, k) = \frac{|R^k|}{m^k}$. We hypothesize that we can come to similar conclusions in a general graph with average degree m. Consider the message passing between two distant nodes x and y. (I) In case the shortest path from x to y is of length > k, a k-layer GNN cannot capture it, and the probability $P_{rw}(x \to y, k)$ is obviously zero. If we then consider virtual nodes in the GNN layer (even with only one), we can pass messages from x to y through the virtual nodes and obtain a nonzero probability. (II) Consider the case where there is a shortest path of length ≤ k between x and y. By adding a virtual node s in one GNN layer, the probability changes to:
$$P^s_{rw}(x \to y, k) = P_{rw}(x \to y, k) + P_{rw}(x \to s, s \to y) = \frac{|R^k|}{(m+1)^k} + \frac{1}{(m+1)|V|}. \quad (5)$$
Compared to the original probability, we get the following impact ratio for using virtual nodes:
$$ir = \frac{m^k}{(m+1)^k} + \frac{m^k}{(m+1)|V||R^k|}. \quad (6)$$
When m is large enough, ir can be approximated by $ir \approx 1 + \frac{m^{k-1}}{|V||R^k|}$. Here, we see that the impact of virtual nodes grows when m increases. Our experiments confirm this theoretical observation.
Multiple Virtual Nodes. In view of multiple virtual nodes, the above analysis gets even more appealing. We continue along these lines and assume there is a shortest path of length ≤ k between x and y. If x and y connect to the same virtual node s, then Equation (5) changes as follows:
$$P^s_{rw}(x \to y, k) = \frac{|R^k|}{(m+1)^k} + \frac{1}{(m+1)|C_s|}. \quad (7)$$
Since the set $C_s$ of nodes connecting to s is much smaller than V, the impact of multiple virtual nodes is greater than that of a single virtual node. On the other hand, if x and y do not connect to the same virtual node, the probability just slightly decreases from $\frac{|R^k|}{m^k}$ to $\frac{|R^k|}{(m+1)^k}$.
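To get a feel for the magnitudes in Equations (5)–(7), the short computation below plugs illustrative, assumed values for m, k, |V|, |R^k|, and a cluster size |C_s| into the formulas and compares the resulting probabilities.

```python
# Illustrative (assumed) values; not taken from the paper's datasets.
m, k = 50, 3          # average degree, number of GNN layers
V = 100_000           # number of graph nodes
Rk = 5                # number of length-k paths between x and y
Cs = V // 64          # cluster size when using 64 virtual nodes

p_plain = Rk / m**k                                    # no virtual node
p_single = Rk / (m + 1)**k + 1 / ((m + 1) * V)         # Eq. (5), one virtual node
p_multi = Rk / (m + 1)**k + 1 / ((m + 1) * Cs)         # Eq. (7), shared virtual node

print(f"plain GNN:            {p_plain:.3e}")
print(f"single virtual node:  {p_single:.3e}")
print(f"shared virtual node:  {p_multi:.3e}")
```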
In Appendix B, we further show that using multiple virtual nodes is related to (but not equal to) the labeling trick (Zhang et al., 2020) and distance encoding (Li et al., 2020b), and it can theoretically improve the expressiveness in learning structural link representations (see Theorem 1 and Figure 4(b)).
5 EVALUATION
We conducted extensive experiments and ablation studies to empirically investigate:
• How does the existing approach with one virtual node perform in link prediction?
• Do multiple virtual nodes improve performance, and how do our proposed approaches compare?
• In particular, are approaches based on the graph structure better?
• How exactly do virtual nodes support link prediction? When do they help particularly?
2See Theorem 1 in (Xu et al., 2018). Note that the theorem makes some simplifying assumptions (e.g., on the shape of GNN).
Table 2: Results over the different dataset types/amounts; (second) best results are (light) gray, overall best bold, second best underlined.

Model     ddi (Hits@20)     ppa10 (Hits@100)   collab (Hits@50)   pubmed (Hits@20)
GCN       0.4076 ± 0.1073   0.1313 ± 0.0084    0.4955 ± 0.0064    0.9675 ± 0.0143
- VN      0.6217 ± 0.1241   0.1258 ± 0.0082    0.5049 ± 0.0088    0.9579 ± 0.0214
- RM      0.5532 ± 0.1262   0.1205 ± 0.0059    0.5083 ± 0.0109    0.9522 ± 0.0110
- RMF     0.5830 ± 0.0855   0.1116 ± 0.0094    0.5046 ± 0.0049    0.8100 ± 0.0781
- CM      0.6105 ± 0.1563   0.1299 ± 0.0050    0.5181 ± 0.0076    0.9575 ± 0.0230
- CM+     0.6033 ± 0.1759   0.1399 ± 0.0071    0.5128 ± 0.0129    0.9189 ± 0.0514
SAGE      0.6173 ± 0.1068   0.1024 ± 0.0050    0.5662 ± 0.0149    0.9779 ± 0.0105
- VN      0.6491 ± 0.1360   0.0853 ± 0.0154    0.5875 ± 0.0091    0.9659 ± 0.0333
- RM      0.7068 ± 0.1174   0.1131 ± 0.0039    0.5830 ± 0.0087    0.9433 ± 0.0208
- RMF     0.7564 ± 0.1055   0.1105 ± 0.0023    0.6067 ± 0.0063    0.9800 ± 0.0087
- CM      0.7621 ± 0.1157   0.1077 ± 0.0150    0.6056 ± 0.0105    0.9834 ± 0.0068
- CM+     0.8251 ± 0.0678   0.0963 ± 0.0099    0.5940 ± 0.0262    0.9754 ± 0.0139
GIN       0.4321 ± 0.1353   0.1139 ± 0.0058    0.5768 ± 0.0179    0.9234 ± 0.0166
- VN      0.5260 ± 0.1227   0.1316 ± 0.0049    0.5863 ± 0.0254    0.9790 ± 0.0070
- RM      0.5084 ± 0.1324   0.1337 ± 0.0045    0.5412 ± 0.0174    0.9604 ± 0.0158
- RMF     0.5310 ± 0.1453   0.1269 ± 0.0026    0.5335 ± 0.0087    0.7986 ± 0.0993
- CM      0.5664 ± 0.0860   0.1349 ± 0.0034    0.5821 ± 0.0081    0.9125 ± 0.0378
- CM+     0.4339 ± 0.1855   0.1591 ± 0.0069    0.5557 ± 0.0026    0.9037 ± 0.0262
Datasets. We focused on challenging data from the OGB: ddi, a drug-drug interaction network; ppa10, a subset of the protein-protein association network ppa containing only 10% of the train edges (but full valid/test); and collab, an author collaboration network. To learn more about smaller data of similar type, we also tested on the citation networks pubmed (Yang et al., 2016). Since the datasets are not only very different in type but also in various other critical graph parameters and this is reflected in the performance of the models, we show relevant statistics in Table 1.³ The datasets vary strongly in size with ddi being smallest among the biomedical; on the other hand, ddi is very dense. The clustering coefficient intuitively reflects the “cliquishness” of the graph’s subgraphs. The large diameters suggest that the data suits testing under-reaching. Appendix C gives further details and describes datasets we consider in additional experiments in the appendix.
Baselines. For a competitive comparison, we considered important baselines (described in Section 2):
• The deep GNNs SGC, APPNP, DeeperGCN, and two variants of JKNet.
• Approaches extending message passing: P-GNN, APPNP, GCN-GDC, SAGE-GDC, GIN-GDC.
• The popular GNNs GCN (Kipf & Welling, 2017), SAGE (Hamilton et al., 2017), and GIN (Xu et al., 2019b), which we then extend with (multiple) virtual nodes.
3See Tables 2 and 3 in (Hu et al., 2020). We computed the numbers for ppa10 (which we focus on due to a lack of resources), and pubmed using the same techniques.
5.1 RESULTS
Overall Impact of Virtual Nodes, Tables 2, 7, 8 (Appendix). We compare to GCN, SAGE, and GIN. The common approach of using a single virtual node (GNN-VN) yields good improvements over ddi, slight improvements over collab, but no definitive ones over ppa10; over pubmed, it works very well for GIN. The numbers for GNN-RM and GNN-RMF reflect the randomness of their connections to the virtual nodes, there is no clear trend. Nevertheless, they clearly outperform the original models, with only few exceptions. The increased randomness by re-assigning the virtual nodes with every forward pass (GNN-RMF ) seemingly suits SAGE but not the others. As expected, over the small pubmed/cora, which also have no cluster structure, the results are not consistent or convincing overall; virtual nodes only yield improvement sometimes, and none for GCN. Yet, on the more challenging datasets, multiple virtual nodes turn out to be an efficient means to boost the link prediction performance of GNNs if they are applied correctly. Our virtual node connections based on the graph structure (GNN-CM) yield consistently good improvements over ddi and collab, and mostly help on the challenging ppa10 dataset. On collab, we did further experiments using GAT (Veličković et al., 2017) and also observe a clear performance gain: 0.4745 vs. 0.5876 (GAT-CM). GNN-CM and GNN-CM+ are not always the best ones, but yield reliably good results, in contrast to the other models with virtual nodes (see variability of gray shades). Interestingly, the advanced clustering yields especially good performance over ppa10/ppa, while its results on the other datasets are not convincing. Generally, the improvements of the virtual node models are strongest on ddi. For an in-depth result analysis see Section 5.2.
Comparison to Related Works and SOTA, Table 3. Most deep GNNs as well as the models that use complex message-passing techniques perform disappointingly and, overall, much worse than the standard GNNs. We did thorough hyperparameter tuning for these models, so these results are hard to explain. However, most of the original evaluations focus on node or graph classification and consider very different types of data – often the standard citation networks (Lu & Getoor, 2003) – and, in fact, on collab we see the best numbers. For a more detailed discussion of P-GNN see Appendix H. Even if we assume that these numbers can be improved, the models do not seem apt for link prediction; in particular, the complex ones: many do not run at all on realistic link prediction data but yield memory errors. Further, our virtual node extensions make standard GNNs competitive with the models on the leaderboard. In particular, their performance is much more stable. The results of the best models from the leaderboard vary strongly with the different datasets, or have not been reported at all. None of these models can be called “good” overall, given the numbers in the - sometimes even missing - rest of the table; in fact, SEAL and Adamic Adar perform rather badly on the very dense ddi.
Impact of Virtual Nodes on Number of GNN Layers and Efficiency, Figure 2. For the virtual node models, the scores increase with the number of layers for a longer time; GCN drops earlier. On ddi, GCN-VN and -CM reach their best scores at 6 and 8 layers, respectively, which is remarkable for that very dense dataset, which is prone to over-smoothing. On collab it is the other way around. The figure also gives an idea of the runtime increase when using virtual nodes. It compares the 6-layer models, and shows the 4-layer GCN-CM which obtains performance similar to the 6-layer GCN-VN.
Impact of Virtual Node Number, Figure 3. First, consider the configurations of the best models for the overall results in Table 2, which are provided in Table 6 in the appendix. Here, we see that the chosen numbers of virtual nodes are indeed random for the “random” models, but GNN-CM consistently uses a high number of virtual nodes, which also suits it better according to our theoretical analysis in Section 4.3. In line with this, the more detailed analysis varying the numbers of virtual nodes yields best results (also in terms of standard deviations) for SAGE-CM at rather high values. For GCN, we do not see a clear trend, but (second) best performance with 64 virtual nodes. Note that there is a trade-off between the number of virtual nodes and intra-cluster test edges, discussed in Section 5.2.
Using Virtual Nodes Only at the Last GNN Layer, Table 4. Alon & Yahav (2021) show that using a fully connected adjacency matrix at the last layer of a standard GNN helps to better capture information over long ranges. We therefore investigated if it is a better architectural choice to use virtual nodes only at the last layer. However, we see that this can lead to extreme performance drops.
Impact of Clustering Algorithm, Table 9 (Appendix). Our architecture is generic in the clustering algorithm, and we investigated the effects of varying that. Graclus is similar in nature to METIS in that it also creates partitions based on the adjacency matrix, but it took much longer to run. Diffpool considers the node features and yields improvements for GCN, but does not scale to larger datasets. Over ddi, there is no clear winner and, given its efficiency, METIS turns out to be a good solution.
5.2 DISCUSSION AND CONCLUSIONS
The results show that our approach with multiple virtual nodes based on graph-based clustering yields performance increases for various GNNs and types of data, but there are clear differences.
Dense Graphs with Medium/High Clustering Coefficient. Over ddi, we see strongest improvements for all virtual-node models. This can be explained by our proposed theory, showing that a very large node degree m increases the impact of the virtual node(s), and thus decreases the negative impact of the (too) many other neighbors (see Equation (6)). Furthermore, the empirical results confirm our proposed theory regarding multiple virtual nodes (see Equation (7)). We see particularly good numbers for GNN-CM, which exploits the clustering inherent in the given graph. GNN-CM+, which considers this given clustering only on a lower level, is shown to perform worse than GNN-CM overall. In fact, we computed the percentage of test edges that occur in the “virtual node cluster” (see Table 11 in the appendix) and it shows that the numbers for the advanced clustering are very similar to the random one, meaning the randomly merged smaller clusters break the data’s structure too much. Interestingly, the experiments show that, even with the dense data that is prone to over-smoothing, virtual nodes make the GNNs score higher with more than the standard 2-3 layers; hence virtual nodes seem to alleviate over-smoothing to some extent, an interesting question for future work.
Graphs with Large Problem Radius and Low Clustering Coefficient. Over ppa10, all GNNs use an unusually high number of layers, which hints at a large problem radius (e.g., GCN, which performs especially well, uses 7 layers). Given the very low clustering of the data in addition, ppa10 represents a special challenge. With the multiple virtual nodes, GNN-CM performs again better than GNN-VN. On the other hand, it does not perform much better than the random models on data without cluster structure. This can be explained by its choice of number of virtual nodes, which is consistently high, but then there are fewer test edges within a virtual node cluster (see appendix Table 11). We hence see here that the positive effect of having many virtual nodes (recall Equation (7)) and the benefits of clustering cancel each other out. Our advanced clustering, which merges some local clustering with randomness, is able to achieve best results with GCN and GIN (with SAGE, all models perform rather badly over ppa10). This can be explained by the fact that it randomly merges some local clusters – with each epoch anew – and hence allows more messages to pass across “virtual node clusters”. We also did some experiments over the very large ppa, which is denser than ppa10, and see a similar trend.
Sparse Graphs with Low to High Clustering Coefficient. We tested on three citation/collaboration networks of different sizes. Note that, over this data, the problem radius is usually assumed to be rather small (Alon & Yahav, 2021), although the graph diameters are large. We investigated virtual nodes to augment link prediction in large and complex graphs; but we also want to provide insight into the behavior on smaller data. Over pubmed (similarly on cora as shown in the appendix), virtual nodes do not provide any improvement for GCN. For GIN, a single virtual node yields good increases; overall, it usually outperforms the settings with multiple virtual nodes. We hypothesize that this is mainly due to the small graph size and sparsity. In fact, on the larger and denser collab, GNN-CM performs very well for all GNNs. The trends in the models’ performance and the corresponding explanations are similar to those for ddi but much less pronounced, probably due to the much smaller node degrees. Yet, the performance is much more stable, possibly because it is larger and not as dense.
Conclusions. We summarize our main findings to provide first guidelines for applying virtual nodes:
• Small + Sparse Graphs: A single virtual node is likely to boost performance of GIN, and virtual nodes should help with SAGE, but probably not with GCN.
• Large + Sparse Graphs: If there is cluster structure, GNN-CM should yield stable performance increases. If the problem radius is large or there is little cluster structure, GNN-CM+ is worth a try.
• Dense Graphs + Clustering: Multiple virtual nodes (i.e., GNN-CM) likely increase performance.
6 CONCLUSIONS
We propose a simple but elegant graph neural network extension using multiple virtual nodes that may considerably increase link prediction performance. We also advance research by providing theoretical justifications - the very first about applying virtual nodes at all - and by showing their positive impact in various experiments. Future work includes the design of more advanced and scalable architectures, and it would be interesting to further investigate the huge performance increases on dense graphs.
A ADDITIONAL DETAILS ON RELATED WORKS
Deeper GNNs. We mention simpler approaches in the Section 2. More advanced proposals are, for example, based on special features and connections (Chen et al., 2020), community-based normalization of node representations using random clustering (Zhou et al., 2020), boosting techniques (Sun et al., 2021), or differentiable aggregation functions in DeeperGCN (Li et al., 2020a).
Beyond One-Hop Neighbors. Graph diffusion methods (i.e., in graph theory, techniques for spreading information between nodes) are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood. (Atwood & Towsley, 2016) use k-hop random walks to aggregate node features and extend the latter by the aggregated ones. APPNP (Klicpera et al., 2019a) applies personalized PageRank to propagate the node predictions generated by a neural network. Other models aggregate node embeddings in every layer, GraphHeat (Xu et al., 2019a) using the heat kernel, PAN (Ma et al., 2020) the transition matrix of maximal entropy random walks, and PinSage (Ying et al., 2018a) using random walks. (Abu-El-Haija et al., 2019) propose to concatenate embeddings aggregated using the transition matrices of k-hop random walks before applying one-hop neighbor aggregation. The diffusion-based graph neural network (GDC) (Klicpera et al., 2019b) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion. Subsequent works use diffusion methods on multiple scales (Liao et al., 2019; Luan et al., 2019; Xhonneux et al., 2020). Recently, (Wang et al., 2020) integrated attention with diffusion-based message propagation.
Position Encodings. Our approach provides a kind of positional embedding (Srinivasan & Ribeiro, 2019) and hence has some commonalities with models extending nodes with positional encodings, e.g., (Li et al.).
B ADDITIONAL THEORETICAL RESULTS: STRUCTURAL LINK REPRESENTATION
Adding structure-related features such as a distance encoding (Li et al., 2020b) has been demonstrated to make graph representation learning more powerful. For link prediction, (Zhang et al., 2020) propose the labeling trick extending distance encoding and making GNNs learn better link representations.
We first recall the definitions from (Zhang et al., 2020) introducing the concept of labeling trick. Consider an undirected graph G as described in Section 3. In addition, the tensor $A \in \mathbb{R}^{n \times n \times k}$ contains all node and edge features (if available). The diagonal components $A_{v,v,:}$ denote the node features, while the off-diagonal components $A_{u,v,:}$ denote the edge features of edge (u, v). The labeling trick uses a target node set S ⊆ V and a labeling function to label all nodes in the node set V and stack the labels with A. A valid labeling trick must meet two conditions: (1) the nodes in S have different labels from the rest of the nodes, (2) the labeling function must be permutation invariant.
Let us recall our method using multiple virtual nodes. Assume we have multiple virtual nodes $S = \{s_1, \ldots, s_m\}$. For every u ∈ V, we have the additional features for the node $l(u|S) = (h(s_1), \ldots, h(s_m))^T(\gamma(u|s_1), \ldots, \gamma(u|s_m))$, where $\gamma(u|s_i) = 1$ if u is connected to the virtual node $s_i$, and $\gamma(u|s_i) = 0$ otherwise. $h(s_i)$ is the node representation of virtual node $s_i$, and is initialized by one-hot vectors so that each virtual node has different labels.
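As a concrete reading of this definition, the following sketch (with assumed one-hot initial virtual-node representations and an assumed toy assignment) computes l(u|S) for every node from the assignment σ.

```python
import numpy as np

def virtual_node_labels(sigma: np.ndarray, m: int) -> np.ndarray:
    """l(u|S) for all nodes u, with one-hot h(s_i); sigma[u] is u's virtual node."""
    H = np.eye(m)                      # h(s_1), ..., h(s_m) as one-hot rows
    Gamma = np.zeros((len(sigma), m))  # gamma(u|s_i) indicator matrix
    Gamma[np.arange(len(sigma)), sigma] = 1.0
    return Gamma @ H                   # row u is l(u|S)

sigma = np.array([0, 0, 1])            # assumed toy assignment: v1, v2 -> s1, v3 -> s2
print(virtual_node_labels(sigma, m=2))
```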
Our labeling strategy is not a valid labeling trick by the definition of (Zhang et al., 2020). First, S is not a subset of V, and we use addition instead of concatenation. Even if we extend V to V ∪ S, our labeling strategy still does not fit the permutation-invariant requirement. Nevertheless, it can achieve similar effects in learning structural link representations.
Theorem 1. In any non-attributed graph with n nodes, if the degree of each node in the graph is between 1 and $O(\log^{\frac{1-\epsilon}{2h}}(n))$ for any constant $\epsilon > 0$, given m virtual nodes which evenly divide the node set into m clusters, then there exist $\omega\big((m-1)^2(\frac{n^{\epsilon}}{m}-1)^3\big)$ many pairs of non-isomorphic links (u, w), (v, w), such that an h-layer 1-WL-GNN (see definitions in (Li et al., 2020b) and (Zhang et al., 2020); one well-known example is GIN (Xu et al., 2019b)) gives u, v the same representation, while using m virtual nodes can give u, v different representations.
Proof. The proof can be separated into two steps. The first step is to prove that there exist $n/o(n^{1-\epsilon}) = \omega(n^{\epsilon})$ many nodes that are locally h-isomorphic. This step is the same as the proof of Theorem 2 in (Zhang et al., 2020), so we omit the details here. After getting these locally isomorphic nodes, we denote the set of these nodes as $V_{iso}$. The second step is to find the non-isomorphic links.
Step 2. Let us partition $V_{iso} = \cup_{i=1}^{m} V_i$, where $V_i$ is the subset of nodes connected to virtual node $s_i$. For simplicity, we call each $V_i$ a cluster, and the sizes of different clusters are assumed to be the same, $|V_i| = |V_{iso}|/m$. Consider two nodes $u \in V_i$ and $v \in V_j$ from different clusters. Since both of them are in $V_{iso}$, they have identical h-hop neighborhood structures, and an h-layer 1-WL-GNN will give them the same representations. Then let us select another node w in $V_i$; an h-layer 1-WL-GNN will also make (u, w) and (v, w) have the same representation.
However, if we use virtual nodes to label nodes and give them additional features, because u, w are in the same cluster while v, w belong to different clusters, (u, w) will have a different representation from (v, w). Now let us count the number of such non-isomorphic link pairs Y; we have:
$$Y \geq \prod_{i,j=1,\, j \neq i}^{m} |V_i|(|V_i|-1)|V_j| = \frac{1}{2}\, m(m-1)\left(\left(\frac{|V_{iso}|}{m}-1\right)\left(\frac{|V_{iso}|}{m}\right)^2\right)$$
Taking $|V_{iso}| = \omega(n^{\epsilon})$ into the above inequality, we get
$$Y \geq \frac{1}{2}\, m(m-1)\,\omega\!\left(\left(\frac{n^{\epsilon}}{m}-1\right)^3\right) = \omega\!\left((m-1)^2\left(\frac{n^{\epsilon}}{m}-1\right)^3\right)$$
Example (Power of Using Multiple Virtual Nodes). In Figure 4, we show two cases with and without virtual nodes. Consider the nodes $v_2$, $v_3$ with the same local structure, which means they get the same node representations when using a 1-WL-GNN. So we cannot discriminate the links $(v_1, v_2)$ and $(v_1, v_3)$ if we just use a 1-WL-GNN and concatenate the node representations for link prediction. However, if we add 2 virtual nodes and add the extra features to each node, $v_1$ and $v_2$ get the new feature (1, 0), and $v_3$ gets the new feature (0, 1). So it is easy to see that $(v_1, v_2)$ and $(v_1, v_3)$ now have different representations.
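The example can be checked mechanically: below is a tiny sketch (with an assumed toy assignment of v1, v2 to one virtual node and v3 to another) showing that concatenated node representations alone cannot separate the two links, while the appended virtual-node indicator features can.

```python
import numpy as np

# Assumed toy setup mirroring the example: identical base embeddings for v2 and v3
# (as a 1-WL-GNN would produce for locally isomorphic nodes).
h = {"v1": np.array([0.5, 0.5]), "v2": np.array([1.0, 0.0]), "v3": np.array([1.0, 0.0])}

# Virtual-node indicator features: v1, v2 -> virtual node 1, v3 -> virtual node 2.
gamma = {"v1": np.array([1, 0]), "v2": np.array([1, 0]), "v3": np.array([0, 1])}

def link_repr(u, v, with_vn):
    fu = np.concatenate([h[u], gamma[u]]) if with_vn else h[u]
    fv = np.concatenate([h[v], gamma[v]]) if with_vn else h[v]
    return np.concatenate([fu, fv])

# Without virtual nodes the two links collapse to the same representation ...
print(np.array_equal(link_repr("v1", "v2", False), link_repr("v1", "v3", False)))  # True
# ... with the virtual-node features they become distinguishable.
print(np.array_equal(link_repr("v1", "v2", True), link_repr("v1", "v3", True)))    # False
```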
C ADDITIONAL DETAILS ON THE DATA
See Table 5 for the datasets we consider additionally in the appendix.
D MODEL CONFIGURATIONS AND TRAINING
We trained all models for 80 runs using the Bayesian optimization provided by wandb⁴ and the following hyperparameters.
hidden dimension              32, 64, 128, 256
learning rate                 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001
dropout                       0, 0.3, 0.6
# of layers                   1-7
# of virtual nodes (random)   1-10
# of virtual nodes            1, 2, 4, 8, 16, 32, 64
SGC - K                       2-7
APPNP - α                     0.05, 0.1, 0.2, 0.3
GNN-GDC - k                   64, 128
GNN-GDC - α                   0.05, 0.1, 0.2, 0.3
Please note that we considered the wide ranges of values only in order to find a good general setting. For practical usage, a hidden dimension of 256, learning rate of 0.0001, and dropout of 0.3 should work well; only on the small graphs a dropout of 0 might work better. As usual, the number of layers depends on the type of data; however, note that the virtual nodes make it possible to use more than the usual 2-3 layers. Generally, higher numbers of virtual nodes work better, in line with our theoretical results.
Also note that we used fewer virtual nodes in the selection for the models (-RM, -RMF) since especially -RMF was very slow and preliminary results showed that larger numbers did not change the results greatly – probably due to the randomness. We used at most 64 virtual nodes due to memory issues with larger numbers (e.g., 128), especially on the larger datasets. We report the specific numbers of GNN layers and virtual nodes used by the trained models from Tables 2, 8, and 3 in Table 6. For the first clustering in GNN-CM+, we created 150 clusters on cora and pubmed, 200 clusters on ddi and collab, and 1000 on ppa10.
We tuned all models for 80 runs, and thereafter ran the models with the best 3 configurations for 3 runs and chose the best of these models as the final model (configuration).
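For reproducibility, a hedged sketch of how such a Bayesian search space could be registered with wandb is shown below; the project name, metric name, and the user-defined training function are placeholders, and the dictionary only mirrors the common hyperparameters from the table above.

```python
import wandb  # assumes the wandb client is installed and the user is logged in

# Hypothetical sweep configuration mirroring the search space listed above.
sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_hits", "goal": "maximize"},
    "parameters": {
        "hidden_dim": {"values": [32, 64, 128, 256]},
        "lr": {"values": [0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001]},
        "dropout": {"values": [0.0, 0.3, 0.6]},
        "num_layers": {"min": 1, "max": 7},
        "num_virtual_nodes": {"values": [1, 2, 4, 8, 16, 32, 64]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="virtual-nodes-link-prediction")
# wandb.agent(sweep_id, function=train_one_config, count=80)  # train_one_config is user-defined
```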
We trained as suggested by the OGB (e.g., the splits, negative sampling) but used a batch size of 2^12 and sometimes adapted the number of runs due to lack of resources; we used 3 for the experiments on collab and ppa10 in Table 2. However, we ran several of our models for 10 runs as required for results on the OGB leaderboards and the numbers are comparable (see Table 10).
4https://wandb.ai/site
We used 500 epochs with a patience of 30. Furthermore, for collab, we used the validation edges during testing (OGB contains both settings, with and without them).
E ADDITIONAL EXPERIMENTAL RESULTS
E.1 RESULTS ON ppa
The ppa dataset is challenging in both its size and density. Since we were missing the resources to run experiments for all baselines on this dataset, we compare our best models (trained only on ppa10; we did not do additional hyperparameter tuning) to the OGB leaderboard in Table 7. For GCN, we see that our virtual node approach is able to improve the results considerably – even if only trained on 10% of the data.
E.2 RESULTS ON cora
We also ran the models on the small cora data, yet the results confirm our expectation that virtual nodes for link prediction should be used in challenging graphs. In contrast, for cora, we already get good scores with a regular GCN. See Table 8.
E.3 RUNTIME
We show the runtimes on ddi in Figure 5. Here we see that a single virtual node can have a positive impact at the same time on both prediction scores and efficiency, while the clustering takes more time.
E.4 COMPARISON OF CLUSTERING ALGORITHMS
See Table 9 and analysis in the main paper.
E.5 ADDITIONAL RUNS FOR collab
Table 10 compares several 10-run averages over collab to the 3-run averages. The numbers are stable.
F CLUSTER ANALYSIS
We computed additional statistics about our “virtual node clusters” (i.e., a cluster represents a set of nodes connected to the same virtual node). Our hypothesis was that our proposed clustering based on
the graph structure better reflects the distribution of test edges than, for example, random clustering. We report the results in Table 11. For the -RMF and -CM+ models we report two numbers. The upper one shows the average number of intra-cluster test edges over 10 runs. The numbers in the lower part distinguish the actual edges and reflect how many different test edges occur in a cluster over the 10 runs. These numbers hence represent lower and upper bounds respectively.
As expected, the numbers for -CM are in between those bounds. For ddi, we see that the -CM+ and -RMF numbers are very similar, while the ones for -CM+ are much better over collab and ppa10.
G INVESTIGATION OF NODE EMBEDDINGS
We also investigated the embeddings of the virtual nodes and compared them to the ones of the regular graph nodes, but we could not derive many conclusions. The main finding is that the virtual node embeddings are much more diverse than the mean of the embeddings in the corresponding cluster – we would have expected them to be similar.
H DETAILS ABOUT P-GNN
The model closest to our approach is the position-aware graph neural network (P-GNN) (You et al., 2019). It assigns nodes to random subsets of nodes called “anchor-sets”, and then learns a non-linear aggregation scheme that combines node feature information from each anchor-set and weighs it by the distance between the node and the anchor-set. That is, it creates a message for each node for every anchor-set, instead of for each direct neighbor.
We ran experiments with P-GNN but did not obtain conclusive results. It did not run on the larger datasets. For ddi, we considered the number of anchor nodes as a hyperparameter since the fixed choice of 64 from the experiments of (You et al., 2019) did not yield good results. However, larger numbers such as 128 or 512 resulted in very large runtimes (9 hrs / epoch). The result in Table 3 is an intermediate best value after 50 runs. We contacted the authors and they indeed mentioned that the model is not very scalable and suggested using just the anchor-set distance as additional features, instead of adopting the adapted message passing as well. We did not do this extra experiment since the SAGE +dist model, whose numbers we report, follows a similar approach. | 1. What is the focus and contribution of the paper regarding virtual nodes in graph neural networks?
2. What are the strengths of the proposed approach, particularly in its novelty and theoretical analysis?
3. What are the weaknesses of the paper, especially in terms of its motivation and experimental results?
4. Do you have any concerns regarding the significance of virtual nodes in link prediction?
5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper investigates using virtual nodes in graph neural networks for link prediction. Specifically, the authors use a graph clustering algorithm to determine groups of nodes in the graph and adopt multiple virtual nodes in the graph for the link prediction scenario. They also theoretically investigate the effect of using virtual nodes for link prediction. Experiments conducted on six datasets provide insights and guidelines about using virtual nodes for link prediction.
Review
Strengths
Unlike most of the proposed methods for GNN focusing on node classification or graph-level tasks, the idea of applying the concept of virtual nodes to link prediction is novel and interesting.
The authors show the relationship between the proposed method and the distance encoding and labeling trick, which are popular techniques for link prediction.
The authors theoretically analyze the effectiveness of virtual nodes in terms of influence scores.
Weakness of the paper:
The motivation of using virtual nodes for link prediction is not clear. The authors only argue that virtual nodes have not been studied in link prediction because the large and heterogeneous graphs in link prediction are of a very different nature, but do not clearly explain why virtual nodes are important for link prediction.
Even though the authors provide first guidelines about using virtual nodes for link prediction, experimental results are marginal or even worse compared to the results of GNNs w/o virtual nodes. Even in the ddi dataset, which showed great improvement, there is little difference from the model using distance encoding. The authors should show a more significant performance improvement or clearly show problems that using virtual nodes only addresses. |
ICLR | Title
Revisiting Virtual Nodes in Graph Neural Networks for Link Prediction
Abstract
It is well known that the graph classification performance of graph neural networks often improves by adding an artificial virtual node to the graphs, which is connected to all nodes in the graph. Intuitively, the virtual node provides a shortcut for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes on other problems is still an open research question. In this paper, we adapt the concept of virtual nodes to the link prediction scenario, where we usually have much larger, often dense, and more heterogeneous graphs. In particular, we use multiple virtual nodes per graph and graph-based clustering to determine the connections to the graph nodes. We also investigate alternative clustering approaches (e.g., random or more advanced) and compare to the original model with a single virtual node. We conducted extensive experiments over different datasets of the Open Graph Benchmark (OGB) and analyze the results in detail. We show that our virtual node extensions yield rather stable performance increases and allow standard graph neural networks to compete with complex state-of-the-art models, as well as with the models leading the OGB leaderboards.
1 INTRODUCTION
Link prediction is an important task to complete graphs that are missing edges in various domains: citation networks (Kipf & Welling, 2016), social networks (Adamic & Adar, 2003), medical drug interaction graphs (Abbas et al., 2021), or knowledge graphs (KGs) (Ji et al., 2021). Numerous kinds of models have been proposed to solve the link prediction problem, ranging from KG-specific predictors (Ji et al., 2021) to graph neural networks (GNNs) (Kipf & Welling, 2016; Zhang & Chen, 2018). Over dense biomedical networks, GNNs turned out to work especially well (Hu et al., 2020).
In this work, we focus on graph neural networks for link prediction. Many of the popular GNNs are based on the message-passing scheme, which computes node embeddings based on iteratively aggregating the features of (usually direct/one-hop) neighbor nodes along the graph edges (Gilmer et al., 2017). Interestingly, best performance is usually obtained by only considering two to three hops of neighbors (i.e., 2-3 layers in the GNN). One main reason identified for this is over-smoothing, the problem that node representations become indistinguishable when the number of layers increases (Li et al., 2018). The exponentially-growing amount of information has also been suggested as one issue connected to capturing long-range dependencies (Alon & Yahav, 2021). While it is likely that link prediction most often depends on the local node neighborhood, it is not beyond imagination that there are critical long-range dependencies (e.g., complex chains of drug-drug or drug-protein interactions). Hence, using a small number of layers to overcome the above problems results in under-reaching.
There have been several recent proposals to overcome under-reaching. On the one hand, several works propose techniques that allow for larger numbers of GNN layers (Xu et al., 2018; Wu et al., 2019; Liu et al., 2020; Chen et al., 2020; Sun et al., 2021; Zhou et al., 2020; Li et al., 2020a). However, although (Chen et al., 2020) show that over-smoothing happens particularly in dense graphs, the link prediction experiments in these works consider citation or recommendation networks, but not the especially dense biomedical ones. And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data. On the other hand, there are approaches that adapt the message-passing scheme to consider neighbors beyond the one-hop neighborhood: based on graph diffusion (Atwood & Towsley, 2016; Klicpera et al., 2019a; Abu-El-Haija et al., 2019; Xu et al., 2019a; Ma et al., 2020; Klicpera et al., 2019b) and other theories (Morris et al., 2019; You
et al., 2019). However, most of these models are relatively complex and, in fact, in our experiments over the challenging graphs from the Open Graph Benchmark (OGB) (Hu et al., 2020), several ran out of memory. Moreover, the majority has not considered link prediction, while this problem was recently shown to be more difficult than node classification (Zhang et al., 2020).
In this paper, we propose a simple but elegant solution to under-reaching based on the concept of virtual nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Virtual nodes are well known to often improve the graph classification performance of graph neural networks, where an artificial virtual node is added to every graph and connected to all nodes in the graph. While the virtual nodes were originally thought as representations of the entire graph, they also provide shortcuts for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes for the link prediction problem has not been investigated yet. The reason for this might be that the often very large and heterogeneous “network” graphs in link prediction are of very different nature and require novel/adapted solutions (e.g., a protein interaction network may easily contain millions of nodes, whereas a molecule to be classified contains usually less than fifty).
We explore application and effects of virtual nodes in link prediction theoretically and empirically:
• We propose to use multiple virtual nodes in the link prediction scenario and describe a graph-based technique to connect them to the graph nodes. Consider Figure 1. In a nutshell, we use a graph clustering algorithm to determine groups of nodes in the graph that belong together and then connect these nodes to a common virtual node. In this way, under-reaching is decreased because clustered nodes can share information easily; at the same time, the nodes are spared of unnecessary information from unrelated nodes (i.e., in contrast to the single virtual node model).
• We also investigate alternative methods to determine the virtual node connections (e.g., randomization in clustering) and compare to the original model with a single virtual node.
• We theoretically investigate the benefit of using (multiple) virtual nodes in terms of two aspects: influence score and the expressiveness in learning a structural link representation.
• We conducted extensive experiments over challenging datasets of different type, provide ablation studies that confirm the superiority of our proposed techniques, analyze the results in detail, and provide first guidelines about how to use virtual nodes with different types of data and GNNs.
• Most importantly, we show that our virtual node extensions most often yield rather stable performance increases and allow standard GNNs to compete with complex state-of-the-art models that also try to improve message passing, as well as with the models leading the OGB leaderboards.
2 RELATED WORK
We give an overview on approaches that are similar from a technical perspective; for a more detailed summary, see Appendix A. For a more general overview of the large and diverse field of link prediction, we refer to good summaries in recent works (Martínez et al., 2016; Zhang et al., 2020).
Deeper GNNs. Several techniques address over-smoothing and hence allow for constructing deeper GNNs to solve under-reaching. These models range from the simple but efficient message propagation in SGC (Wu et al., 2019; Liu et al., 2020) and APPNP (Klicpera et al., 2019a) and connections in JKNet (Xu et al., 2018), to more advanced proposals (Chen et al., 2020; Sun et al., 2021; Zhou et al., 2020; Li et al., 2020a) such as the differentiable aggregation functions in DeeperGCN (Li et al., 2020a). However, although (Chen et al., 2020) show that over-smoothing happens particularly in dense graphs, the experiments in most of these works consider citation or recommendation networks, but not the especially dense and important biomedical ones. And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data.
Beyond One-Hop Neighbors. Recently, graph diffusion methods are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood. Atwood & Towsley (2016) use k-hop random walks to extend the node features. APPNP (Klicpera et al., 2019a) applies personalized PageRank to propagate the node predictions generated by a neural network. Other models concatenate (Abu-El-Haija et al., 2019) or aggregate (Xu et al., 2019a; Ma et al., 2020) node embeddings in every layer using a diffusion-based transition matrix. The diffusion-based graph neural network (GDC) (Klicpera et al., 2019b) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion. Subsequent works use diffusion methods on multiple scales (Liao et al., 2019; Luan et al., 2019; Xhonneux et al., 2020) and attention (Wang et al., 2020). Morris et al. (2019) take higher-order graph structures at multiple scales into account during message passing based on the k-dimensional Weisfeiler and Leman graph algorithm. All the above approaches are relatively complex, many terminated with memory errors in our experiments, and few have been evaluated for link prediction.
Virtual Nodes. To the best of our knowledge, virtual nodes have only been considered in the context of graph classification so far, where a single virtual node (also called supernode) is added to the graph to be classified and connected to all graph nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Note that the original idea was to compute a graph embedding in parallel with the node embeddings and even connected the virtual node only in one direction (i.e, via edges from the graph nodes) instead of bidirectionally (Li et al., 2017).
There are some GNNs which point out special nodes that we could consider as “virtual”. Fey et al. (2020) propose a GNN for molecule graph classification which clusters certain nodes within a molecule using a structure-based, molecule-specific algorithm and then applies message passing within and between these clusters. The graph-partition based message passing from Liao et al. (2018) also used clustering, but it just divides the original messages into inter- and intra-cluster. Our approach creates new “paths” in the graph and we theoretically demonstrate its expressiveness. P-GNN (You et al., 2019) assigns nodes to random clusters (“anchor-sets”) and then creates a message for each node for every anchor-set, while ignoring the message passing from original direct neighbors. Our virtual nodes represent an alternative means to aggregate messages from multiple graph nodes which are not necessarily direct neighbors. We also explore the idea of similar random assignments in our context, but show that more elaborate techniques generally work better. Most importantly, we do not propose a specific, new GNN but a new technique for augmenting existing graph neural networks.
Although it is a well-known trick, the advantage of using virtual nodes has never been theoretically investigated nor fully understood. We focus on link prediction and considerably extend the virtual node technique. There are commonalities in the advantages of using virtual nodes for graph classification and link prediction, but their role in link prediction is to improve the representation of the link instead of the graph (nodes). We analyze theoretically and empirically how they improve GNN performance.
3 PRELIMINARIES
Link Prediction. We consider an undirected graph G = (V, E) with nodes V and edges E ⊆ V × V. Note that this basic choice is only for ease of presentation. All our techniques work for directed graphs and, with simple adaptation, also for graphs with labelled edges. We assume V to be ordered and may refer to a node by its index in V. For a node v ∈ V, N_v denotes the set of its neighbors. Given two nodes, the link prediction task is to predict whether there is a link between them.
Message-Passing Graph Neural Networks. In this paper, we usually use the term graph neural networks (GNNs) to denote GNNs that use message passing as described by Gilmer et al. (2017). These networks compute for every v ∈ V a node representation h_v^ℓ at layer ℓ ∈ [1, 2, . . . , k], by aggregating its neighbor nodes based on a generic aggregation function and then combining the obtained vector with h_v^{ℓ-1} as below; h_v^0 are the initial node features.

h_v^ℓ = COMBINE^ℓ ( h_v^{ℓ-1}, AGGREGATE^ℓ ( \{ h_u^{ℓ-1} \mid u ∈ N_v \} ) )   (1)
Link prediction with GNNs is usually done by combining (e.g., concatenating) the final representations h_u^L, h_v^L of the nodes u, v under consideration and passing them through several feed-forward layers with a final sigmoid function for scoring. We follow this approach.
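To make this scoring step concrete, the following is a minimal PyTorch sketch of such a predictor head; the hidden size, depth, and concatenation-based combination are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class LinkPredictor(nn.Module):
    """Scores a candidate link from the final GNN embeddings h_u, h_v."""

    def __init__(self, emb_dim: int, hidden_dim: int = 256, num_layers: int = 3):
        super().__init__()
        layers, in_dim = [], 2 * emb_dim  # concatenated pair embedding
        for _ in range(num_layers - 1):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        layers += [nn.Linear(in_dim, 1)]
        self.mlp = nn.Sequential(*layers)

    def forward(self, h_u: torch.Tensor, h_v: torch.Tensor) -> torch.Tensor:
        # Concatenate the two node embeddings and map them to a link probability.
        return torch.sigmoid(self.mlp(torch.cat([h_u, h_v], dim=-1))).squeeze(-1)
```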
We further use [1, n] to denote an interval [1, 2, . . . , n].
4 VIRTUAL NODES IN GRAPH NEURAL NETWORKS FOR LINK PREDICTION
So far, virtual nodes have only been used for graph classification. Link prediction scenarios are different in that the graphs are usually very large, heterogeneous, sometimes dense, and the task is to predict a relationship that might be strongly influenced by surrounding relations. In the following, we propose approaches that fit these scenarios.
4.1 MULTIPLE VIRTUAL NODES
Our main goal of using virtual nodes is to provide a shortcut for sharing information between the graph nodes. However, the amount of information in a graph with possibly millions of nodes is enormous, and likely too much to be captured in a single virtual node embedding. Further, not all information is equally relevant to all nodes. Therefore, we suggest using multiple virtual nodes S = {s1, s2, . . . , sn}1, each connected to a subset of graph nodes, as determined by an assignment σ : V → [1, n]; n is treated as a hyperparameter. We propose different methods to obtain σ:
Random (GNN-RM). Most simply, we can determine a fixed σ randomly once at initialization.
Increased Randomness (GNN-RMF). Similarly a random assignment, but re-initialized with every forward pass. In this way, a single bad assignment does not determine the overall performance.
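The two random variants can be sketched as follows; the function name and the tensor-based representation of σ are our own illustrative choices.

```python
import torch

def random_assignment(num_nodes: int, num_virtual_nodes: int) -> torch.Tensor:
    """Map every graph node to one of n virtual nodes uniformly at random."""
    return torch.randint(0, num_virtual_nodes, (num_nodes,))

# GNN-RM:  draw sigma once at initialization and keep it fixed.
# GNN-RMF: draw a fresh sigma inside every forward pass instead, e.g.
#   sigma = random_assignment(x.size(0), num_virtual_nodes)
# so that no single bad assignment dominates training.
```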
Clustering (GNN-CM). Many types of graph data incorporate a certain cluster structure (e.g., collaboration or social networks) that reflects which nodes belong closely together. We propose to connect such nodes in a cluster to a common virtual node, such that the structure inherent to the given graph is reflected in our virtual node assignment σ. More precisely, during initialization, we use a generic clustering algorithm which, given a number m, creates a set C = {C1, C2 . . . , Cm} of clusters (i.e., sets of graph nodes) by computing an assignment ρ : V → [1,m], assigning each graph node to a cluster. We then obtain σ by choosing m = n and σ = ρ.
In this work, we decided on METIS clustering (Karypis & Kumar, 1998), which turned out to provide a good trade-off between quality and efficiency. Nevertheless, our idea is generic and can be applied with arbitrary algorithms. We will show ablation experiments for alternatives (e.g., Graclus (Dhillon et al., 2007) and Diffpool (Ying et al., 2018b)).
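A possible realization of the cluster-based assignment is sketched below with the pymetis bindings; any partitioning library could be substituted, and the function signature is an assumption for illustration.

```python
import pymetis  # METIS bindings; any graph clustering library could be substituted
import torch

def cluster_assignment(edge_index: torch.Tensor, num_nodes: int,
                       num_virtual_nodes: int) -> torch.Tensor:
    """GNN-CM: sigma(v) = cluster id of v, computed once at initialization."""
    # Build a deduplicated adjacency list from a (2, num_edges) edge_index tensor.
    neighbors = [set() for _ in range(num_nodes)]
    for u, v in edge_index.t().tolist():
        if u != v:
            neighbors[u].add(v)
            neighbors[v].add(u)
    adjacency = [sorted(nbrs) for nbrs in neighbors]
    _, membership = pymetis.part_graph(num_virtual_nodes, adjacency=adjacency)
    return torch.tensor(membership, dtype=torch.long)
```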
Advanced Clustering (GNN-CM+). Not every type of graph data contains an inherent cluster structure or one that is sufficiently expressed. Furthermore, using a fixed clustering, we obtain a deterministic algorithm again, taking the risk that we completely rely on a single, possibly not ideal, virtual node assignment – there may be critical long-range dependencies that go beyond clusters. For these cases, we propose an alternative approach, which breaks up the determinism by extending the above clustering as follows. We choose a relatively large m, with m ≫ n, and apply the above clustering algorithm during initialization. Then, in each epoch, we randomly guess an assignment σ′ : [1,m] → [1, n] of clusters to virtual nodes and define σ(v) := σ′(ρ(v)). Note that this approach is inspired by Chiang et al. (2019), who apply a similar technique to create batches based on clusters. Further note that we determine σ′ with every epoch instead of every forward pass since the computation takes quite some time on large datasets and we observed that this yields good results.
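A minimal sketch of this two-level assignment follows; it assumes ρ is the fine-grained cluster assignment computed as above, and calling the function once per epoch realizes the re-randomization.

```python
import random
import torch

def advanced_assignment(rho: torch.Tensor, num_clusters: int,
                        num_virtual_nodes: int) -> torch.Tensor:
    """GNN-CM+: rho maps nodes to m fine-grained clusters (m >> n virtual nodes);
    each epoch, the clusters themselves are re-assigned to virtual nodes at random."""
    cluster_to_vn = torch.tensor(
        [random.randrange(num_virtual_nodes) for _ in range(num_clusters)],
        dtype=torch.long,
    )
    return cluster_to_vn[rho]  # sigma(v) = sigma'(rho(v))
```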
4.2 THE MODEL
We integrate the multiple virtual nodes into a generic message-passing graph neural network by extending the approach from Hu et al. (2020) to the setting with multiple virtual nodes, computing node representations h_v^ℓ for a node v ∈ V at layer ℓ as follows:
h_{s_i}^ℓ = COMBINE_VN^ℓ ( h_{s_i}^{ℓ-1}, AGGREGATE_VN^ℓ ( \{ h_u^{ℓ-1} \mid u ∈ V, σ(u) = i \} ) )   (2)

h_v^ℓ = COMBINE^ℓ ( h_v^{ℓ-1} + h_{s_{σ(v)}}^ℓ, AGGREGATE^ℓ ( \{ h_u^{ℓ-1} \mid u ∈ N_v \} ) )   (3)
Note that the highlighted adaptation of the standard GNN processing from Equation (1) is only minor – but powerful. In our implementation, COMBINE_VN^ℓ is addition combined with linear layers and layer normalization, and we use sum for AGGREGATE_VN^ℓ.
1Since notation V is standard for nodes, we use S for the set of virtual nodes. Think of “supernodes”.
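The following PyTorch sketch illustrates Equations (2) and (3) for one layer; it is a simplified stand-in (the virtual-node update uses a single linear layer with layer normalization, and the base GNN layer is assumed to follow the usual (x, edge_index) calling convention), not the exact implementation used in the experiments.

```python
import torch
import torch.nn as nn

class VirtualNodeLayer(nn.Module):
    """One message-passing layer with multiple virtual nodes (Equations (2) and (3)).

    `base_gnn_layer` is any standard convolution (e.g., a GCN/SAGE/GIN layer)
    that realizes AGGREGATE/COMBINE over the ordinary graph edges.
    """

    def __init__(self, base_gnn_layer: nn.Module, dim: int):
        super().__init__()
        self.base = base_gnn_layer
        self.update_vn = nn.Sequential(nn.Linear(dim, dim), nn.LayerNorm(dim), nn.ReLU())

    def forward(self, h, h_vn, edge_index, sigma):
        # Equation (2): each virtual node sums the nodes assigned to it, then updates.
        pooled = torch.zeros_like(h_vn).index_add_(0, sigma, h)
        h_vn = self.update_vn(h_vn + pooled)
        # Equation (3): every node first receives its virtual node's embedding,
        # then runs the ordinary neighborhood aggregation of the base GNN layer.
        h = self.base(h + h_vn[sigma], edge_index)
        return h, h_vn
```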
4.3 ANALYSIS: VIRTUAL NODES CHANGE INFLUENCE
Influence Score. Following (Xu et al., 2018; Klicpera et al., 2019a), we measure the sensitivity (also, influence) of a node x to a node y by the influence score I(x, y) = e^T (∂h_x^k / ∂h_y^0); e is a vector of all ones, and h_x^k is the embedding of x at the k-th layer, see Equations (1) and (3). For a k-layer GNN, the influence score is known to be proportional in expectation to the k-step random walk distribution from x to y:2

E[I(x, y)] ∝ P_rw(x → y, k) = \sum_{r ∈ R^k} \prod_{ℓ=1}^{k} \frac{1}{\deg(v_r^ℓ)},   (4)

where (v_r^0, v_r^1, ..., v_r^k) are the nodes in the path r from x := v_r^0 to y := v_r^k, and R^k is the set of paths of length k. In what follows, we will exploit this relationship and argue in terms of the probability P_rw.
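For intuition, the right-hand side of Equation (4) can be computed explicitly from powers of the transition matrix; the small sketch below does so and coincides with Equation (4) up to the exact choice of which node's degree enters the product at each step.

```python
import numpy as np

def k_step_random_walk(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k-step random-walk probabilities P_rw(x -> y, k) for all pairs,
    i.e., the k-th power of the row-normalized transition matrix of `adj`."""
    deg = adj.sum(axis=1, keepdims=True)
    transition = adj / np.maximum(deg, 1)  # avoid division by zero for isolated nodes
    return np.linalg.matrix_power(transition, k)
```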
Virtual Nodes. For simplicity, consider the influence score in an m-regular graph; there we have P_rw(x → y, k) = |R^k| / m^k. We hypothesize that we can come to similar conclusions in a general graph with average degree m. Consider the message passing between two distant nodes x and y. (I) In case the shortest path from x to y is of length > k, a k-layer GNN cannot capture it, and the probability P_rw(x → y, k) is obviously zero. If we then consider virtual nodes in the GNN layer (even with only one), we can pass messages from x to y through the virtual nodes and obtain a nonzero probability. (II) Consider the case where there is a shortest path of length ≤ k between x and y. By adding a virtual node s in one GNN layer, the probability changes to:

P_rw^s(x → y, k) = P_rw(x → y, k) + P_rw(x → s, s → y) = \frac{|R^k|}{(m+1)^k} + \frac{1}{(m+1)|V|}.   (5)
Compared to the original probability, we get the following impact ratio for using virtual nodes:

ir = \frac{m^k}{(m+1)^k} + \frac{m^k}{(m+1)\,|V|\,|R^k|}.   (6)

When m is large enough, ir can be approximated by ir ≃ 1 + \frac{m^{k-1}}{|V|\,|R^k|}. Here, we see that the impact of virtual nodes grows when m increases. Our experiments confirm this theoretical observation.
Multiple Virtual Nodes. In view of multiple virtual nodes, the above analysis gets even more appealing. We continue along these lines and assume there is a shortest path of length ≤ k between x and y. If x and y connect to the same virtual node s, then Equation (5) changes as follows:
P_rw^s(x → y, k) = \frac{|R^k|}{(m+1)^k} + \frac{1}{(m+1)|C_s|}.   (7)

Since the set C_s of nodes connecting to s is much smaller than V, the impact of multiple virtual nodes is greater than that of a single virtual node. On the other hand, if x and y do not connect to the same virtual node, the probability just slightly decreases from |R^k| / m^k to |R^k| / (m+1)^k.
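A quick numeric illustration of Equations (5)-(7) follows; all concrete values (degree, number of paths, graph and cluster sizes) are made up purely for illustration.

```python
# Assumed toy values: an m-regular graph with |V| nodes, |R^k| paths of length k
# between x and y, and a virtual-node cluster of size |C_s|.
m, k, num_nodes, num_paths, cluster_size = 500, 3, 1_000_000, 4, 5_000

p_plain     = num_paths / m**k                                        # no virtual node
p_single_vn = num_paths / (m + 1)**k + 1 / ((m + 1) * num_nodes)      # Equation (5)
p_multi_vn  = num_paths / (m + 1)**k + 1 / ((m + 1) * cluster_size)   # Equation (7)
impact_ratio = p_single_vn / p_plain                                  # Equation (6)

print(f"plain: {p_plain:.2e}, single VN: {p_single_vn:.2e}, "
      f"multiple VNs: {p_multi_vn:.2e}, impact ratio: {impact_ratio:.3f}")
```

With these numbers, the multiple-virtual-node probability is roughly an order of magnitude larger than the single-virtual-node one, mirroring the discussion above.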
In Appendix B, we further show that using multiple virtual nodes is related to (but not equal to) the labeling trick (Zhang et al., 2020) and distance encoding (Li et al., 2020b), and it can theoretically improve the expressiveness in learning structural link representations (see Theorem 1 and Figure 4(b)).
5 EVALUATION
We conducted extensive experiments and ablation studies to empirically investigate:
• How does the existing approach with one virtual node perform in link prediction?
• Do multiple virtual nodes improve performance, how do our proposed approaches compare?
• In particular, are approaches based on the graph structure better?
• How exactly do virtual nodes support link prediction? When do they help particularly?
2See Theorem 1 in (Xu et al., 2018). Note that the theorem makes some simplifying assumptions (e.g., on the shape of GNN).
Table 2: Results for GNNs with different virtual node types/amounts; (second) best results are (light) gray, overall best bold, second best underlined.
Model | ddi (Hits@20) | ppa10 (Hits@100) | collab (Hits@50) | pubmed (Hits@20)
GCN | 0.4076 ± 0.1073 | 0.1313 ± 0.0084 | 0.4955 ± 0.0064 | 0.9675 ± 0.0143
- VN | 0.6217 ± 0.1241 | 0.1258 ± 0.0082 | 0.5049 ± 0.0088 | 0.9579 ± 0.0214
- RM | 0.5532 ± 0.1262 | 0.1205 ± 0.0059 | 0.5083 ± 0.0109 | 0.9522 ± 0.0110
- RMF | 0.5830 ± 0.0855 | 0.1116 ± 0.0094 | 0.5046 ± 0.0049 | 0.8100 ± 0.0781
- CM | 0.6105 ± 0.1563 | 0.1299 ± 0.0050 | 0.5181 ± 0.0076 | 0.9575 ± 0.0230
- CM+ | 0.6033 ± 0.1759 | 0.1399 ± 0.0071 | 0.5128 ± 0.0129 | 0.9189 ± 0.0514
SAGE | 0.6173 ± 0.1068 | 0.1024 ± 0.0050 | 0.5662 ± 0.0149 | 0.9779 ± 0.0105
- VN | 0.6491 ± 0.1360 | 0.0853 ± 0.0154 | 0.5875 ± 0.0091 | 0.9659 ± 0.0333
- RM | 0.7068 ± 0.1174 | 0.1131 ± 0.0039 | 0.5830 ± 0.0087 | 0.9433 ± 0.0208
- RMF | 0.7564 ± 0.1055 | 0.1105 ± 0.0023 | 0.6067 ± 0.0063 | 0.9800 ± 0.0087
- CM | 0.7621 ± 0.1157 | 0.1077 ± 0.0150 | 0.6056 ± 0.0105 | 0.9834 ± 0.0068
- CM+ | 0.8251 ± 0.0678 | 0.0963 ± 0.0099 | 0.5940 ± 0.0262 | 0.9754 ± 0.0139
GIN | 0.4321 ± 0.1353 | 0.1139 ± 0.0058 | 0.5768 ± 0.0179 | 0.9234 ± 0.0166
- VN | 0.5260 ± 0.1227 | 0.1316 ± 0.0049 | 0.5863 ± 0.0254 | 0.9790 ± 0.0070
- RM | 0.5084 ± 0.1324 | 0.1337 ± 0.0045 | 0.5412 ± 0.0174 | 0.9604 ± 0.0158
- RMF | 0.5310 ± 0.1453 | 0.1269 ± 0.0026 | 0.5335 ± 0.0087 | 0.7986 ± 0.0993
- CM | 0.5664 ± 0.0860 | 0.1349 ± 0.0034 | 0.5821 ± 0.0081 | 0.9125 ± 0.0378
- CM+ | 0.4339 ± 0.1855 | 0.1591 ± 0.0069 | 0.5557 ± 0.0026 | 0.9037 ± 0.0262
Datasets. We focused on challenging data from the OGB: ddi, a drug-drug interaction network; ppa10, a subset of the protein-protein association network ppa containing only 10% of the train edges (but full valid/test); and collab, an author collaboration network. To learn more about smaller data of similar type, we also tested on the citation network pubmed (Yang et al., 2016). Since the datasets are not only very different in type but also in various other critical graph parameters and this is reflected in the performance of the models, we show relevant statistics in Table 1.3 The datasets vary strongly in size, with ddi being the smallest among the biomedical ones; on the other hand, ddi is very dense. The clustering coefficient intuitively reflects the “cliquishness” of the graph’s subgraphs. The large diameters suggest that the data suits testing under-reaching. Appendix C gives further details and describes the datasets we consider in additional experiments in the appendix.
Baselines. For a competitive comparison, we considered important baselines (described in Section 2):
• The deep GNNs SGC, APPNP, DeeperGCN, and two variants of JKNet.
• Approaches extending message passing: P-GNN, APPNP, GCN-GDC, SAGE-GDC, GIN-GDC.
• The popular GNNs GCN (Kipf & Welling, 2017), SAGE (Hamilton et al., 2017), and GIN (Xu et al., 2019b), which we then extend with (multiple) virtual nodes.
3See Tables 2 and 3 in (Hu et al., 2020). We computed the numbers for ppa10 (which we focus on due to a lack of resources), and pubmed using the same techniques.
5.1 RESULTS
Overall Impact of Virtual Nodes, Tables 2, 7, 8 (Appendix). We compare to GCN, SAGE, and GIN. The common approach of using a single virtual node (GNN-VN) yields good improvements over ddi, slight improvements over collab, but no definitive ones over ppa10; over pubmed, it works very well for GIN. The numbers for GNN-RM and GNN-RMF reflect the randomness of their connections to the virtual nodes, there is no clear trend. Nevertheless, they clearly outperform the original models, with only few exceptions. The increased randomness by re-assigning the virtual nodes with every forward pass (GNN-RMF ) seemingly suits SAGE but not the others. As expected, over the small pubmed/cora, which also have no cluster structure, the results are not consistent or convincing overall; virtual nodes only yield improvement sometimes, and none for GCN. Yet, on the more challenging datasets, multiple virtual nodes turn out to be an efficient means to boost the link prediction performance of GNNs if they are applied correctly. Our virtual node connections based on the graph structure (GNN-CM) yield consistently good improvements over ddi and collab, and mostly help on the challenging ppa10 dataset. On collab, we did further experiments using GAT (Veličković et al., 2017) and also observe a clear performance gain: 0.4745 vs. 0.5876 (GAT-CM). GNN-CM and GNN-CM+ are not always the best ones, but yield reliably good results, in contrast to the other models with virtual nodes (see variability of gray shades). Interestingly, the advanced clustering yields especially good performance over ppa10/ppa, while its results on the other datasets are not convincing. Generally, the improvements of the virtual node models are strongest on ddi. For an in-depth result analysis see Section 5.2.
Comparison to Related Works and SOTA, Table 3. Most deep GNNs as well as the models that use complex message-passing techniques perform disappointingly and, overall, much worse than the standard GNNs. We did thorough hyperparameter tuning for these models, and the poor results are hard to explain. However, most of the original evaluations focus on node or graph classification and consider very different types of data – often the standard citation networks (Lu & Getoor, 2003) – and, in fact, on collab we see the best numbers. For a more detailed discussion of P-GNN see Appendix H. Even if we assume that these numbers can be improved, the models do not seem apt for link prediction; in particular, the complex ones: many do not run at all on realistic link prediction data but yield memory errors. Further, our virtual node extensions make standard GNNs competitive with the models on the leaderboard. In particular, their performance is much more stable. The results of the best models from the leaderboard vary strongly with the different datasets, or have not been reported at all. None of these models can be called “good” overall, given the numbers in the rest of the table (which are sometimes even missing); in fact, SEAL and Adamic Adar perform rather badly on the very dense ddi.
Impact of Virtual Nodes on Number of GNN Layers and Efficiency, Figure 2. For the virtual node models, the scores keep increasing over a larger number of layers, while GCN drops earlier. On ddi, GCN-VN and -CM reach their best scores at 6 and 8 layers, respectively, which is remarkable for that very dense dataset, which is prone to over-smoothing. On collab it is the other way around. The figure also gives an idea of the runtime increase when using virtual nodes. It compares the 6-layer models, and shows the 4-layer GCN-CM, which obtains performance similar to the 6-layer GCN-VN.
Impact of Virtual Node Number, Figure 3. First, consider the configurations of the best models for the overall results in Table 2, which are provided in Table 6 in the appendix. Here, we see that the chosen numbers of virtual nodes are indeed random for the “random” models, but GNN-CM consistently uses a high number of virtual nodes, which also suits it better according to our theoretical analysis in Section 4.3. In line with this, the more detailed analysis varying the numbers of virtual nodes yields best results (also in terms of standard deviations) for SAGE-CM at rather high values. For GCN, we do not see a clear trend, but (second) best performance with 64 virtual nodes. Note that there is a trade-off between the number of virtual nodes and intra-cluster test edges, discussed in Section 5.2.
Using Virtual Nodes Only at the Last GNN Layer, Table 4. Alon & Yahav (2021) show that using a fully connected adjacency matrix at the last layer of a standard GNN helps to better capture information over long ranges. We therefore investigated if it is a better architectural choice to use virtual nodes only at the last layer. However, we see that this can lead to extreme performance drops.
Impact of Clustering Algorithm, Table 9 (Appendix). Our architecture is generic in the clustering algorithm, and we investigated the effects of varying that. Graclus is similar in nature to METIS in that it also creates partitions based on the adjacency matrix, but it took much longer to run. Diffpool considers the node features and yields improvements for GCN, but does not scale to larger datasets. Over ddi, there is no clear winner and, given its efficiency, METIS turns out to be a good solution.
5.2 DISCUSSION AND CONCLUSIONS
The results show that our approach with multiple virtual nodes based on graph-based clustering yields performance increases for various GNNs and types of data, but there are clear differences.
Dense Graphs with Medium/High Clustering Coefficient. Over ddi, we see strongest improvements for all virtual-node models. This can be explained by our proposed theory, showing that a very large node degree m increases the impact of the virtual node(s), and thus decreases the negative impact of the (too) many other neighbors (see Equation (6)). Furthermore, the empirical results confirm our proposed theory regarding multiple virtual nodes (see Equation (7)). We see particularly good numbers for GNN-CM, which exploits the clustering inherent in the given graph. GNN-CM+, which considers this given clustering only on a lower level, is shown to perform worse than GNN-CM overall. In fact, we computed the percentage of test edges that occur in the “virtual node cluster” (see Table 11 in the appendix) and it shows that the numbers for the advanced clustering are very similar to the random one, meaning the randomly merged smaller clusters break the data’s structure too much. Interestingly, the experiments show that, even with the dense data that is prone to over-smoothing, virtual nodes make the GNNs score higher with more than the standard 2-3 layers; hence virtual nodes seem to alleviate over-smoothing to some extent, an interesting question for future work.
Graphs with Large Problem Radius and Low Clustering Coefficient. Over ppa10, all GNNs use an unusually high number of layers, which hints at a large problem radius (e.g., GCN, which performs especially well, uses 7 layers). Given the very low clustering of the data in addition, ppa10 represents a special challenge. With the multiple virtual nodes, GNN-CM performs again better than GNN-VN. On the other hand, it does not perform much better than the random models on data without cluster structure. This can be explained by its choice of number of virtual nodes, which is consistently high, but then there are fewer test edges within a virtual node cluster (see appendix Table 11). We hence see here that the positive effect of having many virtual nodes (recall Equation (7)) cancels out the benefits of clustering. Our advanced clustering, which merges some local clustering with randomness, is able to achieve best results with GCN and GIN (with SAGE, all models perform rather badly over ppa10). This can be explained by the fact that it randomly merges some local clusters – with each epoch anew – and hence allows more messages to pass across “virtual node clusters”. We also did some experiments over the very large ppa, which is denser than ppa10, and see a similar trend.
Sparse Graphs with Low to High Clustering Coefficient. We tested on three citation/collaboration networks of different sizes. Note that, over this data, the problem radius is usually assumed to be rather small (Alon & Yahav, 2021), although the graph diameters are large. We investigated virtual nodes to augment link prediction in large and complex graphs; but we also want to provide insight into the behavior on smaller data. Over pubmed (similarly on cora as shown in the appendix), virtual nodes do not provide any improvement for GCN. For GIN, a single virtual node yields good increases; overall, it usually outperforms the settings with multiple virtual nodes. We hypothesize that this is mainly due to the small graph size and sparsity. In fact, on the larger and denser collab, GNN-CM performs very well for all GNNs. The trends in the models’ performance and the corresponding explanations are similar to those for ddi but much less pronounced, probably due to the much smaller node degrees. Yet, the performance is much more stable, possibly because collab is larger and not as dense.
Conclusions. We summarize our main findings to provide first guidelines for applying virtual nodes:
• Small + Sparse Graphs: A single virtual node is likely to boost performance of GIN, and virtual nodes should help with SAGE, but probably not with GCN.
• Large + Sparse Graphs: If there is cluster structure, GNN-CM should yield stable performance increases. If the problem radius is large or there is little cluster structure, GNN-CM+ is worth a try.
• Dense Graphs + Clustering: Multiple virtual nodes (i.e., GNN-CM) likely increase performance.
6 CONCLUSIONS
We propose a simple but elegant graph neural network extension using multiple virtual nodes that may considerably increase link prediction performance. We also advance research by providing theoretical justifications - the very first about applying virtual nodes at all - and by showing their positive impact in various experiments. Future work includes the design of more advanced and scalable architectures, and it would be interesting to further investigate the huge performance increases on dense graphs.
A ADDITIONAL DETAILS ON RELATED WORKS
Deeper GNNs. We mention simpler approaches in the Section 2. More advanced proposals are, for example, based on special features and connections (Chen et al., 2020), community-based normalization of node representations using random clustering (Zhou et al., 2020), boosting techniques (Sun et al., 2021), or differentiable aggregation functions in DeeperGCN (Li et al., 2020a).
Beyond One-Hop Neighbors. Graph diffusion methods (i.e., in graph theory, techniques for spreading information between nodes) are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood. (Atwood & Towsley, 2016) use k-hop random walks to aggregate node features and extend the latter by the aggregated ones. APPNP (Klicpera et al., 2019a) applies personalized PageRank to propagate the node predictions generated by a neural network. Other models aggregate node embeddings in every layer, GraphHeat (Xu et al., 2019a) using the heat kernel, PAN (Ma et al., 2020) the transition matrix of maximal entropy random walks, and PinSage (Ying et al., 2018a) using random walks. (Abu-El-Haija et al., 2019) propose to concatenate embeddings aggregated using the transition matrices of k-hop random walks before applying one-hop neighbor aggregation. The diffusion-based graph neural network (GDC) (Klicpera et al., 2019b) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion. Subsequent works use diffusion methods on multiple scales (Liao et al., 2019; Luan et al., 2019; Xhonneux et al., 2020). Recently, (Wang et al., 2020) integrated attention with diffusion-based message propagation.
Position Encodings. Our approach provides a kind of positional embedding (Srinivasan & Ribeiro, 2019) and hence has some commonalities with models extending nodes with positional encodings, e.g., (Li et al.).
B ADDITIONAL THEORETICAL RESULTS: STRUCTURAL LINK REPRESENTATION
Adding structure-related features such as a distance encoding (Li et al., 2020b) has been demonstrated to make graph representation learning more powerful. For link prediction, (Zhang et al., 2020) propose the labeling trick extending distance encoding and making GNNs learn better link representations.
We first recall the definitions from (Zhang et al., 2020) introducing the concept of the labeling trick. Consider an undirected graph G as described in Section 3. In addition, the tensor A ∈ R^{n×n×k} contains all node and edge features (if available). The diagonal components A_{v,v,:} denote the node features, while the off-diagonal components A_{u,v,:} denote the edge features of edge (u, v). The labeling trick uses a target node set S ⊆ V and a labeling function to label all nodes in the node set V and stack the labels with A. A valid labeling trick must meet two conditions: (1) the nodes in S have different labels from the rest of the nodes, (2) the labeling function must be permutation invariant.
Let us recall our method using multiple virtual nodes. Assume we have multiple virtual nodes S = {s_1, ..., s_m}. ∀u ∈ V, we have the additional node features l(u|S) = (h(s_1), ..., h(s_m))^T (γ(u|s_1), ..., γ(u|s_m)), where γ(u|s_i) = 1 if u is connected to the virtual node s_i, and γ(u|s_i) = 0 otherwise. h(s_i) is the node representation of virtual node s_i, and is initialized by one-hot vectors so that each virtual node has different labels.
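In code, these additional features amount to a simple lookup; the sketch below assumes σ is stored as an index tensor and the virtual node representations are stacked row-wise (one-hot at initialization).

```python
import torch

def virtual_node_label_features(sigma: torch.Tensor, h_vn: torch.Tensor) -> torch.Tensor:
    """Extra per-node features l(u|S): each node receives the representation of the
    virtual node it is connected to (h_vn holds one row per virtual node)."""
    return h_vn[sigma]  # equivalent to applying the indicator vector gamma(u|.) to (h(s_1),...,h(s_m))

# Example: m virtual nodes with one-hot initial labels.
# h_vn = torch.eye(num_virtual_nodes); features = virtual_node_label_features(sigma, h_vn)
```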
Our labeling strategy is not a valid labeling trick by the definition of (Zhang et al., 2020). First, S is not a subset of V, and we use addition instead of concatenation. Even if we extend V to V ∪ S, our labeling strategy still does not fit the permutation-invariant requirement. Nevertheless, it can achieve similar effects in learning structural link representations.

Theorem 1. In any non-attributed graph with n nodes, if the degree of each node in the graph is between 1 and O(log^{(1−ε)/(2h)}(n)) for any constant ε > 0, then, given m virtual nodes which evenly divide the node set into m clusters, there exist ω((m−1)^2 (n^ε/m − 1)^3) many pairs of non-isomorphic links (u,w), (v,w), such that an h-layer 1-WL-GNN (see definitions in (Li et al., 2020b) and (Zhang et al., 2020); one well-known example is GIN (Xu et al., 2019b)) gives u, v the same representation, while using m virtual nodes can give u, v different representations.
Proof. The proof can be separated into two steps. The first step is to prove that there exist n/o(n^{1−ε}) = ω(n^ε) many nodes that are locally h-isomorphic. This step is the same as the proof of Theorem 2 in (Zhang et al., 2020), so we omit the details here. After getting these locally isomorphic nodes, we denote the set of these nodes as V_iso. The second step is to find the non-isomorphic links.
Step 2. Let us partition V_iso = ∪_{i=1}^{m} V_i, where V_i is the subset of nodes connected to virtual node s_i. For simplicity, we call each V_i a cluster, and the sizes of different clusters are assumed to be the same, |V_i| = |V_iso|/m. Consider two nodes u ∈ V_i and v ∈ V_j from different clusters. Since both of them are in V_iso, they have identical h-hop neighborhood structures, and an h-layer 1-WL-GNN will give them the same representations. If we then select another node w in V_i, the h-layer 1-WL-GNN will also make (u,w) and (v,w) have the same representation.
However, if we use virtual nodes to label nodes and give them additional features, then, because u, w are in the same cluster while v, w belong to different clusters, (u,w) will have a different representation from (v,w). Now let us count the number Y of such non-isomorphic link pairs; we have:
Y ≥ \sum_{i,j=1,\, j \neq i}^{m} |V_i|\,(|V_i| − 1)\,|V_j| = \frac{1}{2}\, m(m−1) \left( \left(\frac{|V_{iso}|}{m} − 1\right) \left(\frac{|V_{iso}|}{m}\right)^2 \right)
Taking |V_iso| = ω(n^ε) into the above inequality, we get
Y ≥ \frac{1}{2}\, m(m−1)\, ω\left( \left(\frac{n^ε}{m} − 1\right)^3 \right) = ω\left( (m−1)^2 \left(\frac{n^ε}{m} − 1\right)^3 \right)

Example (Power of Using Multiple Virtual Nodes). In Figure 4, we show two cases with and without virtual nodes. Consider the nodes v2, v3 with the same local structure, which means they get the same node representations when using a 1-WL-GNN. So we cannot discriminate the links (v1, v2) and (v1, v3) if we just use a 1-WL-GNN and concatenate the node representations for link prediction. However, if we add 2 virtual nodes and add extra features to each node, v1 and v2 get the new feature (1, 0), while v3 gets the new feature (0, 1). So it is easy to see that (v1, v2) and (v1, v3) now have different representations.
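The example can be replayed in a few lines of code; the base node embeddings are omitted since they coincide for v2 and v3, so only the virtual-node labels matter.

```python
import numpy as np

# Toy version of the example: v1 and v2 are assigned to virtual node 0, v3 to virtual node 1.
sigma = {"v1": 0, "v2": 0, "v3": 1}
vn_labels = np.eye(2)  # one-hot labels (1, 0) and (0, 1) of the two virtual nodes

def link_feature(u, v):
    # Append the virtual-node label to each node and concatenate the pair;
    # the (identical) base embeddings of v2 and v3 are omitted for brevity.
    return np.concatenate([vn_labels[sigma[u]], vn_labels[sigma[v]]])

print(link_feature("v1", "v2"))  # [1. 0. 1. 0.]
print(link_feature("v1", "v3"))  # [1. 0. 0. 1.] -> the two links are now distinguishable
```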
C ADDITIONAL DETAILS ON THE DATA
See Table 5 for the datasets we consider additionally in the appendix.
D MODEL CONFIGURATIONS AND TRAINING
We trained all models for 80 runs using the Bayesian optimization provided by wandb4 and the following hyperparameters.
hidden dimension: 32, 64, 128, 256
learning rate: 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001
dropout: 0, 0.3, 0.6
# of layers: 1-7
# of virtual nodes (random): 1-10
# of virtual nodes: 1, 2, 4, 8, 16, 32, 64
SGC - K: 2-7
APPNP - α: 0.05, 0.1, 0.2, 0.3
GNN-GDC - k: 64, 128
GNN-GDC - α: 0.05, 0.1, 0.2, 0.3
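As an illustration, such a Bayesian sweep could be declared with the wandb API roughly as follows; the metric name, project name, and the train() entry point are placeholders, and the parameter lists only partially mirror the grid above.

```python
import wandb

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_hits", "goal": "maximize"},  # placeholder metric name
    "parameters": {
        "hidden_dim": {"values": [32, 64, 128, 256]},
        "lr": {"values": [0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001]},
        "dropout": {"values": [0, 0.3, 0.6]},
        "num_layers": {"values": list(range(1, 8))},
        "num_virtual_nodes": {"values": [1, 2, 4, 8, 16, 32, 64]},
    },
}

sweep_id = wandb.sweep(sweep=sweep_config, project="vn-link-prediction")  # hypothetical project name
# wandb.agent(sweep_id, function=train, count=80)  # train() is the user-supplied training routine
```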
Please note that we considered the wide ranges of values only in order to find a good general setting. For practical usage, a hidden dimension of 256, learning rate of 0.0001, and dropout of 0.3 should work well; only on the small graphs a dropout of 0 might work better. As usual, the number of layers depends on the type of data; however, note that the virtual nodes make it possible to use more than the usual 2-3 layers. Generally, higher numbers of virtual nodes work better, in line with our theoretical results.
Also note that we used fewer virtual nodes in the selection for the models (-RM, -RMF) since especially -RMF was very slow and preliminary results showed that larger numbers did not change the results greatly – probably due to the randomness. We used at most 64 virtual nodes due to memory issues with larger numbers (e.g., 128), especially on the larger datasets. We report the specific numbers of GNN layers and virtual nodes used by the trained models from Tables 2, 8, and 3 in Table 6. For the first clustering in GNN-CM+, we created 150 clusters on cora and pubmed, 200 clusters on ddi and collab, and 1000 on ppa10.
We tuned all models for 80 runs, and thereafter ran the models with the best 3 configurations for 3 runs and chose the best of these model as the final model (configuration).
We trained as suggested by the OGB (e.g., the splits, negative sampling) but used a batch size of 212 and sometimes adapted the number of runs due to lack of resources; we used 3 for the experiments on collab and ppa10 in Table 2. However, we ran several of our models for 10 runs as required for results on the OGB leaderboards and the numbers are comparable (see Table 10).
4https://wandb.ai/site
We used 500 epochs with a patience of 30. Furthermore, for collab, we used the validation edges during testing (OGB contains both settings, with and without them).
E ADDITIONAL EXPERIMENTAL RESULTS
E.1 RESULTS ON ppa
The ppa dataset is challenging in both its size and density. Since we were missing the resources to run experiments for all baselines on this dataset, we compare our best models (trained only on ppa10, we did not do additional hyperparameter tuning) to the OGB leaderboard in Table 7. For GCN, we see that our virtual node approach is able to improve the results considerably – even if only trained on 10% on the data.
E.2 RESULTS ON cora
We also ran the models on the small cora data, yet the results confirm our expectation that virtual nodes for link prediction are most useful in challenging graphs. In contrast, for cora, we already get good scores with a regular GCN. See Table 8.
E.3 RUNTIME
We show the runtimes on ddi in Figure 5. Here we see that a single virtual node can have a positive impact at the same time on both prediction scores and efficiency, while the clustering takes more time.
E.4 COMPARISON OF CLUSTERING ALGORITHMS
See Table 9 and analysis in the main paper.
E.5 ADDITIONAL RUNS FOR collab
Table 10 compares several 10-run averages over collab to the 3-run averages. The numbers are stable.
F CLUSTER ANALYSIS
We computed additional statistics about our “virtual node clusters” (i.e., a cluster represents a set of nodes connected to the same virtual node). Our hypothesis was that our proposed clustering based on
the graph structure better reflects the distribution of test edges than, for example, random clustering. We report the results in Table 11. For the -RMF and -CM+ models we report two numbers. The upper one shows the average number of intra-cluster test edges over 10 runs. The numbers in the lower part distinguish the actual edges and reflect how many different test edges occur in a cluster over the 10 runs. These numbers hence represent lower and upper bounds respectively.
As expected, the numbers for -CM are in between those bounds. For ddi, we see that the -CM+ and -RMF numbers are very similar, while the ones for -CM+ are much better over collab and ppa10.
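The intra-cluster statistics can be computed with a few lines; the sketch below assumes test edges are given as a (2, num_edges) index tensor and σ maps each node to its virtual node.

```python
import torch

def intra_cluster_edge_fraction(test_edges: torch.Tensor, sigma: torch.Tensor) -> float:
    """Fraction of test edges whose two endpoints share the same virtual node.

    test_edges: long tensor of shape (2, num_test_edges); sigma: node -> virtual node id.
    """
    same = sigma[test_edges[0]] == sigma[test_edges[1]]
    return same.float().mean().item()
```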
G INVESTIGATION OF NODE EMBEDDINGS
We also investigated the embeddings of the virtual nodes and compared them to the ones of the regular graph nodes, but we could not derive many conclusions. The main finding is that the virtual node embeddings are much more diverse than the mean of the embeddings in the corresponding cluster – we would have expected them to be similar.
H DETAILS ABOUT P-GNN
The model closest to our approach is the position-aware graph neural network (P-GNN) (You et al., 2019). It assigns nodes to random subsets of nodes called “anchor-sets”, and then learns a non-linear aggregation scheme that combines node feature information from each anchor-set and weighs it by the distance between the node and the anchor-set. That is, it creates a message for each node for every anchor-set, instead of for each direct neighbor.
We ran experiments with P-GNN but did not obtain conclusive results. It did not run on the larger datasets. For ddi, we considered the number of anchor nodes as hyperparameter since the fixed choice of 64 from the experiments of (You et al., 2019) did not yield good results. However, larger numbers such as 128 or 512 resulted in very large runtimes (9 hrs / epoch). The result in Table 3 is an intermediate best value after 50 runs. We contacted the authors and they indeed mentioned that the model is not very scalable and suggested to use just the anchor-set distance as additional features, instead of overtaking the adapted message passing as well. We did not do this extra experiment since the SAGE +dist model, whose numbers we report, follows a similar approach. | 1. What is the focus and contribution of the paper regarding virtual nodes in graph learning?
2. What are the strengths of the proposed approach, particularly in terms of its motivation and empirical analysis?
3. What are the weaknesses and concerns of the paper, especially regarding its theoretical limitations and inconsistent results?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any suggestions or recommendations for improving the paper, such as providing a table to explain the effectiveness of virtual nodes in different scenarios? | Summary Of The Paper
Review | Summary Of The Paper
The authors revisited the commonly used trick of virtual nodes in graph learning. The authors proposed the multiple virtual nodes usage under the link prediction scenario and provided both theoretical and empirical supports for it. For theoretical analysis, the authors consider the influence score for m-regular graph and expressiveness of link representation (by concatenating representation of two nodes) in a special case and non-attributed graphs. For empirical analysis, the authors compare the performance of multiple virtual nodes setting to only one node setting with different GNN strategies and different datasets. They finally conclude that the virtual nodes can stably improve base GNN performance on some challenging link prediction tasks.
Review
Strengths:
1. By the authors' claim, this is the first work to employ virtual nodes to improve link prediction tasks.
2. Virtual nodes are well motivated to capture long-distance / under-reaching messages between nodes. The authors provide both theoretical and empirical analysis for the virtual node setting.
Weaknesses/concerns:
1. The theoretical analysis is limited to regular graphs for the influence score and non-attributed graphs for the expressiveness of link representations. Could they be generalized to more applicable graphs?
2. The authors concatenate node representations as the link representation. In this way, the expressiveness of the link representation is highly related to the expressiveness of the node representations. Therefore, it seems that powerful GNNs for node representation or node classification can be directly used for link representation or link prediction. But it seems that P-GNN conflicts with this claim, as the performance of P-GNN is really bad, though the authors mention some concerns about it in the supplements.
3. As the experimental results show, virtual nodes do not always benefit link prediction, such as on Cora and Pubmed. Although the authors give some analysis, readers may still be confused about in what situations virtual nodes are recommended and vice versa. I would appreciate it if the authors could give a table to further explain this, and especially clarify ambiguous expressions in the article. For example, what "cora/pubmed have no cluster structure" means, when both Cora and Pubmed have clearly defined classes and previous works have shown that their data points have underlying clusters.
4. The proposed method seems to rely heavily on sophisticated hyperparameter searching (as shown in Appendix D).
ICLR | Title
Revisiting Virtual Nodes in Graph Neural Networks for Link Prediction
Abstract
It is well known that the graph classification performance of graph neural networks often improves by adding an artificial virtual node to the graphs, which is connected to all nodes in the graph. Intuitively, the virtual node provides a shortcut for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes with other problems is still an open research question. In this paper, we adapt the concept of virtual nodes to the link prediction scenario, where we usually have much larger, often dense, and more heterogeneous graphs. In particular, we use multiple virtual nodes per graph and graph-based clustering to determine the connections to the graph nodes. We also investigate alternative clustering approaches (e.g., random or more advanced) and compare to the original model with a single virtual node. We conducted extensive experiments over different datasets of the Open Graph Benchmark (OGB) and analyze the results in detail. We show that our virtual node extensions yield rather stable performance increases and allow standard graph neural networks to compete with complex state-of-the-art models, as well as with the models leading the OGB leaderboards.
1 INTRODUCTION
Link prediction is an important task to complete graphs that are missing edges in various domains: citation networks (Kipf & Welling, 2016), social networks (Adamic & Adar, 2003), medical drug interaction graphs (Abbas et al., 2021), or knowledge graphs (KGs) (Ji et al., 2021). Numerous kinds of models have been proposed to solve the link prediction problem, ranging from KG-specific predictors (Ji et al., 2021) to graph neural networks (GNNs) (Kipf & Welling, 2016; Zhang & Chen, 2018). Over dense biomedical networks, GNNs turned out to work especially well (Hu et al., 2020).
In this work, we focus on graph neural networks for link prediction. Many of the popular GNNs are based on the message-passing scheme, which computes node embeddings based on iteratively aggregating the features of (usually direct/one-hop) neighbor nodes along the graph edges (Gilmer et al., 2017). Interestingly, best performance is usually obtained by only considering two to three hops of neighbors (i.e., 2-3 layers in the GNN). One main reason identified for this is over-smoothing, the problem that node representations become indistinguishable when the number of layers increases (Li et al., 2018). The exponentially-growing amount of information has also been suggested as one issue connected to capturing long-range dependencies (Alon & Yahav, 2021). While it is likely that link prediction most often depends on the local node neighborhood, it is not beyond imagination that there are critical long-range dependencies (e.g., complex chains of drug-drug or drug-protein interactions). Hence, using a small number of layers to overcome the above problems results in under-reaching.
There have been several recent proposals to overcome under-reaching. On the one hand, several works propose techniques that allow for larger numbers of GNN layers (Xu et al., 2018; Wu et al., 2019; Liu et al., 2020; Chen et al., 2020; Sun et al., 2021; Zhou et al., 2020; Li et al., 2020a). However, although (Chen et al., 2020) show that over-smoothing happens particularly in dense graphs, the link prediction experiments in these works consider citation or recommendation networks, but not the especially dense biomedical ones. And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data. On the other hand, there are approaches that adapt the message-passing scheme to consider neighbors beyond the one-hop neighborhood: based on graph diffusion (Atwood & Towsley, 2016; Klicpera et al., 2019a; Abu-El-Haija et al., 2019; Xu et al., 2019a; Ma et al., 2020; Klicpera et al., 2019b) and other theories (Morris et al., 2019; You
et al., 2019). However, most of these models are relatively complex and, in fact, in our experiments over the challenging graphs from the Open Graph Benchmark (OGB) (Hu et al., 2020), several ran out of memory. Moreover, the majority has not considered link prediction, while this problem was recently shown to be more difficult than node classification (Zhang et al., 2020).
In this paper, we propose a simple but elegant solution to under-reaching based on the concept of virtual nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Virtual nodes are well known to often improve the graph classification performance of graph neural networks, where an artificial virtual node is added to every graph and connected to all nodes in the graph. While the virtual nodes were originally thought as representations of the entire graph, they also provide shortcuts for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes for the link prediction problem has not been investigated yet. The reason for this might be that the often very large and heterogeneous “network” graphs in link prediction are of very different nature and require novel/adapted solutions (e.g., a protein interaction network may easily contain millions of nodes, whereas a molecule to be classified contains usually less than fifty).
We explore application and effects of virtual nodes in link prediction theoretically and empirically:
• We propose to use multiple virtual nodes in the link prediction scenario and describe a graph-based technique to connect them to the graph nodes. Consider Figure 1. In a nutshell, we use a graph clustering algorithm to determine groups of nodes in the graph that belong together and then connect these nodes to a common virtual node. In this way, under-reaching is decreased because clustered nodes can share information easily; at the same time, the nodes are spared of unnecessary information from unrelated nodes (i.e., in contrast to the single virtual node model).
• We also investigate alternative methods to determine the virtual node connections (e.g., randomization in clustering) and compare to the original model with a single virtual node.
• We theoretically investigate the benefit of using (multiple) virtual nodes in terms of two aspects: influence score and the expressiveness in learning a structural link representation.
• We conducted extensive experiments over challenging datasets of different type, provide ablation studies that confirm the superiority of our proposed techniques, analyze the results in detail, and provide first guidelines about how to use virtual nodes with different types of data and GNNs.
• Most importantly, we show that our virtual node extensions most often yield rather stable performance increases and allow standard GNNs to compete with complex state-of-the-art models that also try to improve message passing, as well as with the models leading the OGB leaderboards.
2 RELATED WORK
We give an overview on approaches that are similar from a technical perspective; for a more detailed summary, see Appendix A. For a more general overview of the large and diverse field of link prediction, we refer to good summaries in recent works (Martínez et al., 2016; Zhang et al., 2020).
Deeper GNNs. Several techniques address over-smoothing and hence allow for constructing deeper GNNs to solve under-reaching. These models range from the simple but efficient message propagation in SGC (Wu et al., 2019; Liu et al., 2020) and APPNP (Klicpera et al., 2019a) and connections in JKNet (Xu et al., 2018), to more advanced proposals (Chen et al., 2020; Sun et al., 2021; Zhou et al., 2020; Li et al., 2020a) such as the differentiable aggregation functions in DeeperGCN (Li et al., 2020a). However, although (Chen et al., 2020) show that over-smoothing happens particularly in dense graphs, the experiments in most of these works consider citation or recommendation networks, but not the especially dense and important biomedical ones. And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data.
Beyond One-Hop Neighbors. Recently, graph diffusion methods are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood. Atwood & Towsley (2016) use k-hop random walks to extend the node features. APPNP (Klicpera et al., 2019a) applies personalized PageRank to propagate the node predictions generated by a neural network. Other models concatenate (Abu-El-Haija et al., 2019) or aggregate (Xu et al., 2019a; Ma et al., 2020) node embeddings in every layer using a diffusion-based transition matrix. The diffusion-based graph neural network (GDC) (Klicpera et al., 2019b) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion. Subsequent works use diffusion methods on multiple scales (Liao et al., 2019; Luan et al., 2019; Xhonneux et al., 2020) and attention (Wang et al., 2020). Morris et al. (2019) take higher-order graph structures at multiple scales into account during message passing based on the k-dimensional Weisfeiler and Leman graph algorithm. All the above approaches are relatively complex, many terminated with memory errors in our experiments, and few have been evaluated for link prediction.
Virtual Nodes. To the best of our knowledge, virtual nodes have only been considered in the context of graph classification so far, where a single virtual node (also called supernode) is added to the graph to be classified and connected to all graph nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Note that the original idea was to compute a graph embedding in parallel with the node embeddings and even connected the virtual node only in one direction (i.e, via edges from the graph nodes) instead of bidirectionally (Li et al., 2017).
There are some GNNs which point out special nodes that we could consider as “virtual”. Fey et al. (2020) propose a GNN for molecule graph classification which clusters certain nodes within a molecule using a structure-based, molecule-specific algorithm and then applies message passing within and between these clusters. The graph-partition based message passing from Liao et al. (2018) also used clustering, but it just divides the original messages into inter- and intra-cluster. Our approach creates new “paths” in the graph and we theoretically demonstrate its expressiveness. P-GNN (You et al., 2019) assigns nodes to random clusters (“anchor-sets”) and then creates a message for each node for every anchor-set, while ignoring the message passing from original direct neighbors. Our virtual nodes represent an alternative means to aggregate messages from multiple graph nodes which are not necessarily direct neighbors. We also explore the idea of similar random assignments in our context, but show that more elaborate techniques generally work better. Most importantly, we do not propose a specific, new GNN but a new technique for augmenting existing graph neural networks.
Although it is a well-known trick, the advantage of using virtual nodes has never been theoretically investigated nor fully understood. We focus on link prediction and considerably extend the virtual node technique. There are commonalities in the advantages of using virtual nodes for graph classification and link prediction, but their role in link prediction is to improve the representation of the link instead of the graph (nodes). We analyze theoretically and empirically how they improve GNN performance.
3 PRELIMINARIES
Link Prediction. We consider an undirected graph G = (V, E) with nodes V and edges E ⊆ V × V. Note that this basic choice is only for ease of presentation. All our techniques work for directed graphs and, with simple adaptation, also for graphs with labelled edges. We assume V to be ordered and may refer to a node by its index in V. For a node v ∈ V, N_v denotes the set of its neighbors. Given two nodes, the link prediction task is to predict whether there is a link between them.
Message-Passing Graph Neural Networks. In this paper, we usually use the term graph neural networks (GNNs) to denote GNNs that use message passing as described by Gilmer et al. (2017). These networks compute for every v ∈ V a node representation h_v^ℓ at layer ℓ ∈ [1, 2, . . . , k], by aggregating its neighbor nodes based on a generic aggregation function and then combining the obtained vector with h_v^{ℓ-1} as below; h_v^0 are the initial node features.

h_v^ℓ = COMBINE^ℓ ( h_v^{ℓ-1}, AGGREGATE^ℓ ( \{ h_u^{ℓ-1} \mid u ∈ N_v \} ) )   (1)
Link prediction with GNNs is usually done by combining (e.g., concatenating) the final representations h_u^L, h_v^L of the nodes u, v under consideration and passing them through several feed-forward layers with a final sigmoid function for scoring. We follow this approach.
We further use [1, n] to denote an interval [1,2,. . . , n].
4 VIRTUAL NODES IN GRAPH NEURAL NETWORKS FOR LINK PREDICTION
So far, virtual nodes have been only used for graph classification. Link prediction scenarios are different in that the graphs are usually very large, heterogeneous, sometimes dense, and the task is to predict a relationship that might strongly be influenced depending on surrounding relations. In the following, we propose approaches that fit these scenarios.
4.1 MULTIPLE VIRTUAL NODES
Our main goal of using virtual nodes is to provide a shortcut for sharing information between the graph nodes. However, the amount of information in a graph with possibly millions of nodes is enormous, and likely too much to be captured in a single virtual node embedding. Further, not all information is equally relevant to all nodes. Therefore we suggest to use multiple virtual nodes S = {s1, s2 . . . , sn}1 each being connected to a subset of graph nodes, as determined by an assignment σ : V → [1, n]; n is considered as hyperparameter. We propose different methods to obtain σ:
Random (GNN-RM). Most simple, we can determine a fixed σ randomly once with initialization.
Increased Randomness (GNN-RMF ). Similarly a random assignment, but initialized with every forward pass. In this way, a single bad assignment does not determine the overall performance.
Clustering (GNN-CM). Many types of graph data incorporate a certain cluster structure (e.g., collaboration or social networks) that reflects which nodes belong closely together. We propose to connect such nodes in a cluster to a common virtual node, such that the structure inherent to the given graph is reflected in our virtual node assignment σ. More precisely, during initialization, we use a generic clustering algorithm which, given a number m, creates a set C = {C1, C2 . . . , Cm} of clusters (i.e., sets of graph nodes) by computing an assignment ρ : V → [1,m], assigning each graph node to a cluster. We then obtain σ by choosing m = n and σ = ρ.
In this work, we decided for the METIS clustering (Karypis & Kumar, 1998) which turned out to provide a good trade off between quality and efficiency. Nevertheless, our idea is generic and can be applied with arbitrary algorithms. We will show ablation experiments for alternatives (e.g., Graclus (Dhillon et al., 2007) and Diffpool (Ying et al., 2018b)).
Advanced Clustering (GNN-CM+). Not every type of graph data contains an inherent cluster structure or one that is sufficiently expressed. Furthermore, using a fixed clustering, we obtain a deterministic algorithm again taking the risk that we completely rely on a single, possibly not ideal, virtual node assignment – there may be critical long range dependencies that go beyond clusters. For these cases, we propose an alternative approach, which breaks up the determinism by extending the above clustering as follows. We choose a relatively large m, with m n, and apply the above clustering algorithm, during initialization. Then, in each epoch, we randomly guess an assignment σ′ : [1,m]→ [1, n] of clusters to virtual nodes and define σ(n) := σ′(ρ(n)). Note that this approach is inspired by Chiang et al. (2019), who apply a similar technique is to create batches based on clusters. Further note that we determine σ′ with every epoch instead of every forward pass since the the computation takes quite some time on large datasets and we observed that this yields good results.
4.2 THE MODEL
We integrate the multiple virtual nodes into a generic message-passing graph neural network by extending the approach from Hu et al. (2020), computing the node representations h^ℓ_v for a node v ∈ V at layer ℓ as follows:

h^ℓ_{s_i} = COMBINE^ℓ_{VN}( h^{ℓ−1}_{s_i}, AGGREGATE^ℓ_{VN}( { h^{ℓ−1}_u | u ∈ V, σ(u) = i } ) )    (2)

h^ℓ_v = COMBINE^ℓ( h^{ℓ−1}_v + h^ℓ_{s_{σ(v)}}, AGGREGATE^ℓ( { h^{ℓ−1}_u | u ∈ N_v } ) )    (3)
Note that the highlighted adaptation of the standard GNN processing from Equation (1) is only minor – but powerful. In our implementation, COMBINE^ℓ_{VN} is addition combined with linear layers and layer normalization, and we use sum for AGGREGATE^ℓ_{VN}.
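To make Equations (2) and (3) concrete, the following sketch implements one such layer in PyTorch; the GraphSAGE-style neighbor combination, the ReLU activations, and all dimensions are illustrative assumptions and not necessarily the exact backbone used in the experiments.

    import torch
    import torch.nn as nn

    class VirtualNodeLayer(nn.Module):
        """One message-passing layer with multiple virtual nodes (Eqs. (2) and (3))."""

        def __init__(self, dim):
            super().__init__()
            # COMBINE_VN: addition followed by a linear layer and layer normalization.
            self.vn_update = nn.Sequential(nn.Linear(dim, dim), nn.LayerNorm(dim), nn.ReLU())
            # Illustrative GraphSAGE-style combine for the graph nodes.
            self.node_update = nn.Linear(2 * dim, dim)

        def forward(self, h, h_vn, edge_index, sigma):
            # h: [N, d] node embeddings, h_vn: [n, d] virtual node embeddings,
            # edge_index: [2, E] graph edges, sigma: [N] virtual node assignment.
            n_virtual, d = h_vn.shape

            # Eq. (2): each virtual node aggregates (sums) the nodes assigned to it.
            agg_vn = torch.zeros(n_virtual, d).index_add_(0, sigma, h)
            h_vn = self.vn_update(h_vn + agg_vn)

            # Eq. (3): add the assigned virtual node embedding to each node,
            # then combine with the sum of the neighbors' previous embeddings.
            h_in = h + h_vn[sigma]
            src, dst = edge_index
            neigh = torch.zeros_like(h).index_add_(0, dst, h[src])
            return torch.relu(self.node_update(torch.cat([h_in, neigh], dim=-1))), h_vn

    # Toy usage: 6 graph nodes, 2 virtual nodes, 4 directed edges.
    layer = VirtualNodeLayer(dim=16)
    h, h_vn = torch.randn(6, 16), torch.zeros(2, 16)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
    sigma = torch.tensor([0, 0, 0, 1, 1, 1])
    h, h_vn = layer(h, h_vn, edge_index, sigma)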
1Since notation V is standard for nodes, we use S for the set of virtual nodes. Think of “supernodes”.
4.3 ANALYSIS: VIRTUAL NODES CHANGE INFLUENCE
Influence Score. Following (Xu et al., 2018; Klicpera et al., 2019a), we measure the sensitivity (also, influence) of a node x to a node y by the influence score I(x, y) = e^T (∂h^k_x / ∂h^0_y); e is a vector of all ones, and h^k_x is the embedding of x at the k-th layer, see Equations (1) and (3). For a k-layer GNN, the influence score is known to be proportional in expectation to the k-step random walk distribution from x to y:2
E[I(x, y)] ∝ P_rw(x → y, k) = Σ_{r ∈ R^k} Π_{ℓ=1}^{k} 1/deg(v^ℓ_r),    (4)

where (v^0_r, v^1_r, ..., v^k_r) are the nodes in the path r from x := v^0_r to y := v^k_r, and R^k is the set of paths of length k. In what follows, we will exploit this relationship and argue in terms of the probability P_rw.
Virtual Nodes. For simplicity, consider the influence score in an m-regular graph; there we have P_rw(x → y, k) = |R^k| / m^k. We hypothesize that we can come to similar conclusions in a general graph with average degree m. Consider the message passing between two distant nodes x and y. (I) In case the shortest path from x to y is of length > k, a k-layer GNN cannot capture it, and the probability P_rw(x → y, k) is obviously zero. If we then consider virtual nodes in the GNN layer (even with only one), we can pass messages from x to y through the virtual nodes and obtain a nonzero probability. (II) Consider the case where there is a shortest path of length ≤ k between x and y. By adding a virtual node s in one GNN layer, the probability changes to:
P^s_rw(x → y, k) = P_rw(x → y, k) + P_rw(x → s, s → y) = |R^k| / (m + 1)^k + 1 / ((m + 1)|V|).    (5)
Compared to the original probability, we get the following impact ratio for using virtual nodes:
ir = m^k / (m + 1)^k + m^k / ((m + 1)|V||R^k|).    (6)
When m is large enough, ir can be approximated by ir ≈ 1 + m^{k−1} / (|V||R^k|). Here, we see that the impact of virtual nodes grows when m increases. Our experiments confirm this theoretical observation.
Multiple Virtual Nodes. In view of multiple virtual nodes, the above analysis gets even more appealing. We continue along these lines and assume there is a shortest path of length ≤ k between x and y. If x and y connect to the same virtual node s, then Equation (5) changes as follows:
P^s_rw(x → y, k) = |R^k| / (m + 1)^k + 1 / ((m + 1)|C_s|).    (7)
Since the set C_s of nodes connecting to s is much smaller than V, the impact of multiple virtual nodes is greater than that of a single virtual node. On the other hand, if x and y do not connect to the same virtual node, the probability just slightly decreases from |R^k| / m^k to |R^k| / (m + 1)^k.
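For intuition, the small helper below plugs illustrative numbers into Equations (5) and (7); the concrete values of |R^k|, m, k, |V| and |C_s| are made up for demonstration only.

    def p_plain(R_k, m, k):
        """P_rw(x -> y, k) in an m-regular graph without virtual nodes."""
        return R_k / m ** k

    def p_single_vn(R_k, m, k, num_nodes):
        """Equation (5): one virtual node connected to all |V| graph nodes."""
        return R_k / (m + 1) ** k + 1.0 / ((m + 1) * num_nodes)

    def p_multi_vn(R_k, m, k, cluster_size):
        """Equation (7): x and y share a virtual node whose cluster has |C_s| nodes."""
        return R_k / (m + 1) ** k + 1.0 / ((m + 1) * cluster_size)

    # Illustrative numbers only: a dense graph (m = 500), k = 3 layers.
    R_k, m, k, V, C_s = 10, 500, 3, 4000, 125
    print(p_plain(R_k, m, k), p_single_vn(R_k, m, k, V), p_multi_vn(R_k, m, k, C_s))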
In Appendix B, we further show that using multiple virtual nodes is related to (but not equal to) the labeling trick (Zhang et al., 2020) and distance encoding (Li et al., 2020b), and it can theoretically improve the expressiveness in learning structural link representations (see Theorem 1 and Figure 4(b)).
5 EVALUATION
We conducted extensive experiments and ablation studies to empirically investigate:
• How does the existing approach with one virtual node perform in link prediction?
• Do multiple virtual nodes improve performance, and how do our proposed approaches compare?
• In particular, are approaches based on the graph structure better?
• How exactly do virtual nodes support link prediction? When do they help particularly?
2See Theorem 1 in (Xu et al., 2018). Note that the theorem makes some simplifying assumptions (e.g., on the shape of GNN).
types/amounts; (second) best results are (light) gray , overall best bold, second best underlined.
Model | ddi (Hits@20) | ppa10 (Hits@100) | collab (Hits@50) | pubmed (Hits@20)
GCN | 0.4076 ± 0.1073 | 0.1313 ± 0.0084 | 0.4955 ± 0.0064 | 0.9675 ± 0.0143
- VN | 0.6217 ± 0.1241 | 0.1258 ± 0.0082 | 0.5049 ± 0.0088 | 0.9579 ± 0.0214
- RM | 0.5532 ± 0.1262 | 0.1205 ± 0.0059 | 0.5083 ± 0.0109 | 0.9522 ± 0.0110
- RMF | 0.5830 ± 0.0855 | 0.1116 ± 0.0094 | 0.5046 ± 0.0049 | 0.8100 ± 0.0781
- CM | 0.6105 ± 0.1563 | 0.1299 ± 0.0050 | 0.5181 ± 0.0076 | 0.9575 ± 0.0230
- CM+ | 0.6033 ± 0.1759 | 0.1399 ± 0.0071 | 0.5128 ± 0.0129 | 0.9189 ± 0.0514
SAGE | 0.6173 ± 0.1068 | 0.1024 ± 0.0050 | 0.5662 ± 0.0149 | 0.9779 ± 0.0105
- VN | 0.6491 ± 0.1360 | 0.0853 ± 0.0154 | 0.5875 ± 0.0091 | 0.9659 ± 0.0333
- RM | 0.7068 ± 0.1174 | 0.1131 ± 0.0039 | 0.5830 ± 0.0087 | 0.9433 ± 0.0208
- RMF | 0.7564 ± 0.1055 | 0.1105 ± 0.0023 | 0.6067 ± 0.0063 | 0.9800 ± 0.0087
- CM | 0.7621 ± 0.1157 | 0.1077 ± 0.0150 | 0.6056 ± 0.0105 | 0.9834 ± 0.0068
- CM+ | 0.8251 ± 0.0678 | 0.0963 ± 0.0099 | 0.5940 ± 0.0262 | 0.9754 ± 0.0139
GIN | 0.4321 ± 0.1353 | 0.1139 ± 0.0058 | 0.5768 ± 0.0179 | 0.9234 ± 0.0166
- VN | 0.5260 ± 0.1227 | 0.1316 ± 0.0049 | 0.5863 ± 0.0254 | 0.9790 ± 0.0070
- RM | 0.5084 ± 0.1324 | 0.1337 ± 0.0045 | 0.5412 ± 0.0174 | 0.9604 ± 0.0158
- RMF | 0.5310 ± 0.1453 | 0.1269 ± 0.0026 | 0.5335 ± 0.0087 | 0.7986 ± 0.0993
- CM | 0.5664 ± 0.0860 | 0.1349 ± 0.0034 | 0.5821 ± 0.0081 | 0.9125 ± 0.0378
- CM+ | 0.4339 ± 0.1855 | 0.1591 ± 0.0069 | 0.5557 ± 0.0026 | 0.9037 ± 0.0262
Datasets. We focused on challenging data from the OGB: ddi, a drug-drug interaction network; ppa10, a subset of the protein-protein association network ppa containing only 10% of the train edges (but full valid/test); and collab, an author collaboration network. To learn more about smaller data of similar type, we also tested on the citation network pubmed (Yang et al., 2016). Since the datasets differ not only in type but also in various other critical graph parameters, and this is reflected in the performance of the models, we show relevant statistics in Table 1.3 The datasets vary strongly in size, with ddi being the smallest among the biomedical ones; on the other hand, ddi is very dense. The clustering coefficient intuitively reflects the “cliquishness” of the graph’s subgraphs. The large diameters suggest that the data suits testing under-reaching. Appendix C gives further details and describes datasets we consider in additional experiments in the appendix.
Baselines. For a competitive comparison, we considered important baselines (described in Section 2):
• The deep GNNs SGC, APPNP, DeeperGCN, and two variants of JKNet.
• Approaches extending message passing: P-GNN, APPNP, GCN-GDC, SAGE-GDC, GIN-GDC.
• The popular GNNs GCN (Kipf & Welling, 2017), SAGE (Hamilton et al., 2017), and GIN (Xu et al., 2019b), which we then extend with (multiple) virtual nodes.
3See Tables 2 and 3 in (Hu et al., 2020). We computed the numbers for ppa10 (which we focus on due to a lack of resources), and pubmed using the same techniques.
5.1 RESULTS
Overall Impact of Virtual Nodes, Tables 2, 7, 8 (Appendix). We compare to GCN, SAGE, and GIN. The common approach of using a single virtual node (GNN-VN) yields good improvements over ddi, slight improvements over collab, but no definitive ones over ppa10; over pubmed, it works very well for GIN. The numbers for GNN-RM and GNN-RMF reflect the randomness of their connections to the virtual nodes; there is no clear trend. Nevertheless, they clearly outperform the original models, with only a few exceptions. The increased randomness by re-assigning the virtual nodes with every forward pass (GNN-RMF) seemingly suits SAGE but not the others. As expected, over the small pubmed/cora, which also have no cluster structure, the results are not consistent or convincing overall; virtual nodes only yield improvement sometimes, and none for GCN. Yet, on the more challenging datasets, multiple virtual nodes turn out to be an efficient means to boost the link prediction performance of GNNs if they are applied correctly. Our virtual node connections based on the graph structure (GNN-CM) yield consistently good improvements over ddi and collab, and mostly help on the challenging ppa10 dataset. On collab, we did further experiments using GAT (Veličković et al., 2017) and also observe a clear performance gain: 0.4745 vs. 0.5876 (GAT-CM). GNN-CM and GNN-CM+ are not always the best ones, but yield reliably good results, in contrast to the other models with virtual nodes (see variability of gray shades). Interestingly, the advanced clustering yields especially good performance over ppa10/ppa, while its results on the other datasets are not convincing. Generally, the improvements of the virtual node models are strongest on ddi. For an in-depth result analysis see Section 5.2.
Comparison to Related Works and SOTA, Table 3. Most deep GNNs as well as the models that use complex message-passing techniques perform disappointingly and, overall, much worse than the standard GNNs. We did thorough hyperparameter tuning for these models, so these results are hard to explain. However, most of the original evaluations focus on node or graph classification and consider very different types of data – often the standard citation networks (Lu & Getoor, 2003) – and, in fact, on collab we see the best numbers. For a more detailed discussion of P-GNN see Appendix H. Even if we assume that these numbers can be improved, the models do not seem apt for link prediction; in particular, the complex ones: many do not run at all on realistic link prediction data but yield memory errors. Further, our virtual node extensions make standard GNNs competitive with the models on the leaderboard. In particular, their performance is much more stable. The results of the best models from the leaderboard vary strongly with the different datasets, or have not been reported at all. None of these models can be called “good” overall, given the numbers in the - sometimes even missing - rest of the table; in fact, SEAL and Adamic Adar perform rather badly on the very dense ddi.
Impact of Virtual Nodes on Number of GNN Layers and Efficiency, Figure 2. For the virtual node models, the scores increase with the number of layers for longer; GCN drops earlier. On ddi, GCN-VN and -CM reach their best scores at 6 and 8 layers, respectively, which is remarkable for that very dense dataset, which is prone to over-smoothing. On collab it is the other way around. The figure also gives an idea of the runtime increase when using virtual nodes. It compares the 6-layer models, and shows the 4-layer GCN-CM, which obtains performance similar to the 6-layer GCN-VN.
Impact of Virtual Node Number, Figure 3. First, consider the configurations of the best models for the overall results in Table 2, which are provided in Table 6 in the appendix. Here, we see that the chosen numbers of virtual nodes are indeed random for the “random” models, but GNN-CM consistently uses a high number of virtual nodes, which also suits it better according to our theoretical analysis in Section 4.3. In line with this, the more detailed analysis varying the numbers of virtual nodes, yields best results (also in terms of standard deviations) for SAGE-CM at rather high values. For GCN, we do not see a clear trend, but (second) best performance with 64 virtual nodes. Note that there is a trade off between number of virtual nodes and intra-cluster test edges, discussed in Section 5.2.
Using Virtual Nodes Only at the Last GNN Layer, Table 4. Alon & Yahav (2021) show that using a fully connected adjacency matrix at the last layer of a standard GNN helps to better capture information over long ranges. We therefore investigated if it is a better architectural choice to use virtual nodes only at the last layer. However, we see that this can lead to extreme performance drops.
Impact of Clustering Algorithm, Table 9 (Appendix). Our architecture is generic in the clustering algorithm, and we investigated the effects of varying that. Graclus is similar in nature to METIS in that it also creates partitions based on the adjacency matrix, but it took much longer to run. Diffpool considers the node features and yields improvements for GCN, but does not scale to larger datasets. Over ddi, there is no clear winner and, given its efficiency, METIS turns out to be a good solution.
5.2 DISCUSSION AND CONCLUSIONS
The results show that our approach with multiple virtual nodes based on graph-based clustering yields performance increases for various GNNs and types of data, but there are clear differences.
Dense Graphs with Medium/High Clustering Coefficient. Over ddi, we see strongest improvements for all virtual-node models. This can be explained by our proposed theory, showing that a very large node degree m increases the impact of the virtual node(s), and thus decreases the negative impact of the (too) many other neighbors (see Equation (6)). Furthermore, the empirical results confirm our proposed theory regarding multiple virtual nodes (see Equation (7)). We see particularly good numbers for GNN-CM, which exploits the clustering inherent in the given graph. GNN-CM+, which considers this given clustering only on a lower level, is shown to perform worse than GNN-CM overall. In fact, we computed the percentage of test edges that occur in the “virtual node cluster” (see Table 11 in the appendix) and it shows that the numbers for the advanced clustering are very similar to the random one, meaning the randomly merged smaller clusters break the data’s structure too much. Interestingly, the experiments show that, even with the dense data that is prone to over-smoothing, virtual nodes make the GNNs score higher with more than the standard 2-3 layers; hence virtual nodes seem to alleviate over-smoothing to some extent, an interesting question for future work.
Graphs with Large Problem Radius and Low Clustering Coefficient. Over ppa10, all GNNs use an unusually high number of layers, which hints at a large problem radius (e.g., GCN, which performs especially well, uses 7 layers). Given the very low clustering of the data in addition, ppa10 represents a special challenge. With the multiple virtual nodes, GNN-CM performs again better than GNN-VN. On the other hand, it does not perform much better than the random models on data without cluster structure. This can be explained by its choice of number of virtual nodes, which is consistently high, but then there are fewer test edges within a virtual node cluster (see appendix Table 11). We hence see here that the positive effect of having many virtual nodes (recall Equation (7)) cancels out the benefits of clustering. Our advanced clustering, which merges some local clustering with randomness, is able to achieve best results with GCN and GIN (with SAGE, all models perform rather badly over ppa10). This can be explained by the fact that it randomly merges some local clusters – with each epoch anew – and hence allows more messages to pass across “virtual node clusters”. We also did some experiments over the very large ppa, which is denser than ppa10, and see a similar trend.
Sparse Graphs with Low to High Clustering Coefficient. We tested on three citation/collaboration networks of different sizes. Note that, over this data, the problem radius is usually assumed to be rather small (Alon & Yahav, 2021), although the graph diameters are large. We investigated virtual nodes to augment link prediction in large and complex graphs; but we also want to provide insight into the behavior on smaller data. Over pubmed (similarly on cora as shown in the appendix), virtual nodes do not provide any improvement for GCN. For GIN, a single virtual node yields good increases; overall, it usually outperforms the settings with multiple virtual nodes. We hypothesize that this is mainly due to the small graph size and sparsity. In fact, on the larger and denser collab, GNN-CM performs very well for all GNNs. The trends in the models’ performance and the corresponding explanations are similar to those for ddi but much less pronounced, probably due to the much smaller node degrees. Yet, the performance is much more stable, possibly because collab is larger and not as dense.
Conclusions. We summarize our main findings to provide first guidelines for applying virtual nodes:
• Small + Sparse Graphs: A single virtual node is likely to boost performance of GIN, and virtual nodes should help with SAGE, but probably not with GCN.
• Large + Sparse Graphs: If there is cluster structure, GNN-CM should yield stable performance increases. If the problem radius is large or there is little cluster structure, GNN-CM+ is worth a try.
• Dense Graphs + Clustering: Multiple virtual nodes (i.e., GNN-CM) likely increase performance.
6 CONCLUSIONS
We propose a simple but elegant graph neural network extension using multiple virtual nodes that may considerably increase link prediction performance. We also advance research by providing theoretical justifications - the very first about applying virtual nodes at all - and by showing their positive impact in various experiments. Future work includes the design of more advanced and scalable architectures, and it would be interesting to further investigate the huge performance increases on dense graphs.
A ADDITIONAL DETAILS ON RELATED WORKS
Deeper GNNs. We mention simpler approaches in Section 2. More advanced proposals are, for example, based on special features and connections (Chen et al., 2020), community-based normalization of node representations using random clustering (Zhou et al., 2020), boosting techniques (Sun et al., 2021), or differentiable aggregation functions in DeeperGCN (Li et al., 2020a).
Beyond One-Hop Neighbors. Graph diffusion methods (i.e., in graph theory, techniques for spreading information between nodes) are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood. (Atwood & Towsley, 2016) use k-hop random walks to aggregate node features and extend the latter by the aggregated ones. APPNP (Klicpera et al., 2019a) applies personalized PageRank to propagate the node predictions generated by a neural network. Other models aggregate node embeddings in every layer, GraphHeat (Xu et al., 2019a) using the heat kernel, PAN (Ma et al., 2020) the transition matrix of maximal entropy random walks, and PinSage (Ying et al., 2018a) using random walks. (Abu-El-Haija et al., 2019) propose to concatenate embeddings aggregated using the transition matrices of k-hop random walks before applying one-hop neighbor aggregation. The diffusion-based graph neural network (GDC) (Klicpera et al., 2019b) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion. Subsequent works use diffusion methods on multiple scales (Liao et al., 2019; Luan et al., 2019; Xhonneux et al., 2020). Recently, (Wang et al., 2020) integrated attention with diffusion-based message propagation.
Position Encodings. Our approach provides a kind of positional embedding (Srinivasan & Ribeiro, 2019) and hence has some commonalities with models extending nodes with positional encodings, e.g., (Li et al.).
B ADDITIONAL THEORETICAL RESULTS: STRUCTURAL LINK REPRESENTATION
Adding structure-related features such as a distance encoding (Li et al., 2020b) has been demonstrated to make graph representation learning more powerful. For link prediction, (Zhang et al., 2020) propose the labeling trick extending distance encoding and making GNNs learn better link representations.
We first recall the definitions from (Zhang et al., 2020) introducing the concept of the labeling trick. Consider an undirected graph G as described in Section 3. In addition, the tensor A ∈ R^{n×n×k} contains all node and edge features (if available). The diagonal components A_{v,v,:} denote the node features, while the off-diagonal components A_{u,v,:} denote the edge features of edge (u, v). The labeling trick uses a target node set S ⊆ V and a labeling function to label all nodes in the node set V and stack the labels with A. A valid labeling trick must meet two conditions: (1) the nodes in S have different labels from the rest of the nodes, (2) the labeling function must be permutation invariant.
Let us recall our method using multiple virtual nodes. Assume we have multiple virtual nodes S = {s_1, ..., s_m}. For every u ∈ V, we have the additional node features l(u|S) = (h(s_1), ..., h(s_m))^T (γ(u|s_1), ..., γ(u|s_m)), where γ(u|s_i) = 1 if u is connected to the virtual node s_i, and γ(u|s_i) = 0 otherwise. h(s_i) is the node representation of virtual node s_i, and is initialized by one-hot vectors so that each virtual node has a different label.
Our labeling strategy is not a valid labeling trick by the definition of (Zhang et al., 2020). First, S is not a subset of V, and we use addition instead of concatenation. Even if we extend V to V ∪ S, our labeling strategy still does not fit the permutation-invariant requirement. Nevertheless, it can achieve similar effects in learning structural link representations.

Theorem 1. In any non-attributed graph with n nodes, if the degree of each node in the graph is between 1 and O(log^{(1−ε)/(2h)}(n)) for any constant ε > 0, then, given m virtual nodes which evenly divide the node set into m clusters, there exist ω((m − 1)^2 (n^ε/m − 1)^3) many pairs of non-isomorphic links (u,w), (v, w), such that an h-layer 1-WL-GNN (see definitions in (Li et al., 2020b) and (Zhang et al., 2020); one well-known example is GIN (Xu et al., 2019b)) gives u, v the same representation, while using m virtual nodes can give u, v different representations.
Proof. The proof can be separated into two steps. The first step is to prove that there exist n/o(n^{1−ε}) = ω(n^ε) many nodes that are locally h-isomorphic. This step is the same as the proof of Theorem 2 in (Zhang et al., 2020), so we omit the details here. After getting these locally isomorphic nodes, we denote the set of these nodes as V_iso. The second step is to find the non-isomorphic links.
Step 2. Let us partition V_iso = ∪_{i=1}^{m} V_i, where V_i is the subset of nodes connected to virtual node s_i. For simplicity, we call each V_i a cluster, and the sizes of different clusters are assumed to be the same, |V_i| = |V_iso|/m. Consider two nodes u ∈ V_i and v ∈ V_j from different clusters. Since both of them are in V_iso, they have identical h-hop neighborhood structures, and an h-layer 1-WL-GNN will give them the same representations. Then, if we select another node w in V_i, the h-layer 1-WL-GNN will also make (u,w) and (v, w) have the same representation.
However, if we use virtual nodes to label nodes and give them additional features, then because u,w are in the same cluster while v, w belong to different clusters, (u,w) will have a different representation from (v, w). Now let us count the number of such non-isomorphic link pairs Y:
Y ≥ (1/2) Σ_{i,j=1, j≠i}^{m} |V_i|(|V_i| − 1)|V_j| = (1/2) m(m − 1) ((|V_iso|/m − 1)(|V_iso|/m)^2)
Taking |V_iso| = ω(n^ε) into the above inequality, we get

Y ≥ (1/2) m(m − 1) ω((n^ε/m − 1)^3) = ω((m − 1)^2 (n^ε/m − 1)^3).

Example (Power of Using Multiple Virtual Nodes). In Figure 4, we show two cases with and without virtual nodes. Consider the nodes v2, v3 with the same local structure, which means they get the same node representations from a 1-WL-GNN. So we cannot discriminate the links (v1, v2) and (v1, v3) if we just use a 1-WL-GNN and concatenate the node representations for link prediction. However, if we add 2 virtual nodes and add extra features to each node, v1 and v2 get the new feature (1, 0), and v3 gets the new feature (0, 1). So it is easy to see that (v1, v2) and (v1, v3) now have different representations.
C ADDITIONAL DETAILS ON THE DATA
See Table 5 for the datasets we consider additionally in the appendix.
D MODEL CONFIGURATIONS AND TRAINING
We trained all models for 80 runs using the Bayesian optimization provided by wandb4 and the following hyperparameters.
hidden dimension | 32, 64, 128, 256
learning rate | 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001
dropout | 0, 0.3, 0.6
# of layers | 1-7
# of virtual nodes (random) | 1-10
# of virtual nodes | 1, 2, 4, 8, 16, 32, 64
SGC - K | 2-7
APPNP - α | 0.05, 0.1, 0.2, 0.3
GNN-GDC - k | 64, 128
GNN-GDC - α | 0.05, 0.1, 0.2, 0.3
Please note that we considered the wide ranges of values only in order to find a good general setting. For practical usage a hidden dimension of 256, learning rate of 0.0001, and dropout of 0.3 should work well; only on the small graphs a dropout of 0 might work better. As usual, the number of layers depends on the type of data; however, note that the virtual nodes make it possible to use more than the usual 2-3 layers. Generally, higher numbers of virtual nodes work better, in line with our theoretical results.
Also note that we used fewer virtual nodes in the selection for the models (-RM, -RMF) since especially -RMF was very slow and preliminary results showed that larger numbers did not change the results greatly – probably due to the randomness. We used at most 64 virtual nodes due to memory issues with larger numbers (e.g., 128), especially on the larger datasets. We report the specific numbers of GNN layers and virtual nodes used by the trained models from Tables 2, 8, and 3 in Table 6. For the first clustering in GNN-CM+, we created 150 clusters on cora and pubmed, 200 clusters on ddi and collab, and 1000 on ppa10.
We tuned all models for 80 runs, and thereafter ran the models with the best 3 configurations for 3 runs and chose the best of these model as the final model (configuration).
We trained as suggested by the OGB (e.g., the splits, negative sampling) but used a batch size of 2^12 and sometimes adapted the number of runs due to lack of resources; we used 3 for the experiments on collab and ppa10 in Table 2. However, we ran several of our models for 10 runs as required for results on the OGB leaderboards and the numbers are comparable (see Table 10).
4https://wandb.ai/site
We used 500 epochs with a patience of 30. Furthermore, for collab, we used the validation edges during testing (OGB contains both settings, with and without them).
E ADDITIONAL EXPERIMENTAL RESULTS
E.1 RESULTS ON ppa
The ppa dataset is challenging in both its size and density. Since we lacked the resources to run experiments for all baselines on this dataset, we compare our best models (trained only on ppa10; we did not do additional hyperparameter tuning) to the OGB leaderboard in Table 7. For GCN, we see that our virtual node approach is able to improve the results considerably – even if only trained on 10% of the data.
E.2 RESULTS ON cora
We ran the models also on the small cora data, yet the results confirm our expectation, that virtual nodes for link prediction should be used in challenging graphs. In contrast, for cora, we get already good scores with a regular GCN. See Table 8.
E.3 RUNTIME
We show the runtimes on ddi in Figure 5. Here we see that a single virtual node can have a positive impact at the same time on both prediction scores and efficiency, while the clustering takes more time.
E.4 COMPARISON OF CLUSTERING ALGORITHMS
See Table 9 and analysis in the main paper.
E.5 ADDITIONAL RUNS FOR collab
Table 10 compares several 10-run averages over collab to the 3-run averages. The numbers are stable.
F CLUSTER ANALYSIS
We computed additional statistics about our “virtual node clusters” (i.e., a cluster represents a set of nodes connected to the same virtual node). Our hypothesis was that our proposed clustering based on
the graph structure better reflects the distribution of test edges than, for example, random clustering. We report the results in Table 11. For the -RMF and -CM+ models we report two numbers. The upper one shows the average number of intra-cluster test edges over 10 runs. The numbers in the lower part distinguish the actual edges and reflect how many different test edges occur in a cluster over the 10 runs. These numbers hence represent lower and upper bounds respectively.
As expected, the numbers for -CM are in between those bounds. For ddi, we see that the -CM+ and -RMF numbers are very similar, while the ones for -CM+ are much better over collab and ppa10.
G INVESTIGATION OF NODE EMBEDDINGS
We also investigated the embeddings of the virtual nodes and compared them to the ones of the regular graph nodes, but we could not derive many conclusions. The main finding is that the virtual node embeddings are much more diverse than the mean of the embeddings in the corresponding cluster – we would have expected them to be similar.
H DETAILS ABOUT P-GNN
The model closest to our approach is the position-aware graph neural network (P-GNN) (You et al., 2019). It assigns nodes to random subsets of nodes called “anchor-sets”, and then learns a non-linear aggregation scheme that combines node feature information from each anchor-set and weighs it by the distance between the node and the anchor-set. That is, it creates a message for each node for every anchor-set, instead of for each direct neighbor.
We ran experiments with P-GNN but did not obtain conclusive results. It did not run on the larger datasets. For ddi, we considered the number of anchor nodes as hyperparameter since the fixed choice of 64 from the experiments of (You et al., 2019) did not yield good results. However, larger numbers such as 128 or 512 resulted in very large runtimes (9 hrs / epoch). The result in Table 3 is an intermediate best value after 50 runs. We contacted the authors and they indeed mentioned that the model is not very scalable and suggested to use just the anchor-set distance as additional features, instead of overtaking the adapted message passing as well. We did not do this extra experiment since the SAGE +dist model, whose numbers we report, follows a similar approach. | 1. What is the focus of the paper in terms of enhancing link prediction performances in GNNs?
2. What are the strengths of the proposed method, particularly in its simplicity and ease of understanding?
3. What are the weaknesses of the paper regarding its contributions and lack of conceptual or theoretical insights?
4. How does the reviewer assess the effectiveness of the proposed virtual nodes in improving link prediction?
5. How does the reviewer view the overall quality of the paper after the rebuttal, considering its potential interest and relevance to the community? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes to enhance the link prediction performance of GNNs by adding multiple virtual nodes. The virtual nodes are designed in different ways, and the experiments analyze the influence of adding virtual nodes from different perspectives.
Review
Overall, the paper is well written and the proposed methods are clearly explained and easy to understand. The experiments cover several different perspectives of the influence of the virtual nodes. Everything is clearly introduced and I do not have any question regarding the proposed model.
However, the contribution of the paper is mainly from the relatively narrow engineering perspective (a technique to improve the performance) without much conceptual or theoretical insight. It analyzes the very specific problem of adding virtual nodes to enhance the link prediction performance. Besides, the designs of the different virtual nodes are a little bit ad-hoc. Therefore, the studies on the problem of whether virtual nodes will help link prediction do not seem systematic enough.
----------post rebuttal----------
After rebuttal, the authors explained the rationale behind using virtual nodes for link prediction to some extent. Although there is still large space for improvement in the analysis proposed in the paper, the attempt at analyzing virtual nodes' influence on link prediction may attract some attention and interest from the community. Therefore, I think the paper can be rated as 'marginally above the threshold'.
ICLR | Title
Towards Automatic Generation of Advanced Shift Networks
Abstract
Multiplication-free neural networks significantly reduce the time and energy cost on the hardware platform, as the compute-intensive multiplications are replaced with lightweight bit-shift operations. However, existing shift networks are all directly transferred from state-of-the-art convolutional neural networks (CNNs), which lead to non-negligible accuracy drop or even failure of model convergence. To combat this, we propose AutoShiftNet, the first framework tailoring Neural Architecture Search (NAS) to substantially reduce the accuracy gap between bitshift neural networks and their real-valued counterparts. Specifically, we pioneer dragging NAS into a shift-oriented search space and endow it with the robust topology-related search strategy and custom regularization and stabilization. As a result, our AutoShiftNet breaks through the incompatibility of traditional NAS methods for bit-shift neural networks and achieves more desirable performance in terms of accuracy and convergence. Extensive experiments demonstrate that AutoShiftNet generates more advanced model architectures for shift networks, where the accuracy increases by (1.69∼8.07)% on CIFAR10, (5.71∼18.09)% on CIFAR100 and ≥ 4.36% on ImageNet, especially when many conventional CNNs fail to converge on ImageNet with bit-shift weights.
1 INTRODUCTION
In recent years, large-scale commercial applications based on convolutional neural networks (CNNs) have prompted researchers to design more efficient networks, which can be deployed on platforms with limited resource budgets, such as mobile or IoT devices. Early works utilized network quantization (Cheng et al., 2017) to achieve this goal, by replacing high-precision model parameters with smaller bit-width representations. This reduces the computational cost of model execution, but also suffers from non-negligible performance degradation, especially on complex datasets (e.g., ImageNet). To address this issue, recent works (Zhou et al., 2017; Elhoushi et al., 2021) turned to using binary bit shifts rather than simple quantized bits to replace floating-point model parameters.
The key insight of these solutions is that multiplying an element by a power of 2 is mathematically equivalent to a bit-shift operation on it, which is computationally much cheaper and hardware-friendly. Based on this, researchers designed different types of bit-shift techniques (Zhou et al., 2017; Elhoushi et al., 2021; Li et al., 2021; 2022), which show promising overhead reduction in model execution. However, all these solutions only focus on designing advanced weight quantization algorithms to reduce the accuracy gap between shift networks and their real-valued counterparts, where the backbone models are all directly transferred from conventional CNNs, e.g., ResNets (He et al., 2016) and VGG (Simonyan & Zisserman, 2014). Given these CNN models are all designed for the continuous real-valued domain, such direct conversion would restrict the potential of bit-shift techniques, causing less optimal network architecture with a non-trivial accuracy drop.
To overcome this limitation, we aim to design advanced shift networks from another perspective, i.e., searching for network architectures that are more compatible with the bit-shift quantization. This is inspired by the Neural Architecture Search (NAS) technique, which can automatically identify the satisfactory network architecture for a given task. The searched models have shown better performance than carefully hand-crafted models (Liu et al., 2018b; Chen et al., 2019). One straightforward way is to directly transfer NAS models searched from real-valued domains to bit-shift
networks. However, similar to the manually-crafted networks, such strategy also leads to sub-optimal results due to the semantic gap between real and bit-shift domains (Sections 3 and 5.4).
For the first time, we present AutoShiftNet, a novel methodology to automatically search for the optimal bit-shift network architectures directly, aiming to reduce the accuracy drop from the state-of-the-art real-valued models. Moreover, the introduction of bit-shift operations can significantly reduce the searching, training and inference cost, which can facilitate the deployment of large models on dedicated hardware. Specifically, AutoShiftNet contains three components: (1) Shift-oriented search space. While existing NAS techniques mainly focus on the real-valued domain, we are the first to construct a new search space composed of bit-shift operations and design the corresponding forward and backward pass. (2) Topology-related search strategy. Since shift networks tend to have faster gradient descent or even vanishing gradient (Elhoushi et al., 2021), they are more vulnerable to the conventional gradient-based NAS techniques, i.e., searched networks can be dominated by skip connections (Liu et al., 2018a). Therefore, we decouple the search of model operations and topology, which can efficiently mitigate this issue (Gu et al., 2021). (3) Search regularization and stabilization. Given the weight sign freezing effect (Li et al., 2021) and unstable training process, we adopt multiple approaches to regularize and stabilize the search procedure, including shift-adaptive L2 regularization, learning rate reset scheme and shift weight re-parameterization.
We clarify that our work is orthogonal to and different from ShiftAddNAS (You et al., 2022), which aims to search for more accurate models from a hybrid search space with four operations (Attention, Convolution, Shift and Add). Although ShiftAddNAS also considers bit-shift operations, it actually still focuses on multiplication operations as they can provide much higher prediction accuracy. The model searched by ShiftAddNAS is still dominated by multiplications while the shift operations only take a very small part (ShiftAddNAS-T1↑ contains 7.1G multiplications and 8.5G additions, but only 1.4G shifts). Such model cannot be regarded as an actual shiftnet, and is difficult to be deployed on resource-constrained mobile devices, as the number of multiply-add operations is normally restricted below 600M for an ImageNet-mobile setting (Dong & Yang, 2019). In contrast, AutoShiftNet totally removes multiplications and only considers efficient bit-shifts and additions. The searched model only contains about 300M additions, so that it is more compatible for the bit-shift domain and also more practical for real-world applications on resource-restricted edge devices.
The networks searched by AutoShiftNet show much better performance than conventional CNNs in the bit-shift domain, especially when many CNNs fail to converge on large datasets (e.g., ImageNet) with bit-shift weights. AutoShiftNet achieves an accuracy improvement of (1.69∼8.07)% on CIFAR10, (5.71∼18.09)% on CIFAR100 and ≥ 4.36% on ImageNet, with more compact parameter sizes and smaller numbers of operation computations. Compared with previous NAS methods, networks from AutoShiftNet are more compatible with the bit-shift domain, which lead to a smaller accuracy drop from the complex real-valued models. More importantly, AutoShiftNet consumes less computing resources and time as it directly searches with the bit-shift weights.
2 PRELIMINARIES
2.1 BIT-SHIFT NETWORK QUANTIZATION
Bit-shift quantization techniques (Zhou et al., 2017; Elhoushi et al., 2021) round the float-point model weights to the powers of 2, so that the intensive multiplications on weights can be achieved with cheaper binary bit shifts. Formally, given a number x and a rounded model weight 2^p, their multiplication is mathematically equivalent to shifting p bits of x. Since model weights can be either positive or negative for input feature extraction, while 2^p is always positive, a sign flip function flip(w, s) is thus introduced to represent the signs of weight values. This operation is achieved with a ternary sign operator s ∈ {−1, 0, +1}. Finally, we can replace the weight matrix W in the model as: W = flip(2^P, S), where P is the shift matrix and S is the sign matrix. Both bit shift and sign flip are computationally cheap, as the former is the fundamental operation in modern processors and the latter just computes 2’s complement of a number. Therefore, such weight replacement can efficiently reduce the computation cost of CNN model execution.
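The following minimal NumPy sketch illustrates this equivalence and the weight reconstruction W = flip(2^P, S); the concrete values of P and S are arbitrary examples.

    import numpy as np

    def flip(x, s):
        """Sign flip with a ternary sign s in {-1, 0, +1}."""
        return x * s

    # Multiplying an integer by 2**p is equivalent to shifting it left by p bits.
    x, p = 13, 4
    assert x * 2 ** p == x << p

    # A weight matrix reconstructed from shift values P (powers of two) and signs S.
    P = np.array([[-1, -3], [-2, 0]])
    S = np.array([[+1, -1], [0, +1]])
    W = flip(2.0 ** P, S)   # [[ 0.5, -0.125], [ 0. , 1. ]]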
2.2 NEURAL ARCHITECTURE SEARCH
NAS has gained great popularity in recent years, due to its capability of building machine learning pipelines with high efficiency and automation. Early methods used reinforcement learning (Zoph & Le, 2016) and evolutionary algorithms (Real et al., 2019) to search for optimal network architectures for a given task, which normally takes thousands of GPU hours. Recent works tended to use a gradient-based strategy (Liu et al., 2018b) that can reduce the search cost to a few hours. Such methods usually aim at searching for optimal cell structures, since stacking cells as a model is more efficient than searching the whole network architecture. Formally, a cell is represented as a directed cyclic graph (i.e., supernet) with N nodes {x_i}_{i=1}^{N}, including two inputs and one output, and several intermediate nodes. The j-th intermediate node x_j connects to all previous nodes x_i through the edge (i, j). The operation choice over the edge (i, j) can be relaxed as ō^{(i,j)}(x_i) = Σ_{o∈O} α_o^{(i,j)} · o(x_i), where O denotes the search space of candidate operations. α_o^{(i,j)} is the trainable weight for each operation on the edge (i, j), which is normalized with the softmax function. Therefore, the feature map of node x_j can be computed by adding all results from its predecessors x_i: x_j = Σ_{i<j} ō^{(i,j)}(x_i). Let L_train and L_val denote the model loss on the training and validation sets. A bi-level optimization is applied to the operation weight α and network weight w as:
min_α L_val(w*(α), α),  s.t.  w*(α) = argmin_w L_train(w, α)    (1)
The final model architecture can be derived from the trained operation weight α by retaining operations with the largest weight and pruning edges with the smaller weight.
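For reference, the sketch below shows the continuous relaxation of one edge and the final operation selection in PyTorch; the candidate operations, channel size, and initialization scale are placeholder assumptions rather than the actual DARTS search space.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedOp(nn.Module):
        """Continuous relaxation of the operation choice on one edge (i, j)."""

        def __init__(self, ops):
            super().__init__()
            self.ops = nn.ModuleList(ops)                            # candidate operations O
            self.alpha = nn.Parameter(1e-3 * torch.randn(len(ops)))  # architecture weights

        def forward(self, x):
            weights = F.softmax(self.alpha, dim=0)                   # normalize alpha
            return sum(w * op(x) for w, op in zip(weights, self.ops))

    # Placeholder candidates on a 16-channel feature map (a real search space uses the
    # dilated/separable convolutions, pooling, identity and zero operations).
    ops = [nn.Conv2d(16, 16, 3, padding=1), nn.AvgPool2d(3, stride=1, padding=1), nn.Identity()]
    edge = MixedOp(ops)
    y = edge(torch.randn(2, 16, 8, 8))
    best_op = edge.ops[int(edge.alpha.argmax())]   # operation kept when deriving the architecture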
3 OVERVIEW OF AutoShiftNet
The main idea is to automatically generate well-performed bit-shift networks with high efficiency. Challenges arise when we apply exiting NAS techniques for searching bit-shift networks:
Design of shift-oriented search space. Given that existing NAS methods mainly focus on the real-valued models, their search spaces are also designed for real domain, which is not applicable to bit-shift models. Specifically, a conventional NAS search space normally consists of multiple manually defined operations, such as dilated convolutions and separable convolutions. To build the shift-oriented search space, we need to transfer these operations from the real domain into the bit-shift domain, in which the forward pass and backward pass need to be carefully designed.
Dominance of skip connections. While most recent NAS methods adopt the gradient-based search strategy (i.e., DARTS (Liu et al., 2018b)), it has a major drawback: the searched networks tend to be dominated by skip connections (Chen et al., 2019), as the strategy prefers the fastest path of gradient descent. Unfortunately, searching in the bit-shift domain inherits and amplifies this drawback, which would lead to the ”cell collapsing” of searched architectures. Hence, a new search strategy considering both the model operations and topology should be adopted.
Less robust search procedure. Replacing floating-point weights with bit shifts brings fast computations, but also results in the accuracy drop and difficulty of model training. Specifically, the introduced shift parameters and sign flips should be well regularized to avoid errors in the gradient descent. Besides, since bit-shift operations are extremely sensitive to a large learning rate, the selection and scheduling of the learning rate should also be carefully crafted.
We design a novel NAS technique AutoShiftNet to address the above challenges. Figure 1 shows the overview of our methodology, which consists of three key components:
• Shift-oriented search space. This new search space consists of 8 operations, which are converted from the real domain to bit-shift domain.
• Topology-related search strategy. This new strategy considers the optimal combination of model operations and topology, which can efficiently mitigate the dominance of skip connections.
• Search regularization and stabilisation. Three approaches are proposed to regularize and stabilize the search procedure: applying a shift-adaptive L2 regularization to the shift parameters, resetting the learning rate during search, and re-parameterizing the shift weights.
4 METHODOLOGY
4.1 SHIFT-ORIENTED SEARCH SPACE
Following previous NAS works (e.g., DARTS (Liu et al., 2018b)), we adopt 8 operations as our operation search spaceO: 3×3 and 5×5 dilated convolutions, 3×3 and 5×5 separable convolutions, 3 × 3 max pooling, 3 × 3 average pooling, identity (skip) and the zero1. To construct a shiftoriented search space, we group and transfer these operations into the bit-shift domain and study the corresponding forward and backward pass computations.
Grouping candidate operations. Since not every candidate operation needs to be transferred into the bit-shift version, e.g., the identity and pooling, we first divide 8 candidate operations (excluding zero) into two groups. The first group Oc contains four convolution operations, which involve dense multiplications. The second group Ot contains the remaining operations, which mainly focus on the model topology, such as skip and pooling. The entire search space is denoted as O = {Oc,Ot}. To construct the shift-oriented search space, we just need to transfer operations in Oc into the bit-shift domain, and keep operations in Ot unchanged. Note that this operation group scheme will also be adopted in the topology-related search strategy (Section 4.2).
Replacement of operation weights. As introduced in Section 2.1, quantization of bit-shift networks can be implemented by replacing the floating-point model weights with two parameters: bit shift P and sign flip S. Hence, the weights w of operations in Oc need to be replaced with the trainable parameters (P, S), which is formulated as below:
P̄ = round(P),  S̄ = sign(round(S)),  w = flip(2^{P̄}, S̄)    (2)

where P̄ is the rounded shift matrix and S̄ is the rounded sign matrix. Note that the function sign generates a ternary value, and can be represented as:
sign(s) = { −1  if s ≤ −0.5;   0  if −0.5 < s < 0.5;   +1  if s ≥ 0.5 }    (3)
Designing forward and backward pass. Different from some previous works (Zhou et al., 2017) which just round the trained models into the bit-shift domain, our goal is to directly search and train the model in the shift domain. So we need to design and implement the forward and backward pass of shift operations. With the transferred weights w = flip(2^{P̄}, S̄), the forward pass for convolutions in Oc can be represented as Y = w ∗ X + b = flip(2^{P̄}, S̄) ∗ X + b, where (X, Y) denote the operation input and output, and b denotes the bias. The gradients of the backward pass can be formulated as:
∂L/∂X = (∂L/∂Y)(∂Y/∂X) = (∂L/∂Y) w^T,   ∂L/∂P = (∂L/∂Y)(∂Y/∂w)(∂w/∂P̄)(∂P̄/∂P),   ∂L/∂S = (∂L/∂Y)(∂Y/∂w)(∂w/∂S̄)(∂S̄/∂S),   ∂L/∂b = ∂L/∂Y    (4)
where L denotes the model loss. We use the straight-through estimators (Yin et al., 2019) to compute the derivatives of the round and sign functions as ∂round(x)/∂x ≈ 1 and ∂sign(x)/∂x ≈ 1. For the sign flip function, we have ∂flip(x, s)/∂x ≈ flip(x, s) and ∂flip(x, s)/∂s ≈ 1. With these estimations, we can set ∂P̄/∂P ≈ 1 and ∂S̄/∂S ≈ 1,
1Zero means no connection between two nodes.
and then obtain the following expressions:
∂w/∂S̄ = ∂flip(2^{P̄}, S̄)/∂S̄ ≈ 1

∂w/∂P̄ = ∂flip(2^{P̄}, S̄)/∂P̄ = (∂flip(2^{P̄}, S̄)/∂2^{P̄}) · (∂2^{P̄}/∂P̄) ≈ flip(2^{P̄}, S̄) · 2^{P̄} · ln 2 = w · 2^{P̄} · ln 2    (5)
As a result, the gradients of the trainable parameters (P, S) with respect to the model loss L are:

∂L/∂P ≈ (∂L/∂Y)(∂Y/∂w) · w · 2^{P̄} · ln 2,   ∂L/∂S ≈ (∂L/∂Y)(∂Y/∂w)    (6)

Based on the above constructed forward and backward pass of bit-shift operations, we can achieve searching and training a NAS model in the bit-shift domain.
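A minimal PyTorch emulation of such a shift convolution is sketched below; it relies on autograd with straight-through rounding, which approximates the gradients in Equations (4)-(6) rather than implementing them explicitly, and the initialization ranges are our own assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def ste_round(x):
        """round() in the forward pass, identity gradient in the backward pass."""
        return x + (torch.round(x) - x).detach()

    def ste_sign(x):
        """Ternary sign of Eq. (3) with a straight-through gradient."""
        s = torch.where(x >= 0.5, torch.ones_like(x),
                        torch.where(x <= -0.5, -torch.ones_like(x), torch.zeros_like(x)))
        return x + (s - x).detach()

    class ShiftConv2d(nn.Module):
        """Convolution whose weights are re-parameterized as w = flip(2^P, S)."""

        def __init__(self, c_in, c_out, k, padding=0):
            super().__init__()
            self.P = nn.Parameter(torch.empty(c_out, c_in, k, k).uniform_(-4.0, 0.0))
            self.S = nn.Parameter(torch.empty(c_out, c_in, k, k).uniform_(-1.0, 1.0))
            self.bias = nn.Parameter(torch.zeros(c_out))
            self.padding = padding

        def forward(self, x):
            w = (2.0 ** ste_round(self.P)) * ste_sign(self.S)   # emulated flip(2^P, S)
            return F.conv2d(x, w, self.bias, padding=self.padding)

    y = ShiftConv2d(3, 8, 3, padding=1)(torch.randn(1, 3, 32, 32))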
4.2 TOPOLOGY-RELATED SEARCH STRATEGY
The dominance of skip connections caused by the gradient-based search strategy is a major restriction for applying NAS techniques to quantized networks (Bulat et al., 2020). Besides, ignoring the model topology during a search in some NAS methods also limits the generation of optimal network architectures. Hence, we determine to decouple the operation search and topology search. This search strategy can efficiently suppress the dominance of skip-connections and also improve the performance of searched networks.
Operation search. As introduced in Section 4.1, the 8 candidate operations in the shift-oriented search space can be divided into two groups: Ot contains topology-related operations that can explicitly affect the model topology (e.g., skip), while operations in Oc do not have such impact. Therefore, the operation search spaceO is split into two subspacesO = {Ot,Oc}, and each operation subspace is relaxed to be continuous independently. Then a bi-level optimization is applied to train the model weight w and operation weight α. With the trained α, we retain the operation with the maximum weight in each operation subspace, which can be formulated as:
o_t^{(i,j)} = argmax_{o_t ∈ O_t} α_{o_t}^{(i,j)},   o_c^{(i,j)} = argmax_{o_c ∈ O_c} α_{o_c}^{(i,j)}    (7)
Such a grouped operation scheme can avoid the elimination of potential topology choices during the operation search, which then allows the subsequent topology search to find the optimal topology. Finally, all the retained operations are collected to construct a new operation search space O_N = {o_t^{(i,j)}, o_c^{(i,j)}} on each edge (i, j), which is used for the topology search.
Topology search. The previous operation search step aims to determine the best operations on each edge. In this topology search step, we try to search for the optimal combinations of model edges. It can well prevent skips from dominating the searched model topology.
First, a topology search space is constructed. Following previous works, we restrict each node in the cell supernet to two input edges, so the topology search space E_{x_j} for node x_j can be represented as the set of all possible pairwise combinations of its incoming edges: E_{x_j} = {⟨(i_1, j), (i_2, j)⟩ | 0 < i_1 < i_2 < j}. The topology search space contains C_n^2 = n! / (2!(n−2)!) candidates, where n denotes the number of incoming edges for node x_j. Similar to the operation search, we also relax the topology search space E_{x_j} to be continuous:
β_{x_j}^{c} = exp(β'^{c}_{x_j} / T_β) / Σ_{c' ∈ E_{x_j}} exp(β'^{c'}_{x_j} / T_β)    (8)
where β_{x_j}^{c} is the topology weight that denotes the normalized probability of the edge combination c ∈ E_{x_j}. T_β(t) = T_0 · θ^t is the temperature for architecture annealing, which can efficiently bridge the optimization gap between the supernet and child networks (Xie et al., 2018).
Then, the importance weight γ(i,j) for each edge (i, j) can be computed from those combinations containing this edge, which can be formulated as:
γ^{(i,j)} = Σ_{c ∈ E_{x_j}, (i,j) ∈ c} (1 / N(c)) · β_{x_j}^{c}    (9)
where N(c) is the number of edges in the edge combination c. As a result, the feature map of node x_j can be obtained by summing all the incoming edges weighted by the edge importance weight γ^{(i,j)}:

x_j = Σ_{i<j} γ^{(i,j)} · ō^{(i,j)}(x_i)    (10)
where ō^{(i,j)}(x_i) denotes the mixed operations on edge (i, j) obtained from the operation search. In the topology search, as the number of candidate operations is largely reduced (i.e., 2 in O_N), we can directly use one-level optimization to update the three weights (w, α, β) in the search.
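The sketch below illustrates Equations (8)-(10) for a single node with four incoming edges; the edge outputs are random stand-ins for the mixed operations, and the temperature value is an arbitrary example.

    import itertools
    import torch
    import torch.nn.functional as F

    def edge_importance(beta_raw, combos, num_edges, T=1.0):
        """Eq. (8): temperature softmax over edge combinations; Eq. (9): per-edge weights."""
        beta = F.softmax(beta_raw / T, dim=0)
        # membership[c, e] = 1/N(c) if edge e belongs to combination c, else 0.
        membership = torch.zeros(len(combos), num_edges)
        for c, combo in enumerate(combos):
            for e in combo:
                membership[c, e] = 1.0 / len(combo)
        return beta @ membership

    # Node x_j with 4 incoming edges -> C(4, 2) = 6 pairwise combinations.
    num_edges = 4
    combos = list(itertools.combinations(range(num_edges), 2))
    beta_raw = torch.randn(len(combos), requires_grad=True)
    gamma = edge_importance(beta_raw, combos, num_edges, T=1.0)

    # Eq. (10): node feature as the gamma-weighted sum of the mixed-op outputs per edge.
    edge_outputs = torch.randn(num_edges, 16)   # stand-ins for the mixed operations o(x_i)
    x_j = (gamma.unsqueeze(1) * edge_outputs).sum(dim=0)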
Determining the architecture. After the operation and topology search, we select the edge combination c with the maximal weight in topology weight β to construct the model topology, and then attach to each edge the operation with the maximal weight in the operation weight α.
4.3 SEARCH REGULARIZATION AND STABILISATION
Based on the shift-oriented search space and topology-related search strategy, an efficient bit-shift network architecture can be identified for each specific task automatically. However, the adoption of bit-shift weights makes the architecture search much more unstable and also leads to more difficult model training. The search process usually converges to a sub-optimal solution, sometimes even cannot converge. So we need to regularize and stabilize the optimization of the three trainable weights during search: network weight w, operation weight α and topology weight β.
For the optimization of the network weight w, note that w consists of the bitwise shift P and sign flip S, i.e., w ← {P, S}. We use an adaptive L2 regularization term to regularize the gradient descent of P, which is defined as Σ W^2 = Σ (2^P · S)^2 rather than the conventional formulation Σ (P^2 + S^2). While most weights in a trained model are rarely larger than 1 (i.e., |2^P| < 1), the range of the value of P is also empirically set to be smaller than 0. As a negative parameter, a smaller P instead leads to a larger P^2, which gives a reverse activation to the training loss. Hence, the regularization term should be modified to avoid misguiding the direction of the gradient descent. Formally, the regularized loss L' can be formulated as L' = L + (λ/2) Σ (2^P · S)^2, where L denotes the original model loss and λ is the fixed weight decay. Our experiments in Section 5.4 show that this adaptive L2 regularization improves the accuracy of searched architectures.
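A minimal sketch of this regularizer is given below; whether the decay is applied to the rounded or the continuous shift values, and the concrete value of λ, are assumptions made for illustration.

    import torch

    def adaptive_l2(P, S, lam=3e-4):
        """Shift-adaptive weight decay: (lambda / 2) * sum((2^P * S)^2)."""
        w = (2.0 ** P) * S        # reconstructed weights, penalized instead of P, S directly
        return 0.5 * lam * (w ** 2).sum()

    P = torch.full((8, 8), -3.0, requires_grad=True)
    S = torch.ones(8, 8, requires_grad=True)
    task_loss = torch.tensor(1.0)              # stand-in for the model loss L
    loss = task_loss + adaptive_l2(P, S)       # L' = L + (lambda/2) * sum((2^P S)^2)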
To stabilize the optimization of the operation weight α and topology weight β, in addition to using the temperature regularization in Eq.(8), we also carefully implement a learning rate reset scheme. Since bit-shift networks are extremely sensitive to large learning rates, we need to use a much smaller initial learning rate than that in previous NAS techniques to avoid model convergence failure. Besides, while previous works (Gu et al., 2021) adopt the annealed learning rate from the previous operation search step for following topology search, we find that resetting the learning rate to an initial value again at the start of topology search allows to obtain a better network architecture. Figure 2 shows the learning rate curve in the search with the cosine annealing: the learning rate is reset at the 30th epoch, when the topology search starts.
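The following sketch illustrates the reset scheme with cosine annealing in PyTorch; the dummy model is a placeholder, while the schedule lengths (30 + 40 epochs) and the initial rate of 0.01 follow the description above.

    import torch

    model = torch.nn.Linear(8, 8)      # stand-in for the supernet weights
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=30)

    for epoch in range(70):            # 30 epochs operation search + 40 epochs topology search
        if epoch == 30:                # topology search starts: reset the learning rate
            for group in opt.param_groups:
                group["lr"] = 0.01
            sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=40)
        # ... one epoch of search would run here ...
        sched.step()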
5 EVALUATION
We implement AutoShiftNet with PyTorch. Following previous works (Elhoushi et al., 2021; Zhou et al., 2017), we emulate the precision of an actual bit-shift hardware implementation by rounding the operation input and bias to the 32-bit fixed-point format precision (16-bit for the integer part and 16-bit for the fraction part). The shift parameter P is constrained in [-15, 0], i.e., the absolute value of the model weight is within [2^{−15}, 1], which only needs 4 bits to represent. The model weight also needs an extra bit to denote its sign S.
We run evaluations on CIFAR10, CIFAR100 and ImageNet datasets. We comprehensively compare AutoShiftNet with a variety of state-of-the-art CNN models (e.g., ResNet, VGG, MobileNet, ShuffleNet, GoogleNet, SqueezeNet) and NAS models (e.g., NASNet, AmoebaNet, DARTS, GDAS, DOTS). For fair comparisons, these baseline models are trained in the bit-shift domain, unless otherwise specified.
5.1 EVALUATION ON CIFAR
Search settings. The entire search process on CIFAR 10/100 consists of two steps: operation search for 30 epochs and then topology search for 40 epochs. The network skeleton consists of 8 cells (6 normal cells and 2 reduction cells) with the initial channel size of 16. The learning rate is scheduled from 0.01 following the reset scheme in Section 4.3. The search process takes about 5.5 hours on one GeForce RTX 3090 GPU. However, since we emulate the hardware bit-shift operations with software implementation, the search time actually can be significantly shortened on the dedicated hardware platforms. We will discuss more about the search efficiency in Section 5.5. The best cells searched from CIFAR are shown in Appendix C.
Evaluation settings. The evaluation network is composed of 20 cells, including 18 normal cells and 2 reduction cells. We set the initial channel size to 36 and optimize the network with the RAdam optimizer (Liu et al., 2019), an initial learning rate of 0.01 (cosine-annealed to 0) and a weight decay of 3e-4. Following the setting in DeepShift, the network is trained from scratch with bit-shift weights for 200 epochs. The batch size is set to 128. Cutout and drop-path with a rate of 0.2 are used to prevent overfitting. The training accuracy curves can be found in Appendix D.
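A compact sketch of this training configuration is shown below; it assumes a PyTorch version that ships torch.optim.RAdam, and the model construction, cutout and drop-path wiring are omitted.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

def cifar_training_setup(model, epochs=200):
    # RAdam with the initial learning rate and weight decay reported above.
    optimizer = torch.optim.RAdam(model.parameters(), lr=0.01, weight_decay=3e-4)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs, eta_min=0.0)
    return optimizer, scheduler
```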
Results analysis. Table 1 shows the evaluation results on CIFAR 10/100 datasets. The bit-shift networks searched by AutoShiftNet achieve 95.58% and 76.35% accuracy on CIFAR10 and CIFAR100, respectively. Compared to conventional manually designed CNNs, AutoShiftNet models lead to a significant performance improvement in the bit-shift domain, where the prediction accuracy increases (1.69∼8.07)% on CIFAR10 and (5.71∼18.09)% on CIFAR100. Moreover, the parameter size of searched networks is also much smaller than most conventional CNNs. Hence, in contrast to directly transferring those CNNs into bit-shift counterparts, AutoShiftNet is a more efficient approach to generate high-quality bit-shift networks, with the improved accuracy, reduced parameter size and automatic design process. We also compare AutoShiftNet with state-of-the-art NAS techniques searched in the real domain, and the results show that our method can find out architectures more compatible to the bit-shift domain. We will discuss more details in Section 5.3.
5.2 EVALUATION ON IMAGENET
Evaluation settings. Following previous works (Liu et al., 2018b; Dong & Yang, 2019), we construct the network for ImageNet with the best cells searched from the CIFAR dataset. The evaluation follows the ImageNet-mobile setting, in which the input size is 224×224. The network consists of 14 cells (12 normal cells and 2 reduction cells) with the initial channel size of 46. We train the network in the bit-shift domain for 90 epochs with a batch size of 1024. The RAdam optimizer with an initial learning rate of 0.01 (warming up in the first 5 epochs and cosine annealing to 0) is used. The training accuracy curves can be found in Appendix D.
Results analysis. Table 2 shows the evaluation results on the ImageNet dataset. Although some conventional CNNs (e.g., ResNet) still perform well when converted to the bit-shift domain, many more state-of-the-art CNNs give much lower prediction accuracy or even fail to converge, including VGG16, MobileNet-v2 and ShuffleNet-v2, whose final top-1 accuracy drops to 0.09%, 1.18% and 9.27%, respectively. In contrast, AutoShiftNet converges robustly and achieves 67.17% top-1 accuracy, which is (4.36∼67.07)% higher than conventional CNNs except ResNet50. Note that the high accuracy of ResNet50 comes at the price of a much larger parameter size (5×) and many more operations (7×). Hence, compared to conventional CNNs, the bit-shift networks searched by AutoShiftNet perform better with fewer parameters and operations. The comparison with previous NAS techniques also shows that AutoShiftNet generates more compatible architectures for bit-shift networks. Since all multiplications in the networks are replaced with bit shifts, the number of multiplication operations is 0, which greatly reduces the resource cost and speeds up model inference.

Architecture          Top-1 (%)  Top-5 (%)  Params (M)  Mult (M)  Add (M)
ResNet18              62.25      83.79      11.7        0         987
ResNet50              69.04      88.61      25.8        0         2053
VGG16*                0.10       0.98       138.5       0         8241
GoogleNet             62.81      84.81      6.6         0         752
MobileNet-v2*         40.03      65.13      4.7         0         206
ShuffleNet-v2*        37.32      62.26      7.4         0         306
SqueezeNet1_0         29.08      51.96      3.8         0         412
NASNet                66.24      86.24      5.6         0         317
DARTS-v2              64.98      85.18      4.7         0         287
GDAS                  65.87      85.95      5.3         0         291
DOTS                  66.36      86.23      5.2         0         302
AutoShiftNet (Ours)   67.17      87.38      5.1         0         298

Table 2: Evaluation results on ImageNet. *: the reported numbers are the highest accuracy reached during training; these networks fail to converge.

Architecture    Domain  C10 Acc. (%)  C10 Diff.  C100 Acc. (%)  C100 Diff.
ResNet18        R       94.45         -          72.53          -
                BS      93.20         -1.25      69.11          -3.42
ResNet50        R       95.12         -          74.19          -
                BS      93.89         -1.23      70.65          -3.54
DARTS (v2)      R       96.48         -          78.78          -
                BS      94.80         -1.68      75.17          -3.61
DARTS-          R       95.61         -          76.02          -
                BS      93.87         -1.74      70.85          -5.17
DOTS            R       96.55         -          78.87          -
                BS      95.13         -1.42      75.05          -3.82
AutoShiftNet    R       96.19         -          78.26          -
                BS      95.58         -0.61      76.35          -1.91

Table 3: Accuracy of various architectures on CIFAR10 (C10) and CIFAR100 (C100) in the real (R) and bit-shift (BS) domains.
5.3 REAL-VALUED AND BIT-SHIFT NETWORK COMPARISONS
We compare the accuracy of the same network trained in the real and bit-shift domains, aiming to investigate the accuracy drop of conventional CNNs and NAS models caused by the bit-shift quantization. Table 3 shows the results of some representative networks on the CIFAR datasets. Comparison on ImageNet can be found in Appendix E. We can observe that AutoShiftNet not only achieves the highest accuracy of bit-shift networks, but also leads to the smallest accuracy drop (-0.61% and -1.91%) when the network is quantized from the real to bit-shift domains. In comparison, conventional CNNs have lower accuracy in the real domain, and the accuracy drops more significantly during the bit-shift quantization.
We further compare AutoShiftNet with previous NAS techniques. From Table 3, AutoShiftNet obtains network architectures with better performance in the bit-shift domain, even though their accuracy in the real domain is slightly lower. This indicates that transferring existing NAS models directly to the corresponding bit-shift networks normally yields only sub-optimal solutions. The networks searched by AutoShiftNet are more compatible with the bit-shift quantization.
5.4 ABLATION STUDY
Impact of the shift-oriented search space. The superiority of AutoShiftNet in the bit-shift domain already indicates the effectiveness of the shift-oriented search space, which avoids converging to sub-optimal solutions when searching for bit-shift network architectures. To further validate the importance of this new search space, we replace it with the classical real-valued one in AutoShiftNet and then check the performance of the searched results. Four experiments are run individually with random seeds, where the searched architectures achieve an average accuracy of 94.97% on CIFAR10 and 75.03% on CIFAR100, i.e., 0.63% and 1.32% lower than with the shift-oriented search space. Besides, as a by-product, the shift-oriented search space significantly reduces the resource cost of the search process, as it replaces dense multiplications with much cheaper bit shifts. Hence, AutoShiftNet can generate better bit-shift networks automatically with a much smaller resource budget.
Impact of the topology-related search strategy. We take DARTS as the baseline strategy to derive cell structures from the shift-oriented search space. The result is shown in Figure 3a. It can be seen that the searched cell is dominated by the skip connections and only achieves 69.58% accuracy on
CIFAR100. This is because the drawback of the traditional gradient-based search strategy is amplified in the bit-shift domain. By integrating our topology-related search strategy, this drawback can be effectively mitigated and the searched result is shown in Figure 3b. Since the edge connections are further inspected, the topology-related search strategy can generate more stable architectures and achieve 76.21% accuracy, which is 6.63% improvement over DARTS.
Impact of regularization and stabilization. To evaluate the effectiveness of our modified L2 regularization (L2R) and learning rate reset (LRR) schemes, we compare the performance of networks searched with various scheme combinations (Table 4). We find that while both schemes increase the accuracy of the searched architecture, LRR contributes more than L2R. Figure 4 shows the accuracy curves of the search process on CIFAR10 with or without
LRR. It shows that the LRR scheme significantly improves the model accuracy from 74.58% to 84.68%, which makes it more likely to find better bit-shift networks. Note that at the start of the topology search (the 30th epoch), the model gets pruned and retrained, so the accuracy shows a sharp drop.
5.5 EFFICIENCY ANALYSIS
Given that modern computer architectures use the binary format to store and process data, bitwise operations such as bit shifts and additions are the atomic units for performing complex computations, including multiplication. According to (Agner Fog), a floating-point multiplication takes at least 5× the clock cycles of a bit shift. Besides, compared to the hardware implementation of a bit shift on a circuit, a multiplier consumes at least 9.7× the average power, 1.45× the area and 4.32× the transistors (Asati, 2009). Hence, by replacing floating-point weights with bit-shift and sign-flip operations, the efficiency of architecture search can be significantly improved over previous NAS techniques that search in the real domain. Although our software emulation of AutoShiftNet, in which a bit shift is simulated by multiplying by a power of 2, already takes only 5.5 hours, the actual search cost on dedicated hardware platforms (e.g., FPGA accelerators) would be far lower. We deem that accelerating the NAS process with bit shifts on FPGA boards is a promising research direction. Besides, since the searched architectures are trained as bit-shift networks, the resource cost and time of model training and inference are also reduced. AutoShiftNet also greatly compresses the storage size of searched networks, as it represents model weights with fewer bits (i.e., 5 bits). This promotes the application of NAS models on edge devices, where memory storage and energy consumption are the main constraints.
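The functional equivalence that this emulation relies on, multiplying by a power of two versus shifting, can be illustrated with plain integers; note that the speed and energy benefits only materialize in hardware, not in this Python illustration.

```python
def shift_multiply(x: int, p: int) -> int:
    """Multiply an integer by 2**p using a bit shift (right shift for negative p)."""
    return x << p if p >= 0 else x >> (-p)

assert shift_multiply(3, 4) == 3 * 2 ** 4        # 48: left shift by 4
assert shift_multiply(40, -3) == 40 // 2 ** 3    # 5: right shift divides by 8
```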
6 CONCLUSION AND FUTURE WORK
In this paper, we propose to automatically generate advanced bit-shift networks with a dedicated NAS method AutoShiftNet. We overcome the challenges of applying existing NAS techniques in the bit-shift domain with three innovations: shift-oriented search space, topology-related search strategy and search regularization and stabilization. Experimental results show that AutoShiftNet can search for architectures with higher compatibility for bit-shift operations, and better performance than state-of-the-art CNNs and NAS models.
While replacing model multiplications with bit shifts can efficiently reduce the running cost, it is essentially a coarse-grained representation of model weights, which naturally results in the non-trivial drop of prediction accuracy. To address this, we can further introduce additions into the search space of AutoShiftNet, which are also efficient substitutes of multiplications (Chen et al., 2020) and more importantly, can achieve finer-grained weight manipulation (You et al., 2020). Since current CUDA kernels lack optimization of intensive additions, we leave it as future work.
A ARCHITECTURE SEARCH DETAILS
For the operation search, the official CIFAR training set is divided into two halves, a training set DT and a validation set DV, which are used to optimize the network weights w and the operation weights α, respectively. The topology search directly uses the whole official training set to optimize the topology weight β with one-level optimization, where the initial temperature T0 is set to 10 and decays to 0.02. We adopt the Rectified Adam (RAdam) optimizer with an initial learning rate of 0.01 and a weight decay of 3e-4 to optimize the model weight w, and the Adam optimizer with an initial learning rate of 3e-4 and a weight decay of 1e-3 to optimize the operation weight α and the topology weight β. The learning rate is scheduled with a cosine scheduler following our proposed learning rate reset scheme. The search process consists of 70 epochs with a batch size of 128, including 30 epochs for the operation search and 40 epochs for the topology search.
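A sketch of this optimizer setup and of one plausible temperature schedule is given below; it assumes torch.optim.RAdam is available (PyTorch >= 1.10), and the exponential decay factor is our own instantiation of T_beta(t) = T0 * theta^t.

```python
import torch

def make_search_optimizers(model_weights, op_weights, topo_weights):
    """Separate optimizers for the network weight w and the architecture weights."""
    w_opt = torch.optim.RAdam(model_weights, lr=0.01, weight_decay=3e-4)
    arch_opt = torch.optim.Adam(list(op_weights) + list(topo_weights),
                                lr=3e-4, weight_decay=1e-3)
    return w_opt, arch_opt

def temperature(epoch, T0=10.0, T_min=0.02, total_epochs=40):
    """Exponentially anneal T_beta from T0 toward T_min over the topology search."""
    theta = (T_min / T0) ** (1.0 / max(total_epochs - 1, 1))
    return T0 * theta ** epoch
```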
B ARCHITECTURE EVALUATION DETAILS
Training on CIFAR. We train the evaluation network for 200 epochs with the batch size of 128. The network is optimized by RAdam optimizer with initial learning rate of 0.01 and weight decay of 3e-4. The learning rate is scheduled by a cosine annealing scheduler to 0. Cutout and drop-path with a rate of 0.2 are used for preventing overfitting.
Training on ImageNet. The network is trained for 90 epochs with a batch size of 1024. The RAdam optimizer is adopted, with an initial learning rate of 0.01 and a weight decay of 3e-4. The learning rate is cosine-annealed to 0. Label smoothing and an auxiliary loss tower are used to enhance model training.
C BEST SEARCHED CELL STRUCTURES
Tables 5 and 6 show the best searched architectures for CIFAR10 and CIFAR100. The evaluation on ImageNet adopts the cells searched from CIFAR10 (Table 5).
D TRAINING RESULTS
Figure 5 shows the accuracy traces of training on CIFAR10 and CIFAR100. Figure 6 shows the accuracy traces of training on ImageNet, where (a) takes batch size of 1024 and (b) takes 256. It can be seen that training with batch size of 256 converges earlier and is also more stable, where the final top-1 accuracy is slightly higher (68.67% vs. 67.17%).
E COMPARISON WITH REAL-VALUED COUNTERPARTS ON IMAGENET
Due to resource and time limitations, we select one model each from conventional CNNs (i.e., ResNet18) and previous NAS methods (i.e., DOTS), and compare their accuracy drop from the real-valued counterparts on ImageNet with that of our proposed AutoShiftNet. Table 7 shows the results. The architecture searched by AutoShiftNet achieves the highest accuracy as a bit-shift network, and also has the lowest accuracy drop from its counterpart trained in the real domain. Compared to other conventional CNNs and even most state-of-the-art NAS models, ResNets remain comparatively robust even when trained with bit-shift weights. However, they are still worse than our proposed AutoShiftNet, and more importantly, ResNets are much heavier than NAS-searched models. | 1. What is the focus of the paper regarding Neural Architecture Search?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison with other works like DeepShift?
3. Do you have any concerns about plagiarism or improper citation in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Can you explain the motivation behind adjusting the DART algorithm for shift networks? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes an efficient Neural Architecture Search method adapted to shift neural networks. Combined with a few tools to enable training of shift networks, they show promising results on the CIFAR and ImageNet datasets.
Strengths And Weaknesses
Strength:
The paper's results are promising (e.g. Achieving 95.58% on CIFAR10 with shift-network without pre-training)
A detailed ablation study is included, properly covering all the methods suggested in the paper.
The process of adjusting Network Architecture Search (NAS) to shift-network was innovative and interesting to read.
Weaknesses:
Major:
1.Comparing the paper with the (cited) DeepShift paper raised my concern of plagiarism. I suspect that several sections/paragraphs were copied from the previous paper, with small language/notations modifications.
The first concerning section is in section 4.1. Starting off from "Replacement of operation weights" paragraph, the writing has a very strong resemblance to section 3.2 in the Deep-Shift paper (DeepShift-PS), up until the end of the section. Writing aside, the mathematical equations in both sections evaluated the same thing and arrived at the same conclusion. At best, the section is redundant.
Possibly more concerning is writing in section 4.3 (SEARCH REGULARIZATION AND STABILISATION). Here, the explanation regarding the use of adaptive L2-regularization is very similar to what was presented in section 4 (Implementation) of the DeepShift paper. This particular case was more concerning to me because, on my first read, I considered this part to be a novelty of the paper (Deepshift wasn't properly cited).
Other weaknesses:
I found the presented results to be somewhat confusing. It wasn't immediately clear that all the results in Table 1 use a shift model, and it is still unclear to me what the criterion is, and why the accuracy/parameters are inconsistent with the number of layers (e.g., ResNet20 being orders of magnitude smaller than ResNet18). To my understanding, the accuracy results match the results of DeepShift-PS networks (5-bit weights, 32-bit activations) with quantization-aware training (and not pretrained). According to the DeepShift paper, DeepShift-Q and pretrained models have higher results on this task, so I would appreciate an explanation regarding the choice of baseline.
The baseline results for Imagenet are not consistent with DeepShift-PS results, and I am not sure where the gap is coming from. For example, DeepShift-PS (5W 32A) was reported to achieve 76% on Imagenet, a far cry from the 69% specified for the baseline at table 2 (Which is already above the paper's result).
The main motivation behind the adjustments to the DARTS algorithm was its tendency to select skip operations during the search, but I did not understand why this is the case and why this problem is worse for shift networks.
Clarity, Quality, Novelty And Reproducibility
Clarity:
see: weakness 1
Novelty:
Applying NAS to shift-neural networks is a novel idea. The paper mentions another work that did that (ShiftAddNAS), and clarifies the differences. Other methods included in this paper were not novel, and the paper lacked proper citation of the original papers. (see weakness 1)
Quality/ Impact:
The main potential of the paper is in the algorithm suggested for NAS. While the paper does present some empirical results in support of the search algorithm, I am yet to be convinced that the algorithm produces good enough networks. (see: Weakness 2/3)
Reproducibility:
The paper did not provide code. |
ICLR | Title
Towards Automatic Generation of Advanced Shift Networks
Abstract
Multiplication-free neural networks significantly reduce the time and energy cost on the hardware platform, as the compute-intensive multiplications are replaced with lightweight bit-shift operations. However, existing shift networks are all directly transferred from state-of-the-art convolutional neural networks (CNNs), which lead to non-negligible accuracy drop or even failure of model convergence. To combat this, we propose AutoShiftNet, the first framework tailoring Neural Architecture Search (NAS) to substantially reduce the accuracy gap between bitshift neural networks and their real-valued counterparts. Specifically, we pioneer dragging NAS into a shift-oriented search space and endow it with the robust topology-related search strategy and custom regularization and stabilization. As a result, our AutoShiftNet breaks through the incompatibility of traditional NAS methods for bit-shift neural networks and achieves more desirable performance in terms of accuracy and convergence. Extensive experiments demonstrate that AutoShiftNet generates more advanced model architectures for shift networks, where the accuracy increases by (1.69∼8.07)% on CIFAR10, (5.71∼18.09)% on CIFAR100 and ≥ 4.36% on ImageNet, especially when many conventional CNNs fail to converge on ImageNet with bit-shift weights.
N/A
1 INTRODUCTION
In recent years, large-scale commercial applications based on convolutional neural networks (CNNs) have prompted researchers to design more efficient networks, which can be deployed on platforms with limited resource budgets, such as mobile or IoT devices. Early works utilized network quantization (Cheng et al., 2017) to achieve this goal, by replacing high-precision model parameters with smaller bit-width representations. It can reduce the computational cost of model execution, but also suffer from a non-negligible performance degradation, especially on complex datasets (e.g., ImageNet). To address this issue, recent works (Zhou et al., 2017; Elhoushi et al., 2021) turned to using binary bit shifts rather than simple quantized bits to replace floating-point model parameters.
The key insight of these solutions is that multiplying an element by a power of 2 is mathematically equivalent to a bit-shift operation on it, which is computationally much cheaper and hardware-friendly. Based on this, researchers designed different types of bit-shift techniques (Zhou et al., 2017; Elhoushi et al., 2021; Li et al., 2021; 2022), which show promising overhead reduction in model execution. However, all these solutions only focus on designing advanced weight quantization algorithms to reduce the accuracy gap between shift networks and their real-valued counterparts, where the backbone models are all directly transferred from conventional CNNs, e.g., ResNets (He et al., 2016) and VGG (Simonyan & Zisserman, 2014). Given these CNN models are all designed for the continuous real-valued domain, such direct conversion would restrict the potential of bit-shift techniques, causing less optimal network architecture with a non-trivial accuracy drop.
To overcome this limitation, we aim to design advanced shift networks from another perspective, i.e., searching for network architectures that are more compatible with the bit-shift quantization. This is inspired by the Neural Architecture Search (NAS) technique, which can automatically identify the satisfactory network architecture for a given task. The searched models have shown better performance than carefully hand-crafted models (Liu et al., 2018b; Chen et al., 2019). One straightforward way is to directly transfer NAS models searched from real-valued domains to bit-shift
networks. However, similar to the manually-crafted networks, such strategy also leads to sub-optimal results due to the semantic gap between real and bit-shift domains (Sections 3 and 5.4).
For the first time, we present AutoShiftNet, a novel methodology to automatically search for the optimal bit-shift network architectures directly, aiming to reduce the accuracy drop from the state-of-the-art real-valued models. Moreover, the introduction of bit-shift operations can significantly reduce the searching, training and inference cost, which can facilitate the deployment of large models on dedicated hardware. Specifically, AutoShiftNet contains three components: (1) Shift-oriented search space. While existing NAS techniques mainly focus on the real-valued domain, we are the first to construct a new search space composed of bit-shift operations and design the corresponding forward and backward pass. (2) Topology-related search strategy. Since shift networks tend to have faster gradient descent or even vanishing gradient (Elhoushi et al., 2021), they are more vulnerable to the conventional gradient-based NAS techniques, i.e., searched networks can be dominated by skip connections (Liu et al., 2018a). Therefore, we decouple the search of model operations and topology, which can efficiently mitigate this issue (Gu et al., 2021). (3) Search regularization and stabilization. Given the weight sign freezing effect (Li et al., 2021) and unstable training process, we adopt multiple approaches to regularize and stabilize the search procedure, including shift-adaptive L2 regularization, learning rate reset scheme and shift weight re-parameterization.
We clarify that our work is orthogonal to and different from ShiftAddNAS (You et al., 2022), which aims to search for more accurate models from a hybrid search space with four operations (Attention, Convolution, Shift and Add). Although ShiftAddNAS also considers bit-shift operations, it actually still focuses on multiplication operations as they can provide much higher prediction accuracy. The model searched by ShiftAddNAS is still dominated by multiplications while the shift operations only take a very small part (ShiftAddNAS-T1↑ contains 7.1G multiplications and 8.5G additions, but only 1.4G shifts). Such model cannot be regarded as an actual shiftnet, and is difficult to be deployed on resource-constrained mobile devices, as the number of multiply-add operations is normally restricted below 600M for an ImageNet-mobile setting (Dong & Yang, 2019). In contrast, AutoShiftNet totally removes multiplications and only considers efficient bit-shifts and additions. The searched model only contains about 300M additions, so that it is more compatible for the bit-shift domain and also more practical for real-world applications on resource-restricted edge devices.
The networks searched by AutoShiftNet show much better performance than conventional CNNs in the bit-shift domain, especially when many CNNs fail to converge on large datasets (e.g., ImageNet) with bit-shift weights. AutoShiftNet achieves an accuracy improvement of (1.69∼8.07)% on CIFAR10, (5.71∼18.09)% on CIFAR100 and ≥ 4.36% on ImageNet, with more compact parameter sizes and smaller numbers of operation computations. Compared with previous NAS methods, networks from AutoShiftNet are more compatible with the bit-shift domain, which lead to a smaller accuracy drop from the complex real-valued models. More importantly, AutoShiftNet consumes less computing resources and time as it directly searches with the bit-shift weights.
2 PRELIMINARIES
2.1 BIT-SHIFT NETWORK QUANTIZATION
Bit-shift quantization techniques (Zhou et al., 2017; Elhoushi et al., 2021) round the floating-point model weights to powers of 2, so that the intensive multiplications on weights can be carried out with cheaper binary bit shifts. Formally, given a number x and a rounded model weight 2^p, their multiplication is mathematically equivalent to shifting x by p bits. Since a model weight can be either positive or negative for input feature extraction, while 2^p is always positive, a sign flip function flip(w, s) is introduced to represent the sign of each weight value. This operation is achieved with a ternary sign operator s ∈ {−1, 0, +1}. Finally, we can replace the weight matrix W in the model as W = flip(2^P, S), where P is the shift matrix and S is the sign matrix. Both bit shift and sign flip are computationally cheap, as the former is a fundamental operation in modern processors and the latter just computes the 2's complement of a number. Therefore, such weight replacement can efficiently reduce the computation cost of CNN model execution.
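As a concrete illustration, the effective weight matrix can be reconstructed from a shift matrix P and a ternary sign matrix S as below; numpy is used only for brevity and the values are arbitrary.

```python
import numpy as np

def flip(x, s):
    """Apply the ternary sign flip s in {-1, 0, +1} to x."""
    return s * x

P = np.array([[-1, -3], [-2, 0]])    # shift exponents
S = np.array([[1, -1], [0, 1]])      # ternary signs
W = flip(2.0 ** P, S)                # effective weights s * 2^p
# W == [[ 0.5, -0.125],
#       [ 0.0,  1.0  ]]
```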
2.2 NEURAL ARCHITECTURE SEARCH
NAS has gained great popularity in recent years, due to its capability of building machine learning pipelines with high efficiency and automation. Early methods used reinforcement learning (Zoph & Le, 2016) and evolutionary algorithms (Real et al., 2019) to search for optimal network architectures for a given task, which normally takes thousands of GPU hours. Recent works tend to use a gradient-based strategy (Liu et al., 2018b) that reduces the search cost to a few hours. Such methods usually aim at searching for optimal cell structures, since stacking cells into a model is more efficient than searching the whole network architecture. Formally, a cell is represented as a directed acyclic graph (i.e., a supernet) with N nodes {x_i}_{i=1}^N, including two inputs, one output and several intermediate nodes. The j-th intermediate node x_j connects to all previous nodes x_i through the edge (i, j). The operation choice over the edge (i, j) can be relaxed as ō^(i,j)(x_i) = Σ_{o∈O} α_o^(i,j) o(x_i), where O denotes the search space of candidate operations and α_o^(i,j) is the trainable weight for each operation on the edge (i, j), normalized with the softmax function. Therefore, the feature map of node x_j can be computed by summing the results from all its predecessors x_i: x_j = Σ_{i<j} ō^(i,j)(x_i). Let L_train and L_val denote the model loss on the training and validation sets. A bi-level optimization is applied to the operation weight α and the network weight w as:
min_α L_val(w*(α), α),   s.t.   w*(α) = argmin_w L_train(w, α)    (1)
The final model architecture can be derived from the trained operation weight α by retaining operations with the largest weight and pruning edges with the smaller weight.
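For concreteness, a minimal sketch of this continuous relaxation for a single edge is shown below; the two candidate operations are placeholders rather than the full search space described in Section 4.1.

```python
import torch
import torch.nn.functional as F

class MixedOp(torch.nn.Module):
    """Weighted sum of candidate operations on one edge (i, j)."""

    def __init__(self, ops):
        super().__init__()
        self.ops = torch.nn.ModuleList(ops)
        # one trainable architecture weight alpha per candidate operation
        self.alpha = torch.nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# e.g. two placeholder candidates on one edge of the supernet
edge = MixedOp([torch.nn.Identity(), torch.nn.Conv2d(16, 16, 3, padding=1)])
```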
3 OVERVIEW OF AutoShiftNet
The main idea is to automatically generate well-performing bit-shift networks with high efficiency. Challenges arise when we apply existing NAS techniques to searching for bit-shift networks:
Design of shift-oriented search space. Given that existing NAS methods mainly focus on the real-valued models, their search spaces are also designed for real domain, which is not applicable to bit-shift models. Specifically, a conventional NAS search space normally consists of multiple manually defined operations, such as dilated convolutions and separable convolutions. To build the shift-oriented search space, we need to transfer these operations from the real domain into the bit-shift domain, in which the forward pass and backward pass need to be carefully designed.
Dominance of skip connections. While most recent NAS methods adopt the gradient-based search strategy (i.e., DARTS (Liu et al., 2018b)), it has a big drawback: the searched networks are easily dominated by skip connections (Chen et al., 2019), as the strategy prefers the fastest way of gradient descent. Unfortunately, searching in the bit-shift domain inherits and amplifies this drawback, which would lead to the "cell collapsing" of searched architectures. Hence, a new search strategy considering both the model operations and the topology should be adopted.
Less robust search procedure. Replacing floating-point weights with bit shifts brings fast computations, but also results in the accuracy drop and difficulty of model training. Specifically, the introduced shift parameters and sign flips should be well regularized to avoid errors in the gradient descent. Besides, since bit-shift operations are extremely sensitive to a large learning rate, the selection and scheduling of the learning rate should also be carefully crafted.
We design a novel NAS technique AutoShiftNet to address the above challenges. Figure 1 shows the overview of our methodology, which consists of three key components:
• Shift-oriented search space. This new search space consists of 8 operations, which are converted from the real domain to bit-shift domain.
• Topology-related search strategy. This new strategy considers the optimal combination of model operations and topology, which can efficiently mitigate the dominance of skip connections.
• Search regularization and stabilisation. We regularize and stabilize the search procedure by applying a shift-adaptive L2 regularization to the shift parameters and resetting the learning rate during search.
4 METHODOLOGY
4.1 SHIFT-ORIENTED SEARCH SPACE
Following previous NAS works (e.g., DARTS (Liu et al., 2018b)), we adopt 8 operations as our operation search space O: 3×3 and 5×5 dilated convolutions, 3×3 and 5×5 separable convolutions, 3×3 max pooling, 3×3 average pooling, identity (skip) and the zero operation¹. To construct a shift-oriented search space, we group and transfer these operations into the bit-shift domain and study the corresponding forward and backward pass computations.
Grouping candidate operations. Since not every candidate operation needs to be transferred into the bit-shift version, e.g., the identity and pooling, we first divide 8 candidate operations (excluding zero) into two groups. The first group Oc contains four convolution operations, which involve dense multiplications. The second group Ot contains the remaining operations, which mainly focus on the model topology, such as skip and pooling. The entire search space is denoted as O = {Oc,Ot}. To construct the shift-oriented search space, we just need to transfer operations in Oc into the bit-shift domain, and keep operations in Ot unchanged. Note that this operation group scheme will also be adopted in the topology-related search strategy (Section 4.2).
Replacement of operation weights. As introduced in Section 2.1, quantization of bit-shift networks can be implemented by replacing the floating-point model weights with two parameters: bit shift P and sign flip S. Hence, the weights w of operations in Oc need to be replaced with the trainable parameters (P, S), which is formulated as below:
P̄ = round(P),   S̄ = sign(round(S)),   w = flip(2^P̄, S̄)    (2)
where P̄ is the rounded shift matrix and S̄ is the rounded sign matrix. Note that the function sign generates a ternary value, and can be represented as:
sign(s) = −1 if s ≤ −0.5;   0 if −0.5 < s < 0.5;   +1 if s ≥ 0.5    (3)
Designing forward and backward pass. Different from some previous works (Zhou et al., 2017) which just round trained models into the bit-shift domain, our goal is to directly search and train the model in the shift domain, so we need to design and implement the forward and backward pass of shift operations. With the transferred weights w = flip(2^P̄, S̄), the forward pass for convolutions in Oc can be represented as Y = w ∗ X + b = flip(2^P̄, S̄) ∗ X + b, where (X, Y) denote the operation input and output, and b denotes the bias. The gradients of the backward pass can be formulated as:
∂L/∂X = (∂L/∂Y)(∂Y/∂X) = (∂L/∂Y) w^T,   ∂L/∂P = (∂L/∂Y)(∂Y/∂w)(∂w/∂P̄)(∂P̄/∂P),   ∂L/∂S = (∂L/∂Y)(∂Y/∂w)(∂w/∂S̄)(∂S̄/∂S),   ∂L/∂b = ∂L/∂Y    (4)
where L denotes the model loss. We use straight-through estimators (Yin et al., 2019) to compute the derivatives of the round and sign functions as ∂round(x)/∂x ≈ 1 and ∂sign(x)/∂x ≈ 1. For the sign flip function, we have ∂flip(x, s)/∂x ≈ flip(x, s) and ∂flip(x, s)/∂s ≈ 1. With these estimations, we can set ∂P̄/∂P ≈ 1 and ∂S̄/∂S ≈ 1,
¹ Zero means no connection between two nodes.
and then obtain the following expressions:
∂w/∂S̄ = ∂flip(2^P̄, S̄)/∂S̄ ≈ 1
∂w/∂P̄ = ∂flip(2^P̄, S̄)/∂P̄ = (∂flip(2^P̄, S̄)/∂2^P̄) · (∂2^P̄/∂P̄) ≈ flip(2^P̄, S̄) · 2^P̄ ln 2 = w · 2^P̄ ln 2    (5)
As a result, the gradients of the trainable parameters (P, S) with respect to the model loss L are: ∂L ∂P ≈ ∂L ∂Y ∂Y ∂w w2P ln2, ∂L ∂S ≈ ∂L ∂Y ∂Y ∂w (6) Based on the above constructed forward and backward pass of bit-shift operations, we can achieve searching and training a NAS model in the bit-shift domain.
4.2 TOPOLOGY-RELATED SEARCH STRATEGY
The dominance of skip connections caused by the gradient-based search strategy is a major restriction for applying NAS techniques to quantized networks (Bulat et al., 2020). Besides, ignoring the model topology during a search in some NAS methods also limits the generation of optimal network architectures. Hence, we determine to decouple the operation search and topology search. This search strategy can efficiently suppress the dominance of skip-connections and also improve the performance of searched networks.
Operation search. As introduced in Section 4.1, the 8 candidate operations in the shift-oriented search space can be divided into two groups: Ot contains topology-related operations that can explicitly affect the model topology (e.g., skip), while operations in Oc do not have such impact. Therefore, the operation search space O is split into two subspaces O = {Ot, Oc}, and each operation subspace is relaxed to be continuous independently. Then a bi-level optimization is applied to train the model weight w and the operation weight α. With the trained α, we retain the operation with the maximum weight in each operation subspace, which can be formulated as:
o_t^(i,j) = argmax_{o_t ∈ Ot} α_{o_t}^(i,j),   o_c^(i,j) = argmax_{o_c ∈ Oc} α_{o_c}^(i,j)    (7)
Such a grouped operation scheme avoids eliminating potential topology choices during the operation search, which then allows the subsequent topology search to find the optimal topology. Finally, all the retained operations are collected to construct a new operation search space O_N = {o_t^(i,j), o_c^(i,j)} on each edge (i, j), which is used for the topology search.
Topology search. The previous operation search step aims to determine the best operations on each edge. In this topology search step, we try to search for the optimal combinations of model edges. It can well prevent skips from dominating the searched model topology.
First, a topology search space is constructed. Following previous works, we restrict two input edges for each node in the cell supernet, so the topology search space E_{x_j} for node x_j can be represented as the set of all possible pairwise combinations of its incoming edges: E_{x_j} = {⟨(i_1, j), (i_2, j)⟩ | 0 < i_1 < i_2 < j}. The topology search space contains C(n, 2) = n! / (2!(n−2)!) candidates, where n denotes the number of incoming edges of node x_j. Similar to the operation search, we also relax the topology search space E_{x_j} to be continuous:
β_{x_j}^c = exp(β′^c_{x_j} / T_β) / Σ_{c′ ∈ E_{x_j}} exp(β′^{c′}_{x_j} / T_β)    (8)
where β_{x_j}^c is the topology weight that denotes the normalized probability of the edge combination c ∈ E_{x_j}, and T_β(t) = T_0 · θ^t is the temperature for architecture annealing, which can efficiently bridge the optimization gap between the supernet and child networks (Xie et al., 2018).
Then, the importance weight γ(i,j) for each edge (i, j) can be computed from those combinations containing this edge, which can be formulated as:
γ^(i,j) = Σ_{c ∈ E_{x_j}, (i,j) ∈ c} (1 / N(c)) · β_{x_j}^c    (9)
where N(c) is the number of edges in the edge combination c. As a result, the feature map of node xj can be obtained by summing all the incoming edges weighted by the edge importance weight γ(i,j):
x_j = Σ_{i<j} γ^(i,j) · o^(i,j)(x_i)    (10)
where o(i,j)(xi) denotes the mixed operations on edge (i, j) obtained from the operation search. In the topology search, as the number of candidate operations is largely reduced (i.e., 2 in ON ), we can directly use the one-level optimization to update three weights (w,α, β) in the search.
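The edge-weighting in Eqs. (8)-(9) can be sketched as follows for a single node; the variable names and the way combinations are enumerated are illustrative.

```python
import itertools
import torch
import torch.nn.functional as F

def edge_importance(beta_logits, combos, num_edges, temperature=1.0):
    """Compute one importance weight per incoming edge from the combination logits.

    beta_logits holds one logit per pairwise edge combination of the node;
    Eq. (8) normalizes them with a temperature softmax and Eq. (9) spreads
    each combination weight evenly over the edges it contains.
    """
    beta = F.softmax(beta_logits / temperature, dim=0)           # Eq. (8)
    gamma = torch.zeros(num_edges)
    for weight, combo in zip(beta, combos):
        for edge in combo:
            gamma[edge] = gamma[edge] + weight / len(combo)       # Eq. (9)
    return gamma

# A node with 3 incoming edges has C(3, 2) = 3 pairwise combinations.
combos = list(itertools.combinations(range(3), 2))
beta_logits = torch.zeros(len(combos))
gamma = edge_importance(beta_logits, combos, num_edges=3, temperature=10.0)
```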
Determining the architecture. After the operation and topology search, we select the edge combination c with the maximal weight in topology weight β to construct the model topology, and then attach to each edge the operation with the maximal weight in the operation weight α.
4.3 SEARCH REGULARIZATION AND STABILISATION
Based on the shift-oriented search space and topology-related search strategy, an efficient bit-shift network architecture can be identified for each specific task automatically. However, the adoption of bit-shift weights makes the architecture search much more unstable and also leads to more difficult model training. The search process usually converges to a sub-optimal solution, sometimes even cannot converge. So we need to regularize and stabilize the optimization of the three trainable weights during search: network weight w, operation weight α and topology weight β.
For the optimization of the network weight w, note that w consists of the bitwise shift P and the sign flip S, i.e., w ← {P, S}. We use an adaptive L2 regularization term for the gradient descent of P, defined as ∑ W² = ∑ (2^P S)² rather than the conventional formulation ∑ (P² + S²). Since most weights in a trained model are rarely larger than 1 (i.e., |2^P| < 1), the value of P is also empirically set to be smaller than 0. For such a negative parameter, a smaller P (i.e., a smaller weight magnitude) leads to a larger P², which gives a reversed signal to the training loss. Hence, the regularization term has to be modified to avoid misguiding the direction of the gradient descent. Formally, the regularized loss L′ can be formulated as L′ = L + (λ/2) ∑ (2^P S)², where L denotes the original model loss and λ is the fixed weight decay. Our experiments in Section 5.4 show that this adaptive L2 regularization improves the accuracy of searched architectures.
To stabilize the optimization of the operation weight α and topology weight β, in addition to using the temperature regularization in Eq.(8), we also carefully implement a learning rate reset scheme. Since bit-shift networks are extremely sensitive to large learning rates, we need to use a much smaller initial learning rate than that in previous NAS techniques to avoid model convergence failure. Besides, while previous works (Gu et al., 2021) adopt the annealed learning rate from the previous operation search step for following topology search, we find that resetting the learning rate to an initial value again at the start of topology search allows to obtain a better network architecture. Figure 2 shows the learning rate curve in the search with the cosine annealing: the learning rate is reset at the 30th epoch, when the topology search starts.
5 EVALUATION
We implement AutoShiftNet with Pytorch. Following previous works (Elhoushi et al., 2021; Zhou et al., 2017), we emulate the precision of an actual bit-shift hardware implementation by rounding the operation input and bias to the 32-bit fixed-point format precision (16-bit for the integer part and 16-bit for the fraction part). The shift parameter P is constrained in [-15, 0], i.e., the absolute value of the model weight is within [2−15, 1], which only needs 4 bits to represent. The model weight also needs an extra bit to denote its sign S.
We run evaluations on CIFAR10, CIFAR100 and ImageNet datasets. We comprehensively compare AutoShiftNet with a variety of state-of-the-art CNN models (e.g., ResNet, VGG, MobileNet, ShuffleNet, GoogleNet, SqueezeNet) and NAS models (e.g., NASNet, AmoebaNet, DARTS, GDAS, DOTS). For fair comparisons, these baseline models are trained in the bit-shift domain, unless otherwise specified.
5.1 EVALUATION ON CIFAR
Search settings. The entire search process on CIFAR 10/100 consists of two steps: operation search for 30 epochs and then topology search for 40 epochs. The network skeleton consists of 8 cells (6 normal cells and 2 reduction cells) with the initial channel size of 16. The learning rate is scheduled from 0.01 following the reset scheme in Section 4.3. The search process takes about 5.5 hours on one GeForce RTX 3090 GPU. However, since we emulate the hardware bit-shift operations with software implementation, the search time actually can be significantly shortened on the dedicated hardware platforms. We will discuss more about the search efficiency in Section 5.5. The best cells searched from CIFAR are shown in Appendix C.
Evaluation settings. The evaluation network is composed of 20 cells, including 18 normal cells and 2 reduction cells. We set the initial channel size to 36 and optimize the network with the RAdam optimizer (Liu et al., 2019), an initial learning rate of 0.01 (cosine-annealed to 0) and a weight decay of 3e-4. Following the setting in DeepShift, the network is trained from scratch with bit-shift weights for 200 epochs. The batch size is set to 128. Cutout and drop-path with a rate of 0.2 are used to prevent overfitting. The training accuracy curves can be found in Appendix D.
Results analysis. Table 1 shows the evaluation results on CIFAR 10/100 datasets. The bit-shift networks searched by AutoShiftNet achieve 95.58% and 76.35% accuracy on CIFAR10 and CIFAR100, respectively. Compared to conventional manually designed CNNs, AutoShiftNet models lead to a significant performance improvement in the bit-shift domain, where the prediction accuracy increases (1.69∼8.07)% on CIFAR10 and (5.71∼18.09)% on CIFAR100. Moreover, the parameter size of searched networks is also much smaller than most conventional CNNs. Hence, in contrast to directly transferring those CNNs into bit-shift counterparts, AutoShiftNet is a more efficient approach to generate high-quality bit-shift networks, with the improved accuracy, reduced parameter size and automatic design process. We also compare AutoShiftNet with state-of-the-art NAS techniques searched in the real domain, and the results show that our method can find out architectures more compatible to the bit-shift domain. We will discuss more details in Section 5.3.
5.2 EVALUATION ON IMAGENET
Evaluation settings. Following previous works (Liu et al., 2018b; Dong & Yang, 2019), we construct the network for ImageNet with the best cells searched from the CIFAR dataset. The evaluation follows the ImageNet-mobile setting, in which the input size is 224×224. The network consists of 14 cells (12 normal cells and 2 reduction cells) with the initial channel size of 46. We train the network in the bit-shift domain for 90 epochs with a batch size of 1024. The RAdam optimizer with an initial learning rate of 0.01 (warming up in the first 5 epochs and cosine annealing to 0) is used. The training accuracy curves can be found in Appendix D.
Results analysis. Table 2 shows the evaluation results on the ImageNet dataset. Although some conventional CNNs (e.g., ResNet) still perform well when converted to the bit-shift domain, many more state-of-the-art CNNs give much lower prediction accuracy or even fail to converge, including VGG16, MobileNet-v2 and ShuffleNet-v2, whose final top-1 accuracy drops to 0.09%, 1.18% and 9.27%, respectively. In contrast, AutoShiftNet converges robustly and achieves 67.17% top-1 accuracy, which is (4.36∼67.07)% higher than conventional CNNs except ResNet50. Note that the high accuracy of ResNet50 comes at the price of a much larger parameter size (5×) and many more operations (7×). Hence, compared to conventional CNNs, the bit-shift networks searched by AutoShiftNet perform better with fewer parameters and operations. The comparison with previous NAS techniques also shows that AutoShiftNet generates more compatible architectures for bit-shift networks. Since all multiplications in the networks are replaced with bit shifts, the number of multiplication operations is 0, which greatly reduces the resource cost and speeds up model inference.

Architecture          Top-1 (%)  Top-5 (%)  Params (M)  Mult (M)  Add (M)
ResNet18              62.25      83.79      11.7        0         987
ResNet50              69.04      88.61      25.8        0         2053
VGG16*                0.10       0.98       138.5       0         8241
GoogleNet             62.81      84.81      6.6         0         752
MobileNet-v2*         40.03      65.13      4.7         0         206
ShuffleNet-v2*        37.32      62.26      7.4         0         306
SqueezeNet1_0         29.08      51.96      3.8         0         412
NASNet                66.24      86.24      5.6         0         317
DARTS-v2              64.98      85.18      4.7         0         287
GDAS                  65.87      85.95      5.3         0         291
DOTS                  66.36      86.23      5.2         0         302
AutoShiftNet (Ours)   67.17      87.38      5.1         0         298

Table 2: Evaluation results on ImageNet. *: the reported numbers are the highest accuracy reached during training; these networks fail to converge.

Architecture    Domain  C10 Acc. (%)  C10 Diff.  C100 Acc. (%)  C100 Diff.
ResNet18        R       94.45         -          72.53          -
                BS      93.20         -1.25      69.11          -3.42
ResNet50        R       95.12         -          74.19          -
                BS      93.89         -1.23      70.65          -3.54
DARTS (v2)      R       96.48         -          78.78          -
                BS      94.80         -1.68      75.17          -3.61
DARTS-          R       95.61         -          76.02          -
                BS      93.87         -1.74      70.85          -5.17
DOTS            R       96.55         -          78.87          -
                BS      95.13         -1.42      75.05          -3.82
AutoShiftNet    R       96.19         -          78.26          -
                BS      95.58         -0.61      76.35          -1.91

Table 3: Accuracy of various architectures on CIFAR10 (C10) and CIFAR100 (C100) in the real (R) and bit-shift (BS) domains.
5.3 REAL-VALUED AND BIT-SHIFT NETWORK COMPARISONS
We compare the accuracy of the same network trained in the real and bit-shift domains, aiming to investigate the accuracy drop of conventional CNNs and NAS models caused by the bit-shift quantization. Table 3 shows the results of some representative networks on the CIFAR datasets. Comparison on ImageNet can be found in Appendix E. We can observe that AutoShiftNet not only achieves the highest accuracy of bit-shift networks, but also leads to the smallest accuracy drop (-0.61% and -1.91%) when the network is quantized from the real to bit-shift domains. In comparison, conventional CNNs have lower accuracy in the real domain, and the accuracy drops more significantly during the bit-shift quantization.
We further compare AutoShiftNet with previous NAS techniques. From Table 3, AutoShiftNet obtains network architectures with better performance in the bit-shift domain, even though their accuracy in the real domain is slightly lower. This indicates that transferring existing NAS models directly to the corresponding bit-shift networks normally yields only sub-optimal solutions. The networks searched by AutoShiftNet are more compatible with the bit-shift quantization.
5.4 ABLATION STUDY
Impact of the shift-oriented search space. The superiority of AutoShiftNet in the bit-shift domain actually has indicated the effectiveness of the shift-oriented search space, which avoids converging to sub-optimal solutions for searching bit-shift network architectures. To further validate the importance of this new search space, we replace the search space with the classical real-valued one in AutoShiftNet, and then check the performance of the searched results. Four experiments are run individually with random seeds, where the searched architectures achieve average accuracy of 94.97% on CIFAR10 and 75.03% on CIFAR100. It drops 0.63% and 1.32% from that with the shift-oriented search space. Besides, as a by-product, the shift-oriented search space significantly reduces the resource cost in the search process, as it replaces dense multiplications with much cheaper bit shifts. Hence, AutoShiftNet can generate better bit-shift networks automatically with much less resource budget.
Impact of the topology-related search strategy. We take DARTS as the baseline strategy to derive cell structures from the shift-oriented search space. The result is shown in Figure 3a. It can be seen that the searched cell is dominated by the skip connections and only achieves 69.58% accuracy on
CIFAR100. This is because the drawback of the traditional gradient-based search strategy is amplified in the bit-shift domain. By integrating our topology-related search strategy, this drawback can be effectively mitigated and the searched result is shown in Figure 3b. Since the edge connections are further inspected, the topology-related search strategy can generate more stable architectures and achieve 76.21% accuracy, which is 6.63% improvement over DARTS.
Impact of regularization and stabilization. To evaluate the effectiveness of our modified L2 regularization (L2R) and learning rate reset (LRR) schemes, we compare the performance of networks searched with various scheme combinations (Table 4). We find that while both schemes increase the accuracy of the searched architecture, LRR contributes more than L2R. Figure 4 shows the accuracy curves of the search process on CIFAR10 with or without
LRR. It shows that LRR scheme significantly improves the model accuracy from 74.58% to 84.68%, which makes it more possible to search for better bit-shift networks. Note that at the start of topology search (the 30th epoch), the model gets pruned and retrained, so the accuracy has a sharp drop.
5.5 EFFICIENCY ANALYSIS
Given that modern computer architectures use the binary format to store and calculate data, bitwise operations like bit shift and addition are the atomic units for performing complex computations, including the multiplication. According to (Agner Fog), the floating-point multiplication takes at least 5× of clock cycles than the bit shift. Besides, compared to the hardware implementation of bit shift on the circuit, the multiplier takes at least 9.7× of average power, 1.45× of area and 4.32× of transistors (Asati, 2009). Hence, by replacing floating-point weights with bit shift and sign flip operations, the efficiency of architecture search can be significantly improved over previous NAS techniques that search in the real domain. While our software emulation of AutoShiftNet just takes 5.5 hours, where the bit shift is simulated by multiplying the power of 2, the actual search cost on the dedicated hardware platforms (e.g., FPGA accelerators) would be largely decreased. We deem that accelerating the NAS process with bit shift on the FPGA board is a promising research direction. Besides, since the searched architectures are trained as bit-shift networks, it also reduces the resource cost and time of model training and inference. AutoShiftNet also greatly compresses the storage size of searched networks, as it represents model weights with fewer bits (i.e., 5 bits). This promotes the applications of NAS models on the edge devices, where the memory storage and energy consumption are the main constraints.
6 CONCLUSION AND FUTURE WORK
In this paper, we propose to automatically generate advanced bit-shift networks with a dedicated NAS method AutoShiftNet. We overcome the challenges of applying existing NAS techniques in the bit-shift domain with three innovations: shift-oriented search space, topology-related search strategy and search regularization and stabilization. Experimental results show that AutoShiftNet can search for architectures with higher compatibility for bit-shift operations, and better performance than state-of-the-art CNNs and NAS models.
While replacing model multiplications with bit shifts can efficiently reduce the running cost, it is essentially a coarse-grained representation of model weights, which naturally results in the non-trivial drop of prediction accuracy. To address this, we can further introduce additions into the search space of AutoShiftNet, which are also efficient substitutes of multiplications (Chen et al., 2020) and more importantly, can achieve finer-grained weight manipulation (You et al., 2020). Since current CUDA kernels lack optimization of intensive additions, we leave it as future work.
A ARCHITECTURE SEARCH DETAILS
For the operation search, the official CIFAR training dataset is divided into two halves: training set DT and validation set DV , which are used to optimize network weights w and operation weights α, respectively. The topology search directly uses the whole official training set to optimize the topology weight β with one-level optimization, where the initial temperature T0 is set as 10 and decay to 0.02. We adopt Rectified Adam (RAdam) optimizer with initial learning rate of 0.01 and weight decay of 3e-4 to optimize model weight w and Adam optimizer with initial learning rate of 3e-4 and weight decay of 1e-3 to optimize operation weight α and topology weight β. The learning rate is scheduled with cosine scheduler following our proposed learning rate reset scheme. The search process consists of 70 epochs with the batch size of 128, including 30 epochs for operation search and 40 epochs for topology search.
B ARCHITECTURE EVALUATION DETAILS
Training on CIFAR. We train the evaluation network for 200 epochs with the batch size of 128. The network is optimized by RAdam optimizer with initial learning rate of 0.01 and weight decay of 3e-4. The learning rate is scheduled by a cosine annealing scheduler to 0. Cutout and drop-path with a rate of 0.2 are used for preventing overfitting.
Training on ImageNet. The network is trained for 90 epochs with a batch size of 1024. The RAdam optimizer is adopted, with an initial learning rate of 0.01 and a weight decay of 3e-4. The learning rate is cosine-annealed to 0. Label smoothing and an auxiliary loss tower are used to enhance model training.
C BEST SEARCHED CELL STRUCTURES
Tables 5 and 6 show the best searched architectures for CIFAR10 and CIFAR100. The evaluation on ImageNet adopts the cells searched from CIFAR10 (Table 5).
D TRAINING RESULTS
Figure 5 shows the accuracy traces of training on CIFAR10 and CIFAR100. Figure 6 shows the accuracy traces of training on ImageNet, where (a) takes batch size of 1024 and (b) takes 256. It can be seen that training with batch size of 256 converges earlier and is also more stable, where the final top-1 accuracy is slightly higher (68.67% vs. 67.17%).
E COMPARISON WITH REAL-VALUED COUNTERPARTS ON IMAGENET
Due to resource and time limitations, we select one model each from conventional CNNs (i.e., ResNet18) and previous NAS methods (i.e., DOTS), and compare their accuracy drop from the real-valued counterparts on ImageNet with that of our proposed AutoShiftNet. Table 7 shows the results. The architecture searched by AutoShiftNet achieves the highest accuracy as a bit-shift network, and also has the lowest accuracy drop from its counterpart trained in the real domain. Compared to other conventional CNNs and even most state-of-the-art NAS models, ResNets remain comparatively robust even when trained with bit-shift weights. However, they are still worse than our proposed AutoShiftNet, and more importantly, ResNets are much heavier than NAS-searched models. | 1. What is the focus and contribution of the paper on bit-shift network architecture?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and lack of clarity in certain statements and equations?
3. Do you have any concerns about the results presented in Table 2, specifically regarding the comparison with full-precision networks?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
A methodology to automatically search for optimal bit-shift network (Shift-Net) architectures is proposed. The proposed method consists of three components: 1- It uses bit-shift operations when designing the search space (this idea is not new, as mentioned in the paper). 2- It decouples model operation and topology search to address the dominance of skip connections in NAS (this idea is also not new, as mentioned in the paper). 3- It uses multiple search regularization and stabilization techniques to address the weight sign freezing effect and stabilize the training process (similar ideas can be found in the DeepShift paper).
Overall, the proposed method achieves better accuracy performance on various datasets while using fewer parameters.
Strengths And Weaknesses
Strengths: The paper is well written (it is easy to follow). The general idea is very interesting and the results are good. It also provides numerous experiments, which is a plus.
Weaknesses: The paper lacks novelty. Each proposed component has already been proposed as stated in the paper. So the question is, what is exactly novel about this work? I think this is one of the most important issues of the paper. You should be able to clearly explain the novelty of your work in the paper.
There are some statements and equations that need to be justified:
What is the justification for this statement? "Besides, since bit-shift operations are extremely sensitive to a large learning rate, the selection and scheduling of the learning rate should also be carefully crafted"
What do the authors mean by "adaptive" in their proposed regularization method?
In eq. 3, what was the justification for the +/-0.5 as the threshold for the sign function?
In eq.3, the sign value, S, is ternary, however later in the paper it was said that we need "an extra bit for S". What happened to the ternary value, don't we need 2 bits?
Table 2 results are incorrect. The results of ResNet-18 and 50 should be close to the full-precision networks according to the deepshift paper (even if trained from scratch). This is very important because this could affect the validity of all results that are provided in the paper.
In the paper and in table 2, it is claimed that no multiplication was used. Could you justify how BatchNorm in resnet architectures are implemented so that it does not use multiplication? In general, are you using BatchNorm at all?
Clarity, Quality, Novelty And Reproducibility
The paper lacks novelty as explained in the Summary Of The Paper. Most of the components that are used in the paper has been previously proposed or used in other papers. The quality of the paper needs improvement. see "Strength And Weaknesses" section |
ICLR | Title
Towards Automatic Generation of Advanced Shift Networks
Abstract
Multiplication-free neural networks significantly reduce the time and energy cost on the hardware platform, as the compute-intensive multiplications are replaced with lightweight bit-shift operations. However, existing shift networks are all directly transferred from state-of-the-art convolutional neural networks (CNNs), which lead to non-negligible accuracy drop or even failure of model convergence. To combat this, we propose AutoShiftNet, the first framework tailoring Neural Architecture Search (NAS) to substantially reduce the accuracy gap between bitshift neural networks and their real-valued counterparts. Specifically, we pioneer dragging NAS into a shift-oriented search space and endow it with the robust topology-related search strategy and custom regularization and stabilization. As a result, our AutoShiftNet breaks through the incompatibility of traditional NAS methods for bit-shift neural networks and achieves more desirable performance in terms of accuracy and convergence. Extensive experiments demonstrate that AutoShiftNet generates more advanced model architectures for shift networks, where the accuracy increases by (1.69∼8.07)% on CIFAR10, (5.71∼18.09)% on CIFAR100 and ≥ 4.36% on ImageNet, especially when many conventional CNNs fail to converge on ImageNet with bit-shift weights.
1 INTRODUCTION
In recent years, large-scale commercial applications based on convolutional neural networks (CNNs) have prompted researchers to design more efficient networks, which can be deployed on platforms with limited resource budgets, such as mobile or IoT devices. Early works utilized network quantization (Cheng et al., 2017) to achieve this goal, by replacing high-precision model parameters with smaller bit-width representations. It can reduce the computational cost of model execution, but also suffer from a non-negligible performance degradation, especially on complex datasets (e.g., ImageNet). To address this issue, recent works (Zhou et al., 2017; Elhoushi et al., 2021) turned to using binary bit shifts rather than simple quantized bits to replace floating-point model parameters.
The key insight of these solutions is that multiplying an element by a power of 2 is mathematically equivalent to a bit-shift operation on it, which is computationally much cheaper and hardware-friendly. Based on this, researchers designed different types of bit-shift techniques (Zhou et al., 2017; Elhoushi et al., 2021; Li et al., 2021; 2022), which show promising overhead reduction in model execution. However, all these solutions only focus on designing advanced weight quantization algorithms to reduce the accuracy gap between shift networks and their real-valued counterparts, where the backbone models are all directly transferred from conventional CNNs, e.g., ResNets (He et al., 2016) and VGG (Simonyan & Zisserman, 2014). Given these CNN models are all designed for the continuous real-valued domain, such direct conversion would restrict the potential of bit-shift techniques, causing less optimal network architecture with a non-trivial accuracy drop.
To overcome this limitation, we aim to design advanced shift networks from another perspective, i.e., searching for network architectures that are more compatible with the bit-shift quantization. This is inspired by the Neural Architecture Search (NAS) technique, which can automatically identify the satisfactory network architecture for a given task. The searched models have shown better performance than carefully hand-crafted models (Liu et al., 2018b; Chen et al., 2019). One straightforward way is to directly transfer NAS models searched from real-valued domains to bit-shift
networks. However, similar to the manually-crafted networks, such strategy also leads to sub-optimal results due to the semantic gap between real and bit-shift domains (Sections 3 and 5.4).
For the first time, we present AutoShiftNet, a novel methodology to automatically search for the optimal bit-shift network architectures directly, aiming to reduce the accuracy drop from the state-of-the-art real-valued models. Moreover, the introduction of bit-shift operations can significantly reduce the searching, training and inference cost, which can facilitate the deployment of large models on dedicated hardware. Specifically, AutoShiftNet contains three components: (1) Shift-oriented search space. While existing NAS techniques mainly focus on the real-valued domain, we are the first to construct a new search space composed of bit-shift operations and design the corresponding forward and backward pass. (2) Topology-related search strategy. Since shift networks tend to have faster gradient descent or even vanishing gradient (Elhoushi et al., 2021), they are more vulnerable to the conventional gradient-based NAS techniques, i.e., searched networks can be dominated by skip connections (Liu et al., 2018a). Therefore, we decouple the search of model operations and topology, which can efficiently mitigate this issue (Gu et al., 2021). (3) Search regularization and stabilization. Given the weight sign freezing effect (Li et al., 2021) and unstable training process, we adopt multiple approaches to regularize and stabilize the search procedure, including shift-adaptive L2 regularization, learning rate reset scheme and shift weight re-parameterization.
We clarify that our work is orthogonal to and different from ShiftAddNAS (You et al., 2022), which aims to search for more accurate models from a hybrid search space with four operations (Attention, Convolution, Shift and Add). Although ShiftAddNAS also considers bit-shift operations, it actually still focuses on multiplication operations as they can provide much higher prediction accuracy. The model searched by ShiftAddNAS is still dominated by multiplications while the shift operations only take a very small part (ShiftAddNAS-T1↑ contains 7.1G multiplications and 8.5G additions, but only 1.4G shifts). Such model cannot be regarded as an actual shiftnet, and is difficult to be deployed on resource-constrained mobile devices, as the number of multiply-add operations is normally restricted below 600M for an ImageNet-mobile setting (Dong & Yang, 2019). In contrast, AutoShiftNet totally removes multiplications and only considers efficient bit-shifts and additions. The searched model only contains about 300M additions, so that it is more compatible for the bit-shift domain and also more practical for real-world applications on resource-restricted edge devices.
The networks searched by AutoShiftNet show much better performance than conventional CNNs in the bit-shift domain, especially when many CNNs fail to converge on large datasets (e.g., ImageNet) with bit-shift weights. AutoShiftNet achieves an accuracy improvement of (1.69∼8.07)% on CIFAR10, (5.71∼18.09)% on CIFAR100 and ≥ 4.36% on ImageNet, with more compact parameter sizes and smaller numbers of operation computations. Compared with previous NAS methods, networks from AutoShiftNet are more compatible with the bit-shift domain, which lead to a smaller accuracy drop from the complex real-valued models. More importantly, AutoShiftNet consumes less computing resources and time as it directly searches with the bit-shift weights.
2 PRELIMINARIES
2.1 BIT-SHIFT NETWORK QUANTIZATION
Bit-shift quantization techniques (Zhou et al., 2017; Elhoushi et al., 2021) round the float-point model weights to the powers of 2, so that the intensive multiplications on weights can be achieved with cheaper binary bit shifts. Formally, given a number x and a rounded model weight 2p, their multiplication is mathematically equivalent to shifting p bits of x. Since model weights can be either positive or negative for input feature extraction, while 2p is always positive, a sign flip function flip(w, s) is thus introduced to represent the signs of weight values. This operation is achieved with a ternary sign operator s ∈ {−1, 0,+1}. Finally, we can replace the weight matrix W in the model as: W = flip(2P , S), where P is the shift matrix and S is the sign matrix. Both bit shift and sign flip are computationally cheap, as the former is the fundamental operation in modern processors and the latter just computes 2’s complement of a number. Therefore, such weight replacement can efficiently reduce the computation cost of CNN model execution.
2.2 NEURAL ARCHITECTURE SEARCH
NAS has gained great popularity in recent years, due to its capability of building machine learning pipelines with high efficiency and automation. Early methods used reinforcement learning (Zoph & Le, 2016) and evolutionary algorithms (Real et al., 2019) to search for optimal network architectures for a given task, which normally takes thousands of GPU hours. Recent works tended to use a gradient-based strategy (Liu et al., 2018b) that can reduce the search cost to a few hours. Such methods usually aim at searching for optimal cell structures, since stacking cells as a model is more efficient than searching the whole network architecture. Formally, a cell is represented as a directed acyclic graph (i.e., supernet) with N nodes $\{x_i\}_{i=1}^{N}$, including two inputs and one output, and several intermediate nodes. The j-th intermediate node $x_j$ connects to all previous nodes $x_i$ through the edge (i, j). The operation choice over the edge (i, j) can be relaxed as $\bar{o}^{(i,j)}(x_i) = \sum_{o \in \mathcal{O}} \alpha_o^{(i,j)} o(x_i)$, where $\mathcal{O}$ denotes the search space of candidate operations. $\alpha_o^{(i,j)}$ is the trainable weight for each operation on the edge (i, j), which is normalized with the softmax function. Therefore, the feature map of node $x_j$ can be computed by adding all results from its predecessors $x_i$: $x_j = \sum_{i<j} \bar{o}^{(i,j)}(x_i)$. Let $\mathcal{L}_{train}$ and $\mathcal{L}_{val}$ denote the model loss on the training and validation sets. A bi-level optimization is applied to the operation weight α and network weight w as:

$$\min_{\alpha} \mathcal{L}_{val}(w^*(\alpha), \alpha), \quad \text{s.t. } w^*(\alpha) = \arg\min_{w} \mathcal{L}_{train}(w, \alpha) \tag{1}$$
The final model architecture can be derived from the trained operation weight α by retaining operations with the largest weight and pruning edges with the smaller weight.
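For concreteness, below is a minimal PyTorch sketch of this continuous relaxation on a single edge; the candidate operations and tensor shapes are placeholders chosen for illustration, not the actual search space used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted sum of candidate operations on one edge (i, j)."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # operation weights alpha^{(i,j)}

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Two toy candidates (identity and a 3x3 convolution) on a 16-channel feature map.
edge = MixedOp([nn.Identity(), nn.Conv2d(16, 16, 3, padding=1)])
y = edge(torch.randn(2, 16, 8, 8))
```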
3 OVERVIEW OF AutoShiftNet
The main idea is to automatically generate well-performing bit-shift networks with high efficiency. Challenges arise when we apply existing NAS techniques to searching bit-shift networks:
Design of shift-oriented search space. Given that existing NAS methods mainly focus on the real-valued models, their search spaces are also designed for real domain, which is not applicable to bit-shift models. Specifically, a conventional NAS search space normally consists of multiple manually defined operations, such as dilated convolutions and separable convolutions. To build the shift-oriented search space, we need to transfer these operations from the real domain into the bit-shift domain, in which the forward pass and backward pass need to be carefully designed.
Dominance of skip connections. While most of recent NAS methods adopt the gradient-based search strategy (i.e., DARTS (Liu et al., 2018b)), it has a big drawback: the searched networks are easy to be dominated by skip connections (Chen et al., 2019), as the strategy prefers the fastest way of gradient descent. Unfortunately, searching in the bit-shift domain inherits and amplifies this drawback, which would lead to the ”cell collapsing” of searched architectures. Hence, a new search strategy considering both the model operations and topology should be adopted.
Less robust search procedure. Replacing floating-point weights with bit shifts brings fast computations, but also results in the accuracy drop and difficulty of model training. Specifically, the introduced shift parameters and sign flips should be well regularized to avoid errors in the gradient descent. Besides, since bit-shift operations are extremely sensitive to a large learning rate, the selection and scheduling of the learning rate should also be carefully crafted.
We design a novel NAS technique AutoShiftNet to address the above challenges. Figure 1 shows the overview of our methodology, which consists of three key components:
• Shift-oriented search space. This new search space consists of 8 operations, which are converted from the real domain to bit-shift domain.
• Topology-related search strategy. This new strategy considers the optimal combination of model operations and topology, which can efficiently mitigate the dominance of skip connections.
• Search regularization and stabilisation. Three approaches are proposed to regularize and stabilize the search procedure: applying a shift-adaptive L2 regularization to the shift parameters, resetting the learning rate during the search, and re-parameterizing the shift weights.
4 METHODOLOGY
4.1 SHIFT-ORIENTED SEARCH SPACE
Following previous NAS works (e.g., DARTS (Liu et al., 2018b)), we adopt 8 operations as our operation search spaceO: 3×3 and 5×5 dilated convolutions, 3×3 and 5×5 separable convolutions, 3 × 3 max pooling, 3 × 3 average pooling, identity (skip) and the zero1. To construct a shiftoriented search space, we group and transfer these operations into the bit-shift domain and study the corresponding forward and backward pass computations.
Grouping candidate operations. Since not every candidate operation needs to be transferred into the bit-shift version, e.g., the identity and pooling, we first divide 8 candidate operations (excluding zero) into two groups. The first group Oc contains four convolution operations, which involve dense multiplications. The second group Ot contains the remaining operations, which mainly focus on the model topology, such as skip and pooling. The entire search space is denoted as O = {Oc,Ot}. To construct the shift-oriented search space, we just need to transfer operations in Oc into the bit-shift domain, and keep operations in Ot unchanged. Note that this operation group scheme will also be adopted in the topology-related search strategy (Section 4.2).
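One possible encoding of this grouping is sketched below; the operation names follow the usual DARTS naming and are our assumption, not a quote from the paper's code.

```python
# O_c: convolution operations that are transferred to the bit-shift domain.
O_c = ["sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3", "dil_conv_5x5"]
# O_t: topology-related operations kept unchanged (zero is handled separately).
O_t = ["skip_connect", "max_pool_3x3", "avg_pool_3x3"]
SEARCH_SPACE = {"conv": O_c, "topology": O_t}
```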
Replacement of operation weights. As introduced in Section 2.1, quantization of bit-shift networks can be implemented by replacing the floating-point model weights with two parameters: bit shift P and sign flip S. Hence, the weights w of operations in Oc need to be replaced with the trainable parameters (P, S), which is formulated as below:
$$\bar{P} = \mathrm{round}(P), \quad \bar{S} = \mathrm{sign}(\mathrm{round}(S)), \quad w = \mathrm{flip}(2^{\bar{P}}, \bar{S}) \tag{2}$$
where $\bar{P}$ is the rounded shift matrix and $\bar{S}$ is the rounded sign matrix. Note that the function sign generates a ternary value, and can be represented as:

$$\mathrm{sign}(s) = \begin{cases} -1 & \text{if } s \le -0.5 \\ 0 & \text{if } -0.5 < s < 0.5 \\ +1 & \text{if } s \ge 0.5 \end{cases} \tag{3}$$
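A direct PyTorch transcription of Eqs. (2)-(3) might look as follows; this is a sketch of the quantization only, and the real implementation also needs the straight-through gradients discussed next.

```python
import torch

def ternary_sign(s):
    """Eq. (3): map values to {-1, 0, +1} using +/-0.5 as thresholds."""
    return torch.where(s >= 0.5, torch.ones_like(s),
                       torch.where(s <= -0.5, -torch.ones_like(s), torch.zeros_like(s)))

def quantize_weight(P, S):
    """Eq. (2): round the shift matrix, ternarize the sign matrix, rebuild w = flip(2^P_bar, S_bar)."""
    P_bar = torch.round(P)
    S_bar = ternary_sign(torch.round(S))
    return S_bar * (2.0 ** P_bar)
```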
Designing forward and backward pass. Different from some previous works (Zhou et al., 2017) which just rounded the trained models into the bit-shift domain, our goal is to directly search and train the model in the shift domain. So we need to design and implement the forward and backward pass of shift operations. With the transferred weights $w = \mathrm{flip}(2^{\bar{P}}, \bar{S})$, the forward pass for convolutions in Oc can be represented as $Y = w * X + b = \mathrm{flip}(2^{\bar{P}}, \bar{S}) * X + b$, where (X, Y) denote the operation input and output, and b denotes the bias. The gradients of the backward pass can be formulated as:

$$\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Y}\frac{\partial Y}{\partial X} = \frac{\partial L}{\partial Y} w^{T}, \quad \frac{\partial L}{\partial P} = \frac{\partial L}{\partial Y}\frac{\partial Y}{\partial w}\frac{\partial w}{\partial \bar{P}}\frac{\partial \bar{P}}{\partial P}, \quad \frac{\partial L}{\partial S} = \frac{\partial L}{\partial Y}\frac{\partial Y}{\partial w}\frac{\partial w}{\partial \bar{S}}\frac{\partial \bar{S}}{\partial S}, \quad \frac{\partial L}{\partial b} = \frac{\partial L}{\partial Y} \tag{4}$$
where L denotes the model loss. We use straight-through estimators (Yin et al., 2019) to compute the derivatives of the round and sign functions as $\frac{\partial\, \mathrm{round}(x)}{\partial x} \approx 1$ and $\frac{\partial\, \mathrm{sign}(x)}{\partial x} \approx 1$. For the sign flip function, we have $\frac{\partial\, \mathrm{flip}(x,s)}{\partial x} \approx \mathrm{flip}(x, s)$ and $\frac{\partial\, \mathrm{flip}(x,s)}{\partial s} \approx 1$. With these estimations, we can set $\frac{\partial \bar{P}}{\partial P} \approx 1$ and $\frac{\partial \bar{S}}{\partial S} \approx 1$, and then obtain the following expressions:

$$\frac{\partial w}{\partial \bar{S}} = \frac{\partial\, \mathrm{flip}(2^{\bar{P}}, \bar{S})}{\partial \bar{S}} \approx 1, \qquad \frac{\partial w}{\partial \bar{P}} = \frac{\partial\, \mathrm{flip}(2^{\bar{P}}, \bar{S})}{\partial \bar{P}} = \frac{\partial\, \mathrm{flip}(2^{\bar{P}}, \bar{S})}{\partial 2^{\bar{P}}}\frac{\partial 2^{\bar{P}}}{\partial \bar{P}} \approx \mathrm{flip}(2^{\bar{P}}, \bar{S})\, 2^{\bar{P}} \ln 2 = w\, 2^{\bar{P}} \ln 2 \tag{5}$$

As a result, the gradients of the trainable parameters (P, S) with respect to the model loss L are:

$$\frac{\partial L}{\partial P} \approx \frac{\partial L}{\partial Y}\frac{\partial Y}{\partial w}\, w\, 2^{\bar{P}} \ln 2, \qquad \frac{\partial L}{\partial S} \approx \frac{\partial L}{\partial Y}\frac{\partial Y}{\partial w} \tag{6}$$

Based on the above constructed forward and backward pass of bit-shift operations, we can search and train a NAS model directly in the bit-shift domain.

¹Zero means no connection between two nodes.
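The sketch below implements a shift-parameterized convolution in PyTorch with straight-through estimators; it relies on autograd's chain rule, which is close in spirit to, but not a literal transcription of, the estimators in Eqs. (5)-(6). The class name, initial values and layer shapes are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ste_round(x):
    # Forward uses round(x); backward passes the gradient straight through.
    return x + (torch.round(x) - x).detach()

def ste_ternary_sign(x):
    s = torch.where(x >= 0.5, torch.ones_like(x),
                    torch.where(x <= -0.5, -torch.ones_like(x), torch.zeros_like(x)))
    return x + (s - x).detach()

class ShiftConv2d(nn.Module):
    """Convolution whose weight is w = flip(2^P_bar, S_bar), with trainable P and S."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.P = nn.Parameter(torch.full((out_ch, in_ch, k, k), -4.0))  # shift exponents (<= 0)
        self.S = nn.Parameter(torch.ones(out_ch, in_ch, k, k))          # sign parameters
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        w = ste_ternary_sign(self.S) * (2.0 ** ste_round(self.P))
        return F.conv2d(x, w, self.bias, padding=self.P.shape[-1] // 2)
```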
4.2 TOPOLOGY-RELATED SEARCH STRATEGY
The dominance of skip connections caused by the gradient-based search strategy is a major restriction for applying NAS techniques to quantized networks (Bulat et al., 2020). Besides, ignoring the model topology during a search in some NAS methods also limits the generation of optimal network architectures. Hence, we determine to decouple the operation search and topology search. This search strategy can efficiently suppress the dominance of skip-connections and also improve the performance of searched networks.
Operation search. As introduced in Section 4.1, the 8 candidate operations in the shift-oriented search space can be divided into two groups: Ot contains topology-related operations that can explicitly affect the model topology (e.g., skip), while operations in Oc do not have such impact. Therefore, the operation search space O is split into two subspaces O = {Ot, Oc}, and each operation subspace is relaxed to be continuous independently. Then a bi-level optimization is applied to train the model weight w and operation weight α. With the trained α, we retain the operation with the maximum weight in each operation subspace, which can be formulated as:
$$o_t^{(i,j)} = \arg\max_{o_t \in \mathcal{O}_t} \alpha_{o_t}^{(i,j)}, \qquad o_c^{(i,j)} = \arg\max_{o_c \in \mathcal{O}_c} \alpha_{o_c}^{(i,j)} \tag{7}$$
Such an operation grouping scheme avoids the elimination of potential topology choices during the operation search, which then allows the subsequent topology search to find the optimal topology. Finally, all the retained operations are collected to construct a new operation search space $\mathcal{O}_N = \{o_t^{(i,j)}, o_c^{(i,j)}\}$ on each edge (i, j), which is used for the topology search.
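A small sketch of Eq. (7): after the bi-level optimization, one operation per subspace is retained on each edge (variable names are ours).

```python
import torch

def retain_edge_ops(alpha_t, alpha_c, ops_t, ops_c):
    """Keep the highest-weighted operation of each subspace, forming O_N for this edge."""
    o_t = ops_t[int(torch.argmax(alpha_t))]
    o_c = ops_c[int(torch.argmax(alpha_c))]
    return [o_t, o_c]  # the reduced per-edge search space O_N used by the topology search
```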
Topology search. The previous operation search step aims to determine the best operations on each edge. In this topology search step, we try to search for the optimal combinations of model edges. It can well prevent skips from dominating the searched model topology.
First, a topology search space is constructed. Following previous works, we restrict two input edges for each node in the cell supernet, so the topology search space $E_{x_j}$ for node $x_j$ can be represented as a set of all possible pairwise combinations of its incoming edges: $E_{x_j} = \{\langle (i_1, j), (i_2, j) \rangle \,|\, 0 < i_1 < i_2 < j\}$. The topology search space contains $C_n^2 = \frac{n!}{2!(n-2)!}$ candidates, where n denotes the number of incoming edges for node $x_j$. Similar to the operation search, we also relax the topology search space $E_{x_j}$ to be continuous:
$$\beta_{x_j}^{c} = \frac{\exp(\beta_{x_j}^{\prime c} / T_\beta)}{\sum_{c' \in E_{x_j}} \exp(\beta_{x_j}^{\prime c'} / T_\beta)} \tag{8}$$

where $\beta_{x_j}^{c}$ is the topology weight that denotes the normalized probability of the edge combination $c \in E_{x_j}$. $T_\beta(t) = T_0 \theta^{t}$ is the temperature for architecture annealing, which can efficiently bridge the optimization gap between the supernet and child networks (Xie et al., 2018).
Then, the importance weight γ(i,j) for each edge (i, j) can be computed from those combinations containing this edge, which can be formulated as:
$$\gamma^{(i,j)} = \sum_{c \in E_{x_j},\, (i,j) \in c} \frac{1}{N(c)} \beta_{x_j}^{c} \tag{9}$$
where N(c) is the number of edges in the edge combination c. As a result, the feature map of node $x_j$ can be obtained by summing all the incoming edges weighted by the edge importance weight $\gamma^{(i,j)}$:

$$x_j = \sum_{i<j} \gamma^{(i,j)} \bar{o}^{(i,j)}(x_i) \tag{10}$$

where $\bar{o}^{(i,j)}(x_i)$ denotes the mixed operations on edge (i, j) obtained from the operation search. In the topology search, as the number of candidate operations is largely reduced (i.e., 2 in $\mathcal{O}_N$), we can directly use one-level optimization to update the three weights (w, α, β) in the search.
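The edge-importance computation of Eqs. (8)-(10) can be sketched as follows; the supernet details are omitted and the loop layout is our own simplification.

```python
import itertools
import torch
import torch.nn.functional as F

def edge_importance(beta_raw, combos, num_edges, T):
    """Eq. (8): softmax the combination logits at temperature T.
    Eq. (9): spread each combination's probability uniformly over its edges."""
    beta = F.softmax(beta_raw / T, dim=0)
    gamma = torch.zeros(num_edges)
    for prob, combo in zip(beta, combos):
        for e in combo:
            gamma[e] = gamma[e] + prob / len(combo)
    return gamma

# A node with 3 incoming edges has C(3,2) = 3 candidate input-edge pairs.
combos = list(itertools.combinations(range(3), 2))
gamma = edge_importance(torch.randn(len(combos)), combos, num_edges=3, T=1.0)
# Eq. (10): x_j = sum_i gamma[i] * o_i(x_i), with o_i the mixed operation kept on edge i.
```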
Determining the architecture. After the operation and topology search, we select the edge combination c with the maximal weight in topology weight β to construct the model topology, and then attach to each edge the operation with the maximal weight in the operation weight α.
4.3 SEARCH REGULARIZATION AND STABILISATION
Based on the shift-oriented search space and topology-related search strategy, an efficient bit-shift network architecture can be identified for each specific task automatically. However, the adoption of bit-shift weights makes the architecture search much more unstable and also leads to more difficult model training. The search process usually converges to a sub-optimal solution, sometimes even cannot converge. So we need to regularize and stabilize the optimization of the three trainable weights during search: network weight w, operation weight α and topology weight β.
For the optimization of the network weight w, note that w consists of the bitwise shift P and sign flip S, i.e., w ← {P, S}. We use an adaptive L2 regularization term to regularize the gradient descent of P, defined as $\sum W^2 = \sum (2^P S)^2$ rather than the conventional formulation $\sum (P^2 + S^2)$. Since most weights in a trained model are rarely larger than 1 (i.e., $|2^P| < 1$), the range of P is also empirically set to be smaller than 0. As a negative parameter, a smaller P would instead lead to a larger $P^2$, which gives a reverse activation to the training loss. Hence, the regularization term should be modified to avoid misguiding the direction of the gradient descent. Formally, the regularized loss L′ can be formulated as $L' = L + \frac{\lambda}{2} \sum (2^P S)^2$, where L denotes the original model loss and λ is the fixed weight decay. Our experiments in Section 5.4 show that this adaptive L2 regularization improves the accuracy of searched architectures.
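In code, the adaptive term simply penalizes the reconstructed effective weights instead of the raw exponents (a sketch; P and S are assumed to be torch tensors, and the name of the helper is ours):

```python
def shift_l2(P, S):
    """Adaptive L2 term sum((2^P * S)^2) on the effective weights, not on P and S directly."""
    return ((2.0 ** P) * S).pow(2).sum()

# Regularized loss, with lam the fixed weight decay:
# loss = task_loss + 0.5 * lam * shift_l2(P, S)
```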
To stabilize the optimization of the operation weight α and topology weight β, in addition to using the temperature regularization in Eq.(8), we also carefully implement a learning rate reset scheme. Since bit-shift networks are extremely sensitive to large learning rates, we need to use a much smaller initial learning rate than that in previous NAS techniques to avoid model convergence failure. Besides, while previous works (Gu et al., 2021) adopt the annealed learning rate from the previous operation search step for following topology search, we find that resetting the learning rate to an initial value again at the start of topology search allows to obtain a better network architecture. Figure 2 shows the learning rate curve in the search with the cosine annealing: the learning rate is reset at the 30th epoch, when the topology search starts.
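A minimal sketch of the reset scheme with PyTorch's cosine scheduler follows; the hyperparameter values match the text, while the helper itself is our own illustration.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

def fresh_cosine(optimizer, num_epochs, lr0=0.01):
    """Reset the learning rate to lr0 and start a new cosine annealing cycle."""
    for group in optimizer.param_groups:
        group["lr"] = lr0
    return CosineAnnealingLR(optimizer, T_max=num_epochs)

# opt = torch.optim.RAdam(weights, lr=0.01, weight_decay=3e-4)
# sched = fresh_cosine(opt, 30)   # operation search, epochs 0-29
# sched = fresh_cosine(opt, 40)   # topology search starts at epoch 30 with a reset schedule
```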
5 EVALUATION
We implement AutoShiftNet with Pytorch. Following previous works (Elhoushi et al., 2021; Zhou et al., 2017), we emulate the precision of an actual bit-shift hardware implementation by rounding the operation input and bias to the 32-bit fixed-point format precision (16-bit for the integer part and 16-bit for the fraction part). The shift parameter P is constrained in [-15, 0], i.e., the absolute value of the model weight is within [2−15, 1], which only needs 4 bits to represent. The model weight also needs an extra bit to denote its sign S.
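With P restricted to [-15, 0], one way to store such a weight in 5 bits is to pack 4 bits of shift magnitude and 1 sign bit. The sketch below is our own illustration and, for simplicity, ignores the zero value of the ternary sign.

```python
def encode_weight(p, s):
    """Pack a shift p in [-15, 0] and a sign s in {-1, +1} into 5 bits."""
    assert -15 <= p <= 0 and s in (-1, 1)
    return ((1 if s < 0 else 0) << 4) | (-p)

def decode_weight(code):
    s = -1 if (code >> 4) & 1 else 1
    return s * (2.0 ** -(code & 0b1111))

assert decode_weight(encode_weight(-3, -1)) == -0.125
```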
We run evaluations on CIFAR10, CIFAR100 and ImageNet datasets. We comprehensively compare AutoShiftNet with a variety of state-of-the-art CNN models (e.g., ResNet, VGG, MobileNet, ShuffleNet, GoogleNet, SqueezeNet) and NAS models (e.g., NASNet, AmoebaNet, DARTS, GDAS, DOTS). For fair comparisons, these baseline models are trained in the bit-shift domain, unless otherwise specified.
5.1 EVALUATION ON CIFAR
Search settings. The entire search process on CIFAR 10/100 consists of two steps: operation search for 30 epochs and then topology search for 40 epochs. The network skeleton consists of 8 cells (6 normal cells and 2 reduction cells) with the initial channel size of 16. The learning rate is scheduled from 0.01 following the reset scheme in Section 4.3. The search process takes about 5.5 hours on one GeForce RTX 3090 GPU. However, since we emulate the hardware bit-shift operations with software implementation, the search time actually can be significantly shortened on the dedicated hardware platforms. We will discuss more about the search efficiency in Section 5.5. The best cells searched from CIFAR are shown in Appendix C.
Evaluation settings. The evaluation network is composed of 20 cells, including 18 normal cells and 2 reduction cells. We set the initial channel size as 36 and optimize the network via the RAdam optimizer (Liu et al., 2019) with an initial learning rate of 0.01 (cosine annealing to 0) and weight decay of 3e-4. Following the setting in DeepShift, the network is trained from scratch with bit-shift weights for 200 epochs. The batch size is set as 128. Cutout and drop-path with a rate of 0.2 are used to prevent overfitting. The training accuracy curves can be found in Appendix D.
Results analysis. Table 1 shows the evaluation results on the CIFAR 10/100 datasets. The bit-shift networks searched by AutoShiftNet achieve 95.58% and 76.35% accuracy on CIFAR10 and CIFAR100, respectively. Compared to conventional manually designed CNNs, AutoShiftNet models lead to a significant performance improvement in the bit-shift domain, where the prediction accuracy increases by (1.69∼8.07)% on CIFAR10 and (5.71∼18.09)% on CIFAR100. Moreover, the parameter size of the searched networks is also much smaller than that of most conventional CNNs. Hence, in contrast to directly transferring those CNNs into bit-shift counterparts, AutoShiftNet is a more efficient approach to generating high-quality bit-shift networks, with improved accuracy, reduced parameter size and an automatic design process. We also compare AutoShiftNet with state-of-the-art NAS techniques searched in the real domain, and the results show that our method can find architectures more compatible with the bit-shift domain. We will discuss more details in Section 5.3.
5.2 EVALUATION ON IMAGENET
Evaluation settings. Following previous works (Liu et al., 2018b; Dong & Yang, 2019), we construct the network for ImageNet with the best cells searched from the CIFAR dataset. The evaluation follows the ImageNet-mobile setting, in which the input size is 224×224. The network consists of 14 cells (12 normal cells and 2 reduction cells) with the initial channel size of 46. We train the network in the bit-shift domain for 90 epochs with a batch size of 1024. The RAdam optimizer with an initial learning rate of 0.01 (warming up in the first 5 epochs and cosine annealing to 0) is used. The training accuracy curves can be found in Appendix D.
Results analysis. Table 2 shows the evaluation results on the ImageNet dataset. It can be found that although some conventional CNNs (e.g., ResNet) still perform well when converted to the bit-shift domain, there are many more state-of-the-art CNNs giving much lower prediction accuracy or even failing to converge, including VGG16, MobileNet-v2 and ShuffleNet-v2, whose final top-1 accuracy drops to 0.09%, 1.18% and 9.27%, respectively. In contrast, AutoShiftNet can converge robustly and achieve 67.17% top-1 accuracy, which is (4.36∼67.07)% higher than conventional CNNs except ResNet50. Note that the high accuracy of ResNet50 is obtained at the price of a much larger parameter size (5×) and more operations (7×). Hence, compared to conventional CNNs, bit-shift networks searched by AutoShiftNet perform better with fewer parameters and operations. The comparison with previous NAS techniques also shows that AutoShiftNet can generate more compatible architectures for bit-shift networks. Given that all multiplications in the networks are replaced with bit shifts, the number of multiply operations is 0, which greatly reduces the resource cost and speeds up model inference.

Table 2: Evaluation results on ImageNet. *: The results are the highest accuracy reached during training, while the networks fail to converge.

Architecture         Top-1 (%)  Top-5 (%)  Params (M)  Multi (M)  Add (M)
ResNet18             62.25      83.79      11.7        0          987
ResNet50             69.04      88.61      25.8        0          2053
VGG16*               0.10       0.98       138.5       0          8241
GoogleNet            62.81      84.81      6.6         0          752
MobileNet-v2*        40.03      65.13      4.7         0          206
ShuffleNet-v2*       37.32      62.26      7.4         0          306
SqueezeNet1_0        29.08      51.96      3.8         0          412
NASNet               66.24      86.24      5.6         0          317
DARTS-v2             64.98      85.18      4.7         0          287
GDAS                 65.87      85.95      5.3         0          291
DOTS                 66.36      86.23      5.2         0          302
AutoShiftNet (Ours)  67.17      87.38      5.1         0          298

Table 3: Accuracy of various architectures on CIFAR10 (C10) and CIFAR100 (C100) in the real (R) and bit-shift (BS) domains.

Architecture   Domain  C10 Acc. (%)  C10 Diff.  C100 Acc. (%)  C100 Diff.
ResNet18       R       94.45         -          72.53          -
               BS      93.20         -1.25      69.11          -3.42
ResNet50       R       95.12         -          74.19          -
               BS      93.89         -1.23      70.65          -3.54
DARTS(v2)      R       96.48         -          78.78          -
               BS      94.80         -1.68      75.17          -3.61
DARTS-         R       95.61         -          76.02          -
               BS      93.87         -1.74      70.85          -5.17
DOTS           R       96.55         -          78.87          -
               BS      95.13         -1.42      75.05          -3.82
AutoShiftNet   R       96.19         -          78.26          -
               BS      95.58         -0.61      76.35          -1.91
5.3 REAL-VALUED AND BIT-SHIFT NETWORK COMPARISONS
We compare the accuracy of the same network trained in the real and bit-shift domains, aiming to investigate the accuracy drop of conventional CNNs and NAS models caused by the bit-shift quantization. Table 3 shows the results of some representative networks on the CIFAR datasets. Comparison on ImageNet can be found in Appendix E. We can observe that AutoShiftNet not only achieves the highest accuracy of bit-shift networks, but also leads to the smallest accuracy drop (-0.61% and -1.91%) when the network is quantized from the real to bit-shift domains. In comparison, conventional CNNs have lower accuracy in the real domain, and the accuracy drops more significantly during the bit-shift quantization.
We further compare AutoShiftNet with previous NAS techniques. From Table 3, AutoShiftNet is able to obtain network architectures with better performance in the bit-shift domain, even their accuracy in the real domain is slightly lower. It indicates that transferring existing NAS models directly to the corresponding bit-shift networks normally just achieves sub-optimal solutions. The networks searched by AutoShiftNet are more compatible to the bit-shift quantization.
5.4 ABLATION STUDY
Impact of the shift-oriented search space. The superiority of AutoShiftNet in the bit-shift domain actually has indicated the effectiveness of the shift-oriented search space, which avoids converging to sub-optimal solutions for searching bit-shift network architectures. To further validate the importance of this new search space, we replace the search space with the classical real-valued one in AutoShiftNet, and then check the performance of the searched results. Four experiments are run individually with random seeds, where the searched architectures achieve average accuracy of 94.97% on CIFAR10 and 75.03% on CIFAR100. It drops 0.63% and 1.32% from that with the shift-oriented search space. Besides, as a by-product, the shift-oriented search space significantly reduces the resource cost in the search process, as it replaces dense multiplications with much cheaper bit shifts. Hence, AutoShiftNet can generate better bit-shift networks automatically with much less resource budget.
Impact of the topology-related search strategy. We take DARTS as the baseline strategy to derive cell structures from the shift-oriented search space. The result is shown in Figure 3a. It can be seen that the searched cell is dominated by the skip connections and only achieves 69.58% accuracy on
CIFAR100. This is because the drawback of the traditional gradient-based search strategy is amplified in the bit-shift domain. By integrating our topology-related search strategy, this drawback can be effectively mitigated and the searched result is shown in Figure 3b. Since the edge connections are further inspected, the topology-related search strategy can generate more stable architectures and achieve 76.21% accuracy, which is 6.63% improvement over DARTS.
Impact of regularization and stabilization. To evaluate the effectiveness of our modified L2 regularization (L2R) and learning rate reset (LRR) schemes, we compare the performance of networks searched with various scheme combinations (Table 4). We find that while both schemes increase the accuracy of the searched architecture, LRR contributes more than L2R. Figure 4 shows the accuracy curves of the search process on CIFAR10 with or without
LRR. It shows that LRR scheme significantly improves the model accuracy from 74.58% to 84.68%, which makes it more possible to search for better bit-shift networks. Note that at the start of topology search (the 30th epoch), the model gets pruned and retrained, so the accuracy has a sharp drop.
5.5 EFFICIENCY ANALYSIS
Given that modern computer architectures use the binary format to store and calculate data, bitwise operations like bit shift and addition are the atomic units for performing complex computations, including multiplication. According to (Agner Fog), a floating-point multiplication takes at least 5× more clock cycles than a bit shift. Besides, compared to the hardware implementation of a bit shift on a circuit, a multiplier takes at least 9.7× the average power, 1.45× the area and 4.32× the transistors (Asati, 2009). Hence, by replacing floating-point weights with bit shift and sign flip operations, the efficiency of architecture search can be significantly improved over previous NAS techniques that search in the real domain. While our software emulation of AutoShiftNet takes only 5.5 hours, where a bit shift is simulated by multiplying by a power of 2, the actual search cost on dedicated hardware platforms (e.g., FPGA accelerators) would be largely decreased. We deem that accelerating the NAS process with bit shifts on FPGA boards is a promising research direction. Besides, since the searched architectures are trained as bit-shift networks, it also reduces the resource cost and time of model training and inference. AutoShiftNet also greatly compresses the storage size of the searched networks, as it represents model weights with fewer bits (i.e., 5 bits). This promotes the application of NAS models on edge devices, where memory storage and energy consumption are the main constraints.
6 CONCLUSION AND FUTURE WORK
In this paper, we propose to automatically generate advanced bit-shift networks with a dedicated NAS method AutoShiftNet. We overcome the challenges of applying existing NAS techniques in the bit-shift domain with three innovations: shift-oriented search space, topology-related search strategy and search regularization and stabilization. Experimental results show that AutoShiftNet can search for architectures with higher compatibility for bit-shift operations, and better performance than state-of-the-art CNNs and NAS models.
While replacing model multiplications with bit shifts can efficiently reduce the running cost, it is essentially a coarse-grained representation of model weights, which naturally results in a non-trivial drop in prediction accuracy. To address this, we can further introduce additions into the search space of AutoShiftNet, which are also efficient substitutes for multiplications (Chen et al., 2020) and, more importantly, allow finer-grained weight manipulation (You et al., 2020). Since current CUDA kernels lack optimization for intensive additions, we leave this as future work.
A ARCHITECTURE SEARCH DETAILS
For the operation search, the official CIFAR training dataset is divided into two halves: a training set DT and a validation set DV , which are used to optimize the network weights w and the operation weights α, respectively. The topology search directly uses the whole official training set to optimize the topology weight β with one-level optimization, where the initial temperature T0 is set to 10 and decays to 0.02. We adopt the Rectified Adam (RAdam) optimizer with an initial learning rate of 0.01 and weight decay of 3e-4 to optimize the model weight w, and the Adam optimizer with an initial learning rate of 3e-4 and weight decay of 1e-3 to optimize the operation weight α and topology weight β. The learning rate is scheduled with a cosine scheduler following our proposed learning rate reset scheme. The search process consists of 70 epochs with a batch size of 128, including 30 epochs for operation search and 40 epochs for topology search.
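The optimizer setup described above can be written down as follows; the hyperparameters are taken from the text, while the function and variable names are ours.

```python
import torch

def build_optimizers(network_weights, arch_weights):
    w_opt = torch.optim.RAdam(network_weights, lr=0.01, weight_decay=3e-4)  # model weights w
    a_opt = torch.optim.Adam(arch_weights, lr=3e-4, weight_decay=1e-3)      # alpha and beta
    return w_opt, a_opt
```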
B ARCHITECTURE EVALUATION DETAILS
Training on CIFAR. We train the evaluation network for 200 epochs with a batch size of 128. The network is optimized by the RAdam optimizer with an initial learning rate of 0.01 and weight decay of 3e-4. The learning rate is scheduled by a cosine annealing scheduler to 0. Cutout and drop-path with a rate of 0.2 are used to prevent overfitting.
Training on ImageNet. The network is trained for 90 epochs with a batch size of 1024. The RAdam optimizer is adopted, whose initial learning rate is set to 0.01 and weight decay to 3e-4. The learning rate is cosine annealed to 0. Label smoothing and an auxiliary loss tower are used to enhance model training.
C BEST SEARCHED CELL STRUCTURES
Tables 5 and 6 show the best searched architectures for CIFAR10 and CIFAR100. The evaluation on ImageNet adopts the cells searched from CIFAR10 (Table 5).
D TRAINING RESULTS
Figure 5 shows the accuracy traces of training on CIFAR10 and CIFAR100. Figure 6 shows the accuracy traces of training on ImageNet, where (a) uses a batch size of 1024 and (b) uses 256. It can be seen that training with a batch size of 256 converges earlier and is also more stable, and the final top-1 accuracy is slightly higher (68.67% vs. 67.17%).
E COMPARISON WITH REAL-VALUED COUNTERPARTS ON IMAGENET
Due to the limitation of resources and time, we select one model each from conventional CNNs (ResNet18) and previous NAS methods (DOTS) to compare their accuracy drop from the real-valued counterparts on ImageNet against our proposed AutoShiftNet. Table 7 shows the results. It can be found that the architecture searched by AutoShiftNet achieves the highest accuracy as a bit-shift network, and also has the lowest accuracy drop from its counterpart trained in the real domain. Compared to other conventional CNNs and even most state-of-the-art NAS models, ResNets remain relatively robust even when trained with bit-shift weights. However, they are still worse than our proposed AutoShiftNet, and, more importantly, ResNets are much heavier than NAS-searched models. | 1. What is the focus of the paper regarding NAS techniques, and what are the proposed strategies?
2. What are the strengths and weaknesses of the paper, particularly in terms of novelty and energy efficiency?
3. Do you have concerns regarding the comparison between the proposed method and pure fixed-point based networks?
4. How does the reviewer assess the quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the evaluation and ablation studies presented in the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a new NAS technique that is essentially a marriage between NAS and bit-shift networks. The authors conducted a number of experiments in the computer vision domain and claimed that the proposed method generally gives better architectures for bit-shift networks.
Strengths And Weaknesses
Strengths
This paper is fairly easy to understand.
The author provided a set of ablation studies on the effectiveness of the proposed strategies, this includes
L2 regularization
Learning rate scheme
Separation of search spaces
Weakness
My major concern is that the proposed bit-shift networks do not sound attractive compared to pure fixed-point based networks if one thinks carefully.
Although the multiplication seems to be cheap, the post-multiplication result would have to be accumulated. The accumulation might also need to happen at a very large bit width. It is not clear to me, without a detailed discussion on this, how the proposed bit-shift network is energy efficient.
You cannot simply say it has fewer multiplications. You will need to have claims like 'it reduces X number of N-bit multiplications, but now has Y number of M-bit additions instead of Z number of P-bit additions.' What are these multiplications and additions? Are you considering all of them to be floating-point? Then how would you compare to a pure fixed-point implementation? E.g., both additions and multiplications are fixed-point.
Another concern is on the novelty of this paper. Although this paper claims to make contributions such as ‘topology-related search strategy’ and ‘custom regularization and stabilization’, most of them are fairly standard techniques.
The topology and operation search is nothing more than breaking down the original search space. There are a great number of NAS literature looking at macro and micro search spaces [1, 2], and these are almost identical to what was mentioned in this paper.
The regularization re-formulation looks fairly standard to me.
Annealing learning rate is also an existing technique.
The evaluation is limited to vision benchmarks and CNNs, and the ImageNet results are fairly low compared to SOTA unquantised NAS such as iDARTs.
[1] AGNAS: Attention-Guided Micro- and Macro-Architecture Search
[2] Probabilistic dual network architecture search on graphs
Clarity, Quality, Novelty And Reproducibility
Quality and Novelty
As I have mentioned in the weakness section, I do not think there is enough novelty in this paper. In the meantime, I do not think the core message in this paper makes too much sense without a solid comparison to a pure fixed-point quantized network. The major claim in this paper is that shift networks are better because they can replace the expensive multiplications, so we should tolerate a certain accuracy loss and use them for better run-time efficiency. If this is the core message, then the author should compare at least to the standard quantisation (fixed-point) to persuade me that shift quantization is more attractive.
The authors do have an ‘efficiency analysis’ section, but they mainly focused on comparing against float-point multiplication which is surely a strongman baseline. I am also confused by the following statement made by the authors:
‘We deem that accelerating the NAS process with bit shift on the FPGA board is a promising research direction.’
I do not really understand this claim. FPGA synthesis is notoriously time-consuming, how is this a more attractive approach? Surely you would have to re-configure your network to the FPGA device at every evaluation step?
Reproducibility and Clarity
With the ablation studies and clarifications in the Appendix, I think this paper is reproducible. Most of the descriptions of the method is clear. However, I do think the evaluation on the efficiency side is very theoretical and only considers an unfair comparison to floating-point. Counting only the number of multiplications and additions is not really giving any hints about the actual run-time cost. |
ICLR | Title
The Variational Walkback Algorithm
Abstract
A recognized obstacle to training undirected graphical models with latent variables such as Boltzmann machines is that the maximum likelihood training procedure requires sampling from Monte-Carlo Markov chains which may not mix well, in the inner loop of training, for each example. We first propose the idea that it is sufficient to locally carve the energy function everywhere so that its gradient points in the “right” direction (i.e., towards generating the data). Following on previous work on contrastive divergence, denoising autoencoders, generative stochastic networks and unsupervised learning using non-equilibrium dynamics, we propose a variational bound on the marginal log-likelihood of the data which corresponds to a new learning procedure that first walks away from data points by following the model transition operator and then trains that operator to walk backwards for each of these steps, back towards the training example. The tightness of the variational bound relies on gradually increasing temperature as we walk away from the data, at each step providing a gradient on the parameters to maximize the probability that the transition operator returns to its previous state. Interestingly, this algorithm admits a variant where there is no explicit energy function, i.e., the parameters are used to directly define the transition operator. This also eliminates the explicit need for symmetric weights which previous Boltzmann machine or Hopfield net models require, and which makes these models less biologically plausible.
1 INTRODUCTION
Although earlier research focused on generating data through Monte Carlo Markov chains (MCMCs), e.g. with various Boltzmann machines (Salakhutdinov & Hinton, 2009), most of the recent effort in designing deep generative models is based on single-step generation, e.g., with variational auto-encoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) and generative adversarial networks (GANs) (Goodfellow et al., 2014). However, generating a sample by going through a series of stochastic transformations that gradually improve the generated sample (or its latent representation) to make it more plausible could hold some advantages. A generative process can be seen as a mapping from simple noise variates (e.g., uniform, Gaussian) to samples from a very complicated distribution (maybe concentrated near a low-dimensional manifold) approximating the one which we are trying to learn from. If the data distribution is complex (e.g., the corresponding manifold is highly convoluted and non-linear), the generative process may involve a highly non-linear transformation which could be difficult to learn and optimize. Such highly non-linear transformations are probably best represented (and learned) by composing a large number of slightly non-linear transformations, either with a fixed-depth deep network, or with a variable depth recurrent computation, which is what the repeated application of a transition operator corresponds to.
1.1 MOTIVATIONS
The main motivations for this paper are the following.
• The main difference between feedforward generation and recurrent generation is twofold: (1) in the recurrent case, the same parameters are used for each step of the transition
∗[email protected] †[email protected] ‡[email protected] §CIFAR Senior Fellow
operator, and (2) by providing an interpretation of each of these steps as the application of a transition operator, we can design training procedures which do not require backpropagating through all the steps of the unfolded computation (from the raw noise samples to the generated output). This is a potential that clearly deserves to be explored further and motivates the learning framework introduced here.
• Another motivation for the Variational Walkback is the idea that we only need to carve the energy function in the right direction at each point in the space of the random variables of interest, which may sideskip the need to actually sample from the stationary distribution of a Markov chain in order to obtain the gradients of the training objective. The intuition is that if the model’s transition operator wants to move away from the data and into an area without data, this is a clue that the energy gradient is pointing in the wrong direction at that place. Consider a chain of samples following the model’s transition operator (or variants of it at different temperatures), starting at a data point. If the chain moves us away from data points, then we can use the previous state in the chain as a target for the operator when that operator is applied to the next next state, i.e., we want to teach the operator to walk back towards the data. This intuition was already exploited by Bengio et al. (2013c) but without a firm mathematical grounding. In Variational Walkback this is rigorously justified by a variational bound.
• Yet another motivation for the particular approach presented here is that it innovates in the rarely explored direction of parametrizing the generative model directly via a transition operator, rather than via an explicit probability function or energy function. This idea has already been discussed in the context of Generative Stochastic Networks (GSNs) (Bengio et al., 2013b), a generalization of denoising auto-encoders (DAEs) (Vincent et al., 2008) which interprets the auto-encoder as estimating the gradient of an energy function (Alain & Bengio, 2014) or as a transition operator (Bengio et al., 2013c). An advantage of being able to parametrize the generator directly is seen with GANs and DAEs: we directly parametrize and learn the function which will be used to perform the task of interest (e.g., generating answers to some questions). Instead, the traditional approach is to parametrize a probability function or energy function (e.g., with a Boltzmann machine) and then use another procedure (the MCMC method of your choice) to sample from it and do inference. Another important reason for exploring algorithms for directly learning a transition operator is that they put fewer constraints on the form of the transition operator, compared with a transition operator derived from an energy function. More specifically, neural net implementations of transition operators derived from an MCMC typically require the presence of symmetric weights (due to the symmetry of the second derivative of the energy with respect to a pair of units in the neural network), as discussed by Bengio et al. (2015). When we consider a biologically plausible implementation of these learning algorithms, the weight symmetry constraint (Wij = Wji) is not reasonable as a hard constraint. Instead, if the transition operator (rather than the energy function) is the object being parametrized and learned, then there is no such hard constraint.
1.2 GENERAL THEORY
We introduce a novel variational bound which is an alternative to and improves upon the traditional reconstruction error as a training objective for DAEs and GSNs. Similar variational bounds have been used for VAEs as well as for the non-equilibrium thermodynamics generative models (Sohl-Dickstein et al., 2015). A distribution P over a chain of samples is defined, which corresponds to iteratively applying transition operators with shared parameters, starting from a pure noise initial state. We would like this process to produce training examples. An inverting flow Q is defined starting from a training example (the “walk-away” trajectory), and following the transition operator of the model, i.e., estimating the posterior distribution of the generative chain produced by P, given that it lands at a training example. If the model does not match the data distribution, the chain Q will tend to walk away from the training samples, and we want to inhibit that by training P to “walk back”. Instead of using a completely different parametrization for the variational approximation of the posterior (the Q distribution), like in VAEs and non-equilibrium dynamics, we propose to exploit the decomposition of P as a series of stochastic transformations in order to parametrize Q with the same parameters as P, with the step-wise estimated posterior matching the correct one (from P) for all but the last step of the walk-away trajectory. To make the approximation in the
last step of the chain of walk-away steps better (and thus the variational bound tighter), we introduce the idea of gradually increasing the temperature at each step of the walk-away Q chain of transitions (or gradually reducing the temperature at each step of the corresponding walkback trajectory under P). This also has the advantage that the training procedure will more easily converge to and eliminate spurious modes (those modes of the model where there is no nearby training data). This is because the walk-away Q chain will be making large steps towards the dominant and most attractive modes when the temperature becomes large enough. Unless those modes are near data points, the walkback algorithm will thus “seek and destroy” these spurious modes.
We present a series of experimental results on several datasets illustrating the soundness of the proposed approach on the MNIST, CIFAR-10 and CelebA datasets.
2 MIXING-FREE TRAINING FRAMEWORK BASED ON THE WALKBACK IDEA
2.1 MAXIMUM LIKELIHOOD TRAINING OF UNDIRECTED GRAPHICAL MODELS
Let v denote the vector of visible units and h denote the vector of hidden random variables, with the full state of the model being s = (v,h). Let pθ denote the model distribution, with joint energy function Eθ and parameter vector θ:
p_\theta(s) := \frac{e^{-E_\theta(s)}}{Z_\theta},   (1)

where Z_\theta is the partition function

Z_\theta := \int e^{-E_\theta(s)} \, ds.   (2)
Let pD be the training distribution, from which a sample D is typically drawn to obtain the training set. The maximum likelihood parameter gradient is
E_{v \sim p_D}\left[ -\frac{\partial \log p_\theta(v)}{\partial \theta} \right] = E_{v \sim p_D,\, h \sim p_\theta(h|v)}\left[ \frac{\partial E_\theta(v, h)}{\partial \theta} \right] - E_{s \sim p_\theta(s)}\left[ \frac{\partial E_\theta(s)}{\partial \theta} \right]   (3)
which is zero when training has converged, with expected energy gradients in the positive phase (under pD(v)pθ(h|v)) matching those under the negative phase (under pθ(s)). Note that in the (common) case of a log-linear model, the energy gradient (with respect to parameters) corresponds to the sufficient statistics of the model. Training thus consists in matching the shape of two distributions, as captured by the sufficient statistics: the positive phase distribution (influenced by the data, via the visible) and the negative phase distribution (where the model is free-running and generating configurations by itself).
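To make Eq. (3) concrete, here is a small NumPy illustration (our own toy example, not part of the proposed method) for a tiny binary RBM-style energy E(v, h) = −vᵀWh, where the state space is small enough to enumerate, so both the positive and negative phase expectations can be computed exactly without any MCMC:

```python
import itertools
import numpy as np

rng = np.random.RandomState(0)
nv, nh = 4, 3                                  # toy sizes: 4 visible, 3 hidden binary units
W = 0.1 * rng.randn(nv, nh)                    # energy E(v, h) = -v^T W h (biases omitted)

def energy(v, h):
    return -v @ W @ h

# Enumerate all binary configurations (only feasible for a toy model).
vs = np.array(list(itertools.product([0, 1], repeat=nv)), dtype=float)
hs = np.array(list(itertools.product([0, 1], repeat=nh)), dtype=float)

# Joint model probabilities p(v, h) = exp(-E(v, h)) / Z, computed by enumeration.
p_joint = np.array([[np.exp(-energy(v, h)) for h in hs] for v in vs])
p_joint /= p_joint.sum()

# A single "training example" stands in for p_D.
v_data = np.array([1., 0., 1., 1.])
p_h_given_v = np.array([np.exp(-energy(v_data, h)) for h in hs])
p_h_given_v /= p_h_given_v.sum()

# dE/dW for a configuration (v, h) is -v h^T.
pos = sum(p * -np.outer(v_data, h) for p, h in zip(p_h_given_v, hs))   # positive phase
neg = sum(p_joint[i, j] * -np.outer(v, h)                              # negative phase
          for i, v in enumerate(vs) for j, h in enumerate(hs))

# Eq. (3): E_v[-d log p / dW] = pos - neg, so the log-likelihood gradient is neg - pos,
# which vanishes when the two phases match.
grad_W = neg - pos
```

For realistically sized models this enumeration is of course intractable, which is exactly the motivation for the mixing-free framework of the next subsection.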
2.2 MIXING-FREE TRAINING FRAMEWORK FOR UNDIRECTED GRAPHICAL MODELS
The basic idea of the proposed mixing-free training framework for undirected graphical models is the following. Instead of trying to match the whole positive phase and negative phase distributions (each of which requires a difficult sampling operation, generally with an MCMC that may take a very long time to mix between well separated modes), we propose to only match the shape of the energy function locally, around well-chosen points s_t. Another way to think about this is that instead of trying to directly maximize the likelihood of pθ, which requires expensive inference (ideally an MCMC) in the inner loop of training (for each example v ∼ pD), we would like to learn a transition operator pT(st+1|st) such that following it at temperature T = 1 would gradually move the state st towards the data generating distribution.
For this purpose, we propose to use a walkback strategy similar to the one introduced by Bengio et al. (2013c), illustrated in Algorithm 1. The idea is to start from a configuration of s which is compatible with the observed data x, let the state evolve according to our transition operator, and then punish it for these moves, making it more likely to make backwards transitions on this trajectory. If learning was completed, the only moves that would remain are those between highly probable configurations under the data generating distribution. The other ones would be “punished”,
like a child walking away from its designated task and forced to walk back (towards the data)1. Following the model’s inclination in order to generate this random trajectory is more efficient than simply adding noise (like in the denoising auto-encoder (Vincent et al., 2008) or the non-equilibrium dynamics (Sohl-Dickstein et al., 2015) algorithms) because it makes the learning procedure focus its computation on state configurations corresponding to spurious modes to be eliminated. To make sure these spurious modes are approached efficiently, the proposed algorithm also includes the idea of gradually increasing temperature (i.e., the amount of noise) along this walk-away trajectory. At high temperature, the transition operator mixes very easily and quickly reaches the areas corresponding to large spurious modes.
Interestingly, all this comes out naturally from the variational bound presented below, rather than as something imposed in addition to the training objective.
Algorithm 1 VariationalWalkback(θ)
Train a generative model associated with a transition operator p_T(s|s′) at temperature T (temperature 1 for sampling from the actual model). This transition operator injects noise of variance Tσ² at each step, where σ² is the noise level at temperature 1.
Require: Transition operator p_T(s|s′) from which one can both sample and compute the gradient of log p_T(s|s′) with respect to parameters θ, given s and s′.
Require: Precomputed σ²_data, the overall variance (or squared diameter) of the data.
repeat
  T_max ← σ²_data / σ²
  K ← log₂ T_max
  Sample x ∼ data (or equivalently sample a minibatch to parallelize computation and process each element of the minibatch independently)
  Let s₀ = x, set the initial temperature T = 1, and initialize L = 0
  for t = 1 to K do
    Sample s_t ∼ p_T(s|s_{t−1})
    Increment L ← L + log p_T(s_{t−1}|s_t)
    Update parameters with the log-likelihood gradient ∂ log p_T(s_{t−1}|s_t)/∂θ
    Increase the temperature with T ← 2T
  end for
  Increment L ← L + log p*(s_K)
until convergence (monitoring L on a validation set and doing early stopping)
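To make the control flow of Algorithm 1 concrete, here is a minimal, runnable NumPy sketch of one training pass. The conservative Gaussian operator with tanh standing in for the learned network F_µ, the fixed noise level σ₁, and all function names are our own illustrative assumptions; the parameter update is only indicated by a comment, since in practice it would be taken with an autodiff framework on ∂ log p_T(s_{t−1}|s_t)/∂θ.

```python
import numpy as np

rng = np.random.RandomState(0)

def gaussian_log_prob(x, mu, sigma):
    # Log-density of an isotropic Gaussian, summed over dimensions.
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2))

def operator_params(s, T, alpha=0.5, sigma1=0.1):
    # Conservative Gaussian operator: the mean mixes the input with a (placeholder)
    # network output, and the noise has variance T * sigma1^2 at temperature T.
    mu = (1 - alpha) * s + alpha * np.tanh(s)
    return mu, np.sqrt(T) * sigma1

def walkback_pass(x, sigma2_data=1.0, sigma1=0.1):
    T_max = sigma2_data / sigma1 ** 2
    K = int(np.ceil(np.log2(T_max)))
    s, T, L = x, 1.0, 0.0
    for _ in range(K):
        mu, sigma = operator_params(s, T)
        s_next = mu + sigma * rng.randn(*s.shape)            # walk-away: s_t ~ p_T(s | s_{t-1})
        mu_back, sig_back = operator_params(s_next, T)
        L += gaussian_log_prob(s, mu_back, sig_back)         # log p_T(s_{t-1} | s_t)
        # <-- here one would update theta with the gradient of the term just added
        s, T = s_next, 2.0 * T                               # increase temperature
    L += gaussian_log_prob(s, np.zeros_like(s), np.sqrt(sigma2_data))   # log p*(s_K)
    return L

x = rng.randn(784)            # a stand-in "training example", assumed roughly unit variance
print(walkback_pass(x))
```

Here L is the single-sample estimate of the variational bound that Algorithm 1 accumulates and monitors for early stopping.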
3 VARIATIONAL LOWER BOUND ON THE LOG-LIKELIHOOD
Let us first consider a way in which our model could approximately generate samples according to our model and the associated transition operator pT (s|s′). That process would start by sampling a state sK inside a volume that contains all the data, e.g., with a broad Gaussian p∗(sK) whose variances are set according to the training data. Then we would sample sK−1 from pTmax(s|s′ = sK), where Tmax is a high enough temperature so that the noise dominates the signal and is strong enough to move the state across the whole domain of the data on the visible portion of the state. If σ2data is the maximum variance of the data (corresponding to the visible dimensions of the state) and σ2 is the amount noise injected by the transition operator on the visible units at temperature 1, then we could pick
T_max = σ²_data / σ²   (4)
to achieve that goal. From that point on we are going to continue sampling the “previous” state st according to pT (s|s′ = st+1) while gradually cooling the temperature, e.g. by dividing it by 2 after each step. In that case we would need
K = log₂ T_max   (5)
1 This analogy with a child was first used in talks by Geoff Hinton when discussing contrastive divergence (personal communication).
steps to reach a temperature of 1. Finally, we would look at the visible portion of s0 to obtain the sampled x. In practice, we would expect that a slower annealing schedule would yield samples more in agreement with the stationary distribution of p1(s|s′), but we explored this aggressive annealing schedule in order to obtain faster training.
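As a complement, here is a minimal sketch of the annealed generation procedure just described, starting from the broad Gaussian p* and halving the temperature at each step. It reuses the same toy assumptions as the training sketch above (an identity-plus-tanh stand-in for the learned operator and σ₁ = 0.1), so it only illustrates the control flow, not a trained model.

```python
import numpy as np

rng = np.random.RandomState(1)

def operator_params(s, T, alpha=0.5, sigma1=0.1):
    # Same illustrative conservative Gaussian operator as in the training sketch.
    mu = (1 - alpha) * s + alpha * np.tanh(s)
    return mu, np.sqrt(T) * sigma1

def generate(dim=784, sigma2_data=1.0, sigma1=0.1):
    T_max = sigma2_data / sigma1 ** 2               # Eq. (4)
    K = int(np.ceil(np.log2(T_max)))                # Eq. (5)
    s = np.sqrt(sigma2_data) * rng.randn(dim)       # s_K ~ broad Gaussian p*
    T = T_max
    for _ in range(K):                              # K cooling steps, halving the temperature
        mu, sigma = operator_params(s, T)
        s = mu + sigma * rng.randn(dim)             # s_{t-1} ~ p_T(s | s' = s_t)
        T /= 2.0
    return s                                        # the visible portion of s_0 is the sample

sample = generate()
```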
The marginal probability of v = x at the end of the above K-step process is thus
p(x) = \int_{s_1^K} p_{T_0}(s_0 = x | s_1) \left( \prod_{t=2}^{K} p_{T_t}(s_{t-1} | s_t) \right) p^*(s_K) \, ds_1^K   (6)

where T_t is an annealing schedule with T_0 = 1 and T_K = T_max, and p^* is the “starting distribution”, such as the Gaussian of variance σ²_data. We can rewrite this as follows by taking the log and multiplying and dividing by an arbitrary distribution q(s_1, . . . , s_K) decomposed into conditionals q_{T_t}(s_t | s_{t−1}):
log p(x) = log \int_{s_1^K} q_{T_0}(x)\, q_{T_1}(s_1 | s_0 = x) \left( \prod_{t=2}^{K} q_{T_t}(s_t | s_{t-1}) \right) \frac{ p_{T_0}(s_0 = x | s_1) \left( \prod_{t=2}^{K} p_{T_t}(s_{t-1} | s_t) \right) p^*(s_K) }{ q_{T_0}(x)\, q_{T_1}(s_1 | s_0 = x) \left( \prod_{t=2}^{K} q_{T_t}(s_t | s_{t-1}) \right) } \, ds_1^K   (7)
where we understand that s0 = x. Now we can apply Jensen’s inequality as usual to obtain the variational bound
log p(x) ≥ L = \int_{s_1^K} q_{T_0}(x)\, q_{T_1}(s_1 | s_0 = x) \left( \prod_{t=2}^{K} q_{T_t}(s_t | s_{t-1}) \right) \log \frac{ p_{T_0}(s_0 = x | s_1) \left( \prod_{t=2}^{K} p_{T_t}(s_{t-1} | s_t) \right) p^*(s_K) }{ q_{T_0}(x)\, q_{T_1}(s_1 | s_0 = x) \left( \prod_{t=2}^{K} q_{T_t}(s_t | s_{t-1}) \right) } \, ds_1^K.   (8)
This bound is valid for any q but will be tight when q(sK , sK−1, . . . , s1|s0) = p(sK , sK−1, . . . , s1|s0), and otherwise can be used to obtain a variational training objective. Note that both q and p can be decomposed as a product of one-step conditionals. Here, we can make most of the qTt transition probabilities match their corresponding pTt transition probabilities exactly, i.e., for 1 ≤ t < K we use qTt(s|s′) = pTt(s|s′). (9) The only approximations will be on both ends of the sequence:
• Sampling exactly from the model’s p(v = x) is typically not feasible (it involves the usual posterior inference, e.g., as used in VAEs), but as explained below we will exploit properties of the algorithm to approximate this efficiently. We call the chosen approximation q1(v).
• At the last step, the optimal qTK (sK |sK−1) is not simply the model’s transition operator at temperature TK , because this conditional also involves the marginal “starting distribution” p∗(sK). However, because we have picked TK large enough to make samples from qTmax(sK |sK−1) dominated by noise of the same variance as that of p∗, we expect the approximation to be good too.
3.1 ESTIMATING THE LOG-LIKELIHOOD USING IMPORTANCE SAMPLING
In practice we cannot compute L exactly (nor its gradient), but we can easily obtain an unbiased estimator of L (or of its gradient) by sampling s_1^K from the q distributions, i.e., approximate the L integral by a single Monte-Carlo sample. This is what is done by the training procedure outlined in Algorithm 1, which thus performs stochastic gradient ascent on the variational bound L, and this will
tend to also push up the log-likelihood log p(x) of training examples x. Note that such variational bounds have been used successfully in many learning algorithms in the past (Kingma & Welling, 2013; Lamb et al., 2016).
We derive an estimate of the negative log-likelihood by the following procedure. For each training example x, we sample a large number of diffusion paths. We then use the following formulation to estimate the log-likelihood:
log p(x) = log E_{s_1^K \sim q_{T_1}(s_1 | s_0 = x) \prod_{t=2}^{K} q_{T_t}(s_t | s_{t-1})} \left[ \frac{ p_{T_0}(s_0 = x | s_1) \left( \prod_{t=2}^{K} p_{T_t}(s_{t-1} | s_t) \right) p^*(s_K) }{ q_{T_1}(s_1 | s_0 = x) \left( \prod_{t=2}^{K} q_{T_t}(s_t | s_{t-1}) \right) } \right]   (10)
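A sketch of the corresponding importance-sampling estimator is given below, again under the toy Gaussian operator used in the earlier sketches. In a real setting the q and p conditionals would come from the learned operator, and the exact temperature indexing of Eq. (10) is simplified here by using the same temperature for the forward and backward conditionals of each step.

```python
import numpy as np

rng = np.random.RandomState(2)

def gaussian_log_prob(x, mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2))

def operator_params(s, T, alpha=0.5, sigma1=0.1):
    mu = (1 - alpha) * s + alpha * np.tanh(s)       # placeholder for the learned network
    return mu, np.sqrt(T) * sigma1

def log_likelihood_estimate(x, n_chains=100, sigma2_data=1.0, sigma1=0.1):
    T_max = sigma2_data / sigma1 ** 2
    K = int(np.ceil(np.log2(T_max)))
    log_w = np.empty(n_chains)
    for i in range(n_chains):
        s, T, lw = x, 1.0, 0.0
        for _ in range(K):
            mu_q, sig_q = operator_params(s, T)
            s_next = mu_q + sig_q * rng.randn(*x.shape)       # walk-away sample from q
            lw -= gaussian_log_prob(s_next, mu_q, sig_q)      # - log q_T(s_t | s_{t-1})
            mu_p, sig_p = operator_params(s_next, T)
            lw += gaussian_log_prob(s, mu_p, sig_p)           # + log p_T(s_{t-1} | s_t)
            s, T = s_next, 2.0 * T
        lw += gaussian_log_prob(s, np.zeros_like(s), np.sqrt(sigma2_data))  # + log p*(s_K)
        log_w[i] = lw
    m = log_w.max()                                  # log-mean-exp of the importance weights
    return m + np.log(np.mean(np.exp(log_w - m)))

x = rng.randn(10)
print(log_likelihood_estimate(x))
```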
4 TRANSITION OPERATORS FOR VARIATIONAL WALKBACK
Up to now we have not specified what the form of the transition operators should be. Two main variants are possible here. Either we directly parametrize the transition operator, like with denoising auto-encoders or generative stochastic networks, or we obtain our transition operator implicitly from some energy function, for example by applying some form of Gibbs sampling or Langevin MCMC to derive a transition operator associated with the energy function.
An advantage of the direct parametrization is that it eliminates the constraint to have symmetric weights, which is interesting from the point of view of biological plausibility of such algorithms. An advantage of the energy-based parametrization is that at the end of the day we get an energy function which could be used to compute the unnormalized joint probability of visible and latent variables. However, note that in both cases we can easily get an estimator of the log-likelihood by simply using our lower bound L, possibly improved by doing more expensive inference for pTK (sK |sK−1).
4.1 PARAMETRIC TRANSITION OPERATOR
In our experiments we considered Bernoulli and isotropic Gaussian transition operators for binary and real-valued data respectively.
When we sample from the transition operator we do not attempt to pass gradients through the sampling operation. Accordingly, backpropagation is performed locally on each step of the walk-back, and there is no flow of gradient between multiple walk-back steps.
Additionally, we use a “conservative” transition operator that averages its input image together with the sample from the learned distribution (or takes a weighted average with a fixed α weighting). Just after parameter initialization, the distribution output by the transition operator is essentially random, so without this averaging it would be very difficult for the network to learn to reconstruct the value at the previous step.
Bernoulli Transition Operator
ρ = sigmoid( ((1 − α) · x_{t−1} + α · F_ρ(x_{t−1})) / T_t )   (11)
Gaussian Transition Operator
µ = (1 − α) · x_{t−1} + α · F_µ(x_{t−1})   (12)

σ = sigmoid( T_t · log(1 + e^{F_σ(x_{t−1})}) )   (13)
F_ρ, F_µ, F_σ are functions (in our case neural networks) which take the previous x value from the walkback chain and return the parameters of the transition distribution (ρ for the Bernoulli operator, and µ and σ for the Gaussian operator). T_t is the temperature, which depends on the walkback step t, and x_{t−1} is the previous value in the walkback chain.
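A small NumPy sketch of Eqs. (11)–(13) follows. The fixed random linear map standing in for the learned networks F_ρ, F_µ, F_σ, the dimensionality D, and the α value are all our own illustrative assumptions so that the snippet runs on its own.

```python
import numpy as np

rng = np.random.RandomState(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed random linear map stands in for the learned networks, purely as a placeholder.
D = 16
A = 0.1 * rng.randn(D, D)
F_rho = F_mu = F_sigma = lambda x: x @ A

def bernoulli_step(x_prev, T, alpha=0.5):
    # Eq. (11): mix the previous state with the network output, then divide by the temperature.
    rho = sigmoid(((1 - alpha) * x_prev + alpha * F_rho(x_prev)) / T)
    x_next = (rng.rand(*x_prev.shape) < rho).astype(float)
    return x_next, rho

def gaussian_step(x_prev, T, alpha=0.5):
    # Eqs. (12)-(13): conservative mean, temperature-scaled softplus passed through a sigmoid.
    mu = (1 - alpha) * x_prev + alpha * F_mu(x_prev)
    sigma = sigmoid(T * np.log1p(np.exp(F_sigma(x_prev))))
    x_next = mu + sigma * rng.randn(*x_prev.shape)
    return x_next, (mu, sigma)

x_real = rng.randn(D)
x_bin = (rng.rand(D) < 0.5).astype(float)
next_real, _ = gaussian_step(x_real, T=1.0)
next_bin, _ = bernoulli_step(x_bin, T=1.0)
```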
5 RELATED WORK
Contrastive Divergence
This algorithm is clearly related to the contrastive divergence algorithm with k = K steps (CD-k). The CD-k algorithm approximates the log-likelihood gradient by trying to match the data-clamped sufficient statistics with the sufficient statistics obtained after k steps of the transition operator. The parameter update is the difference of these sufficient statistics, which also corresponds to pushing down the energy of the data-clamped configuration while pushing up the energy of the configuration reached after k steps of the transition operator.
Two important differences are that, because the temperature is increasing in the variational walkback procedure,
1. the energy gradients ∂E(s)/∂s do not cancel each other telescopically along the chain from s_0 to s_K,
2. as t increases we move more and more randomly rather than following the energy of the model, allowing us to hunt the areas near spurious modes more effectively.
A third difference is that the learning procedure is expressed in terms of the transition operator rather than directly in terms of the energy function. This allows one to thus train a transition operator directly, rather than indirectly via an energy function.
Generative Stochastic Networks
The Generative Stochastic Networks (GSN) algorithm proposed by Bengio et al. (2013b) learns a transition operator by iteratively injecting noise and minimizing the reconstruction error after a number of transition operator steps starting at a data point, and back-propagating through all these steps. One thing in common is the idea of using the walkback intuition instead of isotropic noise in order to converge more efficiently. A major difference is that the algorithm proposed for GSNs involves the minimization of overall reconstruction error (from the input data point x to the sampled reconstruction many steps later). This will tend to blur the learned distribution. Instead, the variational walk-back algorithm minimizes reconstruction error one step at a time along the walk-away trajectory.
In addition, GSNs require back-propagating through all the iterated steps, like the DRAW algorithm (Gregor et al., 2015). Instead, the variational walk-back algorithm only requires back-propagating through a single step of the transition operator at a time. This should make it easier to train because we avoid having to optimize a highly non-linear transformation obtained by the composition of many transition operator steps.
Non-Equilibrium Thermodynamics
There are two main differences between the Variational Walkback algorithm and Non-Equilibrium Thermodynamics:
1. Instead of isotropic noise to move away from the data manifold, we propose to use the model’s own transition operator, with the idea that it will “seek and destroy” the spurious modes much more efficiently than random moves.
2. Instead of injecting a fixed amount of noise per time step, we increase the noise as it moves away from the data manifold, and anneal the noise when we are close to the data manifold. This way, we can quickly reach the noise prior without losing the details of the data. Our model takes significantly fewer steps to walk away and back to the manifold, as compared to the 1000 steps used for Non-Equilibrium Thermodynamics.
Annealed Importance Sampling (AIS)
Annealed Importance Sampling is a sampling procedure. Like variational walkback, it uses an annealing schedule corresponding to a range of temperatures from infinity to 1. However, AIS is used to estimate a partition function, whereas variational walkback is meant to provide a good variational lower bound for training a transition operator.
Reverse Annealed Importance Sampling Estimator (RAISE)
RAISE is a reverse AIS, as it starts from a data point and then increases the temperature. In this way it is similar to the Q-chain in variational walkback. The advantage of RAISE over AIS is that it yields an estimator of the log-likelihood that tends to be pessimistic rather than optimistic, which makes it better as an evaluation criterion.
Like AIS, RAISE estimates the log-likelihood using a form of importance sampling, based on a product (over the chain) of the ratios of consecutive probabilities (not conditional probabilities from the model). Variational walkback does not work with estimates of the model’s unconditional probability, and instead works directly with a conditional probability defined by the transition operator. It is for this reason that variational walkback does not need to have an explicit energy function.
6 EXPERIMENTS
We evaluated the variational walkback on three datasets: MNIST, CIFAR (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015). The MNIST and CIFAR datasets were used as is, but the aligned and cropped version of the CelebA dataset was scaled from 218 x 178 pixels to 78 x 64 pixels and center-cropped at 64 x 64 pixels (Liu et al., 2015). For all of our experiments we used the Adam optimizer (Kingma & Ba, 2014) and the Theano framework (Al-Rfou et al., 2016). The training procedure and architecture are detailed in appendix A.
We reported samples on CIFAR, MNIST, CelebA and inpainting results on MNIST. Our inpainting results on MNIST are competitive with generative stochastic networks and show somewhat higher consistency between the given part of the image and the generated portion (Bengio et al., 2013c). However, we note that our samples on CIFAR and CelebA show the same “blurring effect” that has been observed with autoencoder-based generative models trained to minimize reconstruction loss (Lamb et al., 2016).
7 CONCLUSION AND FUTURE WORK
We have introduced a new form of walk-back and a new algorithm for learning transition operators or undirected graphical models. Our algorithm learns a transition operator by allowing the model to walk away from the data towards the noise prior and then teaching it to reverse each of these walk-away steps, i.e., to transition back towards the data manifold. Variational walk-back increases the temperature along the chain as it is moving further away from the data manifold, and inversely, anneals the temperature at generation time, as it gets closer to the estimated manifold. This allows the training procedure to quickly find and remove dominant spurious modes. Learning a transition operator also allows our model to learn only a conditional distribution at each step. This is much easier to learn, since it only needs to capture a few modes per step. The model also only locally carves the energy function, which means that it does not have to learn the entire joint probability distribution; it only has to make sure that, around the data and everywhere else it puts probability mass, the energy gradient points towards the data.
Our experimental results have confirmed that the model can walk towards the data manifold in a few steps, even when the modes are sharp.
Future work should extend this algorithm and experiments in order to incorporate latent variables. The state would now include both the visible ~x and some latent ~h. Essentially the same procedure can be run, except for the need to initialize the chain with a state ~s = (~x, ~h) where ~h would ideally be an estimate of the posterior distribution of ~h given the observed data point ~x. Another interesting direction to expand this work is to replace the log-likelihood objective at each step by a GAN-like objective, thus avoiding the need to inject noise independently on each of the pixels during one application of the transition operator, and allowing the latent variable sampling to inject all the required high-level decisions associated with the transition. Based on the earlier results from Bengio et al. (2013a), sampling in the latent space rather than in the pixel space should allow for better generative models and even better mixing between modes.
ACKNOWLEDGMENTS
The authors would like to thank Benjamin Scellier and Aaron Courville for their helpful feedback and discussions, as well as NSERC, CIFAR, Google, Samsung, Nuance, IBM and Canada Research Chairs for funding, and Compute Canada for computing resources.
A ARCHITECTURE DETAILS
The architecture that was used for the CelebA and CIFAR datasets was similar to the architecture used by Lamb et al. (2016), with a convolutional encoder followed by two fully connected hidden layers, followed by a decoder with strided convolutions (Radford et al., 2015). Batch norm was applied in all layers except for the last layer. For all layers except for the last we used the tanh activation function. Surprisingly, we were unable to obtain good results using the ReLU or leaky ReLU activations.
On the binarized MNIST dataset we used a transition operator with Bernoulli outputs. A feedforward neural network was used to estimate the parameters (per-pixel probabilities) for the Bernoulli outputs. This neural network consisted of a single hidden layer with 4096 hidden units and the tanh activation function.
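For concreteness, here is a sketch of the MNIST transition-operator network just described (784 inputs, one tanh hidden layer of 4096 units, per-pixel Bernoulli logits). The weight initialization scheme is our own assumption, since only the layer sizes and activation are specified above.

```python
import numpy as np

rng = np.random.RandomState(4)
n_in, n_hidden = 784, 4096                     # binarized MNIST, single hidden layer

W1 = rng.randn(n_in, n_hidden) * np.sqrt(1.0 / n_in)
b1 = np.zeros(n_hidden)
W2 = rng.randn(n_hidden, n_in) * np.sqrt(1.0 / n_hidden)
b2 = np.zeros(n_in)

def f_rho(x_prev):
    # Pre-sigmoid output F_rho(x_{t-1}); the sigmoid and the division by the temperature
    # are applied afterwards in the Bernoulli operator of Eq. (11).
    h = np.tanh(x_prev @ W1 + b1)
    return h @ W2 + b2

x_prev = (rng.rand(n_in) > 0.5).astype(float)   # a fake binarized image
per_pixel_logits = f_rho(x_prev)
```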
B WALKBACK PROCEDURE DETAILS
The variational walkback algorithm has three unique hyperparameters. One is the number of walkback steps performed during training. Another is the number of walkback steps performed when sampling from the model. Still another is the temperature schedule used during training, reconstruction, or sampling.
The most conservative hyperparameter setting would involve using a large number of walkback steps during training and slowly increasing the temperature. However, this could make training slow, and if too few steps are used, the end of the walkback chain will not match the noise prior, leading to low quality samples.
A dynamic approach to setting the number of walkback steps and the temperature schedule may be possible, but in this work we set these hyperparameters empirically. We found that during training a temperature schedule of T = T_0 (√2)^t produced good results, where T_0 = 1.0 is the initial temperature and t is the step index. During sampling, we found good results using the reverse schedule: T = (√2)^N / (√2)^t, where t is the step index and N is the total number of sampling steps.
For MNIST, we achieved our results using 8 training steps of walkback. For CIFAR, we used 15 training steps and 20 sampling steps. For CelebA, we used 30 training steps and 35 sampling steps. In general, we found that we could achieve higher quality results by using more steps during sampling than we used during training. We found that more difficult datasets, like CIFAR and CelebA, required longer walkback chains. Finally, our model is able to achieve results competitive with Non-Equilibrium Thermodynamics (Sohl-Dickstein et al., 2015), despite that method requiring chains with far more steps (1000 steps for MNIST).
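For reference, the two schedules written out as code (whether the step index starts at 0 or 1 is our own assumption; only the functional form is given above):

```python
import numpy as np

def train_temperature(t, T0=1.0):
    # Training schedule: T = T0 * sqrt(2)^t, with t the walkback step index.
    return T0 * np.sqrt(2.0) ** t

def sample_temperature(t, N):
    # Sampling schedule (reverse): T = sqrt(2)^N / sqrt(2)^t.
    return np.sqrt(2.0) ** (N - t)

# Example with the CIFAR settings reported above: 15 training steps, 20 sampling steps.
train_T = [train_temperature(t) for t in range(15)]
sample_T = [sample_temperature(t, N=20) for t in range(20)]
```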
C ALTERNATIVE FORMULATION OF VARIATIONAL BOUND
The marginal probability of v = x at the end of the above K-step process is thus
p(x) = \int_{s_1^K} \left( \prod_{t=1}^{K} p_{T_t}(s_{t-1} | s_t) \right) p^*(s_K) \, ds_1^K   (14)

where T_t is an annealing schedule with T_0 = 1 and T_K = T_max, and p^* is the “starting distribution”, such as the Gaussian of variance σ²_data. We can rewrite this as follows by taking the log and multiplying and dividing by an arbitrary distribution q(s_1, . . . , s_K) decomposed into conditionals q_{T_t}(s_t | s_{t−1}):

q(s_0, s_1, ..., s_K) = \left( \prod_{t=1}^{K} q_{T_t}(s_t | s_{t-1}) \right) q(s_K)   (15)
giving us:
log p(x) = log \int_{s_1^K} q_{T_0}(x) \left( \prod_{t=1}^{K} q_{T_t}(s_t | s_{t-1}) \right) \frac{ \left( \prod_{t=1}^{K} p_{T_t}(s_{t-1} | s_t) \right) p^*(s_K) }{ q_{T_0}(x) \left( \prod_{t=1}^{K} q_{T_t}(s_t | s_{t-1}) \right) } \, ds_1^K   (16)
where we understand that s0 = x. Now we can apply Jensen’s inequality as usual to obtain the variational bound
log p(x) ≥ L = \int_{s_1^K} q_{T_0}(x) \left( \prod_{t=1}^{K} q_{T_t}(s_t | s_{t-1}) \right) \log \frac{ \left( \prod_{t=1}^{K} p_{T_t}(s_{t-1} | s_t) \right) p^*(s_K) }{ q_{T_0}(x) \left( \prod_{t=1}^{K} q_{T_t}(s_t | s_{t-1}) \right) } \, ds_1^K.   (17)
D TIGHTNESS OF THE VARIATIONAL BOUND
We present an argument that running the walkback chain for a sufficient number of steps will cause the variational bound to become tight.
Consider a sequence s_t, ..., s_1 generated in that order by our model p through a sequence of applications of the transition operator T, i.e., p(s_1, ..., s_t) = p(s_t) T(s_{t−1}|s_t) ... T(s_1|s_2), so that p(s_{n−1}|s_n) = T(s_{n−1}|s_n), but note that p(s_n|s_{n−1}) ≠ p(s_{n−1}|s_n). Let π(s) denote the stationary distribution associated with T. Note that T and π are related by the detailed balance equation, i.e., T(s|s′)π(s′) = T(s′|s)π(s). We want to approximate the posterior
p(s_t, s_{t−1}, ..., s_2 | s_1)
  = ∏_{n=2}^{t} p(s_n | s_{n−1})
  = ∏_{n=2}^{t} p(s_{n−1} | s_n) p(s_n) / p(s_{n−1})                      (by Bayes' rule)
  = [p(s_t) / p(s_1)] ∏_{n=2}^{t} T(s_{n−1} | s_n)                        (by telescopic cancellation and the definition of T)
  = [p(s_t) / p(s_1)] ∏_{n=2}^{t} T(s_n | s_{n−1}) π(s_{n−1}) / π(s_n)    (by detailed balance)
  = [p(s_t) / π(s_t)] [π(s_1) / p(s_1)] ∏_{n=2}^{t} T(s_n | s_{n−1})      (by telescopic cancellation)
  = [p(s_t) / π(s_t)] [π(s_1) / p(s_1)] ∏_{n=2}^{t} p(s_n | s_{n−1})      (again by the definition of T)
So our approximation error in the posterior is the factor [p(s_t) / π(s_t)] [π(s_1) / p(s_1)].
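The detailed-balance and telescoping steps used above can be checked numerically on a toy reversible chain; the following small NumPy example (entirely our own construction, using a Metropolis operator with a uniform proposal over three states) verifies both identities:

```python
import numpy as np

# A small reversible chain: Metropolis with a uniform proposal, so that
# detailed balance T(s|s') pi(s') = T(s'|s) pi(s) holds by construction.
pi = np.array([0.5, 0.3, 0.2])
n = len(pi)
T = np.zeros((n, n))                     # T[j, i] = T(j | i), columns sum to 1
for i in range(n):
    for j in range(n):
        if j != i:
            T[j, i] = (1.0 / n) * min(1.0, pi[j] / pi[i])
    T[i, i] = 1.0 - T[:, i].sum()

# Detailed balance check.
for i in range(n):
    for j in range(n):
        assert np.isclose(T[j, i] * pi[i], T[i, j] * pi[j])

# Telescoping identity on one path s_1, ..., s_t:
# prod_n T(s_{n-1}|s_n) = (pi(s_1)/pi(s_t)) * prod_n T(s_n|s_{n-1}).
path = [0, 2, 1, 0, 2]
lhs = np.prod([T[path[k - 1], path[k]] for k in range(1, len(path))])
rhs = (pi[path[0]] / pi[path[-1]]) * \
      np.prod([T[path[k], path[k - 1]] for k in range(1, len(path))])
assert np.isclose(lhs, rhs)
```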
If t is large enough, then s_1 (being at the end of the generative sequence) has pretty much converged, i.e., p(s_1) ≈ π(s_1). If we throw in temperature annealing along the way (now the notation would have to be changed to put an index n on both p and T), with the initial temperature being very high, then we can hope that the initial Gaussian p(s_t) is very similar to the stationary distribution at high temperature π(s_t).
These arguments suggest that as we make t larger and the final (initial) temperature larger as well, the approximation becomes better. | 1. What is the main contribution of the paper regarding probabilistic models?
2. What are the strengths and weaknesses of the proposed method?
3. How does the reviewer assess the significance of the approach, particularly in comparison to earlier methods?
4. What additional evaluations or improvements would enhance the paper's quality? | Review | Review
The authors present a method for training probabilistic models by maximizing a stochastic variational-lower-bound-type objective. Training involves sampling and then learning a transition-based inference to "walk back" samples to the data. Because of its focus on transitions, it can be used to learn a raw transition operator rather than purely learning an energy-based model. The objective is intuitively appealing because of its similarity to previous successful but less principled training methods for MRFs like Contrastive Divergence.
The idea for the algorithm is appealing, and it looks like it could find a nice place in the literature. However, the submission in its current form is not yet ready for publication. Experiments are qualitative and the generated samples are not obviously indicative of a high model quality. As pointed out elsewhere, the mathematical analysis does not currently demonstrate tightness of the variational bound in the case of a learned transition operator. More evaluation using e.g. annealed importance sampling to estimate held-out likelihoods is necessary. Assuming that the analysis can be repaired, the ability to directly parametrize a transition operator, an interesting strength of this method, should be explored in further experiments and contrasted with the more standard energy-based modeling.
This looks like a promising idea, and other reviews and questions have already raised some important technical points which should help strengthen this paper for future submission. |
• Yet another motivation for the particular approach presented here is that it innovates in the rarely explored direction of parametrizing directly the generative model via a transition operator, rather than via an explicit probability function or energy function. This idea has already been discussed in the context Generative Stochastic Networks (GSNs) (Bengio et al., 2013b), a generalization of denoising auto-encoders (DAEs) (Vincent et al., 2008) which interprets the auto-encoder as estimating the gradient of an energy function (Alain & Bengio, 2014) or as a transition operator (Bengio et al., 2013c). An advantage of being able to parametrize directly the generator is seen with GANs and DAEs: we directly parametrize and learn the function which will be used to perform the task of interest (e.g. generating answers to some questions). Instead, the traditional approach is to parametrize a probability function or energy function (e.g., with a Boltzmann machine) and then then use another procedure (the MCMC method of your choice) to sample from it and do inference. Another important reason for exploring algorithms for directly learning a transition operator is that they put less constraint on the form of the transition operator, compared with a transition operator derived from an energy function. More specifically, neural net implementations of transition operators derived from an MCMC typically require the presence of symmetric weights (due to the symmetry of the second derivative of the energy with respect to a pair of units in the neural network), as discussed by Bengio et al. (2015). When we consider a biologically plausible implementation of these learning algorithms, the weight symmetry constraint (Wij = Wji) is not reasonable as a hard constraint. Instead, if the transition operator (rather than the energy function) is the object being parametrized and learned, then there is no such hard constraint.
1.2 GENERAL THEORY
We introduce a novel variational bound which is an alternative to and improves upon the traditional reconstruction error as a training objective for DAEs and GSNs. Similar variational bounds have been used for VAEs as well as for the non-equilibrium thermodynamics generative models (SohlDickstein et al., 2015). A distribution P over a chain of samples is defined, which corresponds to iteratively applying transition operators with shared parameters, starting from a pure noise initial state. We would like this process to produce training examples. An inverting flow Q is defined starting from a training example (the “walk-away” trajectory), and following the transition operator of the model, i.e., estimating the posterior distribution of the generative chain produced by P , given that it were landing at a training example. If the model does not match the data distribution, that chain Q will tend to walk away from the training samples, and we want to inhibit that by training P to “walk back”. Instead of using a completely different parametrization for the variational approximation of the posterior (theQ distribution), like in VAEs and non-equilibrium dynamics, we propose to exploit the decomposition of P as a series of stochastic transformations in order to parametrize Q with the same parameters as P , with the step-wise estimated posterior matching the correct one (from P ) for all but the last step of the walk-away trajectory. To make the approximation in the
last step of the chain of walk-away steps better (and thus the variational bound tighter) we introduce the idea of gradually increasing temperature at each step of the walk-away Q chain of transitions (or gradually reducing temperature, at each step of the corresponding walkback trajectory under P ). This also has the advantage that the training procedure will more easily converge to and eliminate spurious modes (those modes of the model where there is no nearby training data). This is because the walk-away Q chain will be making large steps towards the dominant and most attractive modes when the temperature becomes large enough. Unless those modes are near data points, the walkback algorithm will thus “seek and destroy” these modes, these spurious modes.
We present a series of experimental results on several datasets illustrating the soundness of the proposed approach on the MNIST, CIFAR-10 and CelebA datasets.
2 MIXING-FREE TRAINING FRAMEWORK BASED ON THE WALKBACK IDEA
2.1 MAXIMUM LIKELIHOOD TRAINING OF UNDIRECTED GRAPHICAL MODELS
Let v denote the vector of visible units and h denote the vector of hidden random variables, with the full state of the model being s = (v,h). Let pθ denote the model distribution, with joint energy function Eθ and parameter vector θ:
pθ(s) := e−Eθ(s)
Zθ , (1)
where Zθ is the partition function
Zθ :=
∫ e−Eθ(s)ds. (2)
Let pD be the training distribution, from which a sample D is typically drawn to obtain the training set. The maximum likelihood parameter gradient is
Ev∼pD [ −∂ log pθ(v)
∂θ
] = Ev∼pD,h∼pθ(h|v) [ ∂Eθ(v,h)
∂θ
] − Es∼pθ(s) [ ∂Eθ(s)
∂θ
] (3)
which is zero when training has converged, with expected energy gradients in the positive phase (under pD(v)pθ(h|v)) matching those under the negative phase (under pθ(s)). Note that in the (common) case of a log-linear model, the energy gradient (with respect to parameters) corresponds to the sufficient statistics of the model. Training thus consists in matching the shape of two distributions, as captured by the sufficient statistics: the positive phase distribution (influenced by the data, via the visible) and the negative phase distribution (where the model is free-running and generating configurations by itself).
2.2 MIXING-FREE TRAINING FRAMEWORK FOR UNDIRECTED GRAPHICAL MODELS
The basic idea of the proposed mixing-free training framework for undirected graphical models is the following. Instead of trying to match the whole positive phase and negative phase distributions (each of which require a difficult sampling operation, generally with an MCMC that may take very long time to mix between well separated modes), we propose to only match the shape of the energy function locally, around well-chosen points st. Another way to think about this is that instead of trying to directly maximize the likelihood of pθ which requires expensive inference (ideally an MCMC) in the inner loop of training (for each example v ∼ pD), we would like to learn a transition operator pT (st+1|st) such that following it at temperature T = 1 would gradually move the state st towards the data generating distribution.
For this purpose, we propose to use a walkback strategy similar to the one introduced by Bengio et al. (2013c), illustrated in Algorithm 1. The idea is to start from a configuration of s which is compatible with the observed data x, let the state evolve according to our transition operator, and then punish it for these moves, making it more likely to make backwards transitions on this trajectory. If learning was completed, the only moves that would remain are those between highly probable configurations under the data generating distribution. The other ones would be “punished”,
like a child walking away from its designated task and forced to walk back (towards the data)1. Following the model’s inclination in order to generate this random trajectory is more efficient than simply adding noise (like in the denoising auto-encoder (Vincent et al., 2008) or the non-equilibrium dynamics (Sohl-Dickstein et al., 2015) algorithms) because it makes the learning procedure focus its computation on state configurations corresponding to spurious modes to be eliminated. To make sure these spurious modes are approached efficiently, the proposed algorithm also includes the idea of gradually increasing temperature (i.e., the amount of noise) along this walk-away trajectory. At high temperature, the transition operator mixes very easily and quickly reaches the areas corresponding to large spurious modes.
Interestingly, all this comes out naturally of the variational bound presented below, rather than as something imposed in addition to the training objective.
Algorithm 1 VariationalWalkback(θ) Train a generative model associated with a transition operator pT (s|s′) at temperature T (temperature 1 for sampling from the actual model). This transition operator injects noise of variance Tσ2 at each step, where σ2 is the noise level at temperature 1. Require: Transition operator pT (s|s′) from which one can both sample and compute the gradient
of log pT (s|s′) with respect to parameters θ, given s and s′. Require: Precomputed σ2data, the overall variance (or squared diameter) of the data.
repeat Tmax ← σ 2 data
σ2
K ← log2 Tmax Sample x ∼ data (or equivalently sample a minibatch to parallelize computation and process each element of the minibatch independently) Let s0 = (x) and initial temperature T = 1, initialize L = 0 for t = 1 to K do
Sample st ∼ pT (s|st−1) Increment L ← L+ log pT (st−1|st) Update parameters with log likelihood gradient ∂ log pT (st−1|st)∂θ Increase temperature with T ← 2T
end for Increment L ← L+ log p∗(sK)
until convergence (monitoring L on a validation set and doing early stopping)
3 VARIATIONAL LOWER BOUND ON THE LOG-LIKELIHOOD
Let us first consider a way in which our model could approximately generate samples according to our model and the associated transition operator pT (s|s′). That process would start by sampling a state sK inside a volume that contains all the data, e.g., with a broad Gaussian p∗(sK) whose variances are set according to the training data. Then we would sample sK−1 from pTmax(s|s′ = sK), where Tmax is a high enough temperature so that the noise dominates the signal and is strong enough to move the state across the whole domain of the data on the visible portion of the state. If σ2data is the maximum variance of the data (corresponding to the visible dimensions of the state) and σ2 is the amount noise injected by the transition operator on the visible units at temperature 1, then we could pick
Tmax = σ2data σ2
(4)
to achieve that goal. From that point on we are going to continue sampling the “previous” state st according to pT (s|s′ = st+1) while gradually cooling the temperature, e.g. by dividing it by 2 after each step. In that case we would need
K = log2 Tmax (5)
1This analogy with a child was first used in talks by Geoff Hinton when discussing constrastive divergence (personal communication)
steps to reach a temperature of 1. Finally, we would look at the visible portion of s0 to obtain the sampled x. In practice, we would expect that a slower annealing schedule would yield samples more in agreement with the stationary distribution of p1(s|s′), but we explored this aggressive annealing schedule in order to obtain faster training.
The marginal probability of v = x at the end of the above K-step process is thus
p(x) = ∫ sK1 pT0(s0 = x|s1) (∏K t=2 pTt(st−1|st) ) p∗(sK)ds K 1 (6)where Tt is an annealing schedule with T0 = 1 and TK = Tmax and p∗ is the “starting distribution”, such as the Gaussian of variance σ2data. We can rewrite this as follows by taking the log and multiplying and dividing by an arbitrary distribution q(s1, . . . , sK) decomposed into conditionals qTt(st|st−1):
log p(x) = log ∫ sK1 qT0(x)qT1(s1|s0(x, )) ( K∏ t=2 qTt(st|st−1) ) pT0(s0 = x|s1) (∏K t=2 pTt(st−1|st) ) p∗(sK)
qT0(x)qT1(s1|s0 = x) (∏K t=2 qTt(st|st−1) ) dsK1 (7)
where we understand that s0 = x. Now we can apply Jensen’s inequality as usual to obtain the variational bound
log p(x) ≥ L
= ∫ sK1
qT0(x)qT1(s1|s0 = x) ( K∏ t=2 qTt(st|st−1) )
log pT0(s0 = x|s1)
(∏K t=2 pTt(st−1|st) ) p∗(sK)
qT0xqT1(s1|s0 = x) (∏K t=2 qTt(st|st−1) ) dsK1 . (8)
This bound is valid for any q but will be tight when q(sK , sK−1, . . . , s1|s0) = p(sK , sK−1, . . . , s1|s0), and otherwise can be used to obtain a variational training objective. Note that both q and p can be decomposed as a product of one-step conditionals. Here, we can make most of the qTt transition probabilities match their corresponding pTt transition probabilities exactly, i.e., for 1 ≤ t < K we use qTt(s|s′) = pTt(s|s′). (9) The only approximations will be on both ends of the sequence:
• Sampling exactly from the model’s p(v = x) is typically not feasible exactly (it involves the usual posterior inference, e.g., as used in VAEs) but as explained below we will exploit properties of the algorithm to approximate this efficiently. We call the chosen approximation q1(v).
• At the last step, the optimal qTK (sK |sK−1) is not simply the model’s transition operator at temperature TK , because this conditional also involves the marginal “starting distribution” p∗(sK). However, because we have picked TK large enough to make samples from qTmax(sK |sK−1) dominated by noise of the same variance as that of p∗, we expect the approximation to be good too.
3.1 ESTIMATING THE LOG-LIKELIHOOD USING IMPORTANCE SAMPLING
In practice we cannot compute L exactly (nor its gradient), but we can easily obtain an unbiased estimator of L (or of its gradient) by sampling sK1 from the q distributions, i.e., approximate the L integral by a single Monte-Carlo sample. This is what is done by the training procedure outlined in Algorithm 1, which thus performs stochastic gradient ascent on the variational boundL, and this will
tend to also push up the log-likelihood log p(x) of training examples x. Note that such variational bounds have been used successfully in many learning algorithms in the past (Kingma & Welling, 2013; Lamb et al., 2016).
We derive an estimate of the negative log-likelihood by the following procedure. For each training example x, we sample a large number of diffusion paths. We then use the following formulation to estimate the negative log-likelihood.
log p(x) = logEx∼pD,qT0 (x)qT1 (s1|s0(x,))( ∏K t=2 qTt (st|st−1))pT0(s0 = x|s1) (∏K t=2 pTt(st−1|st) ) p∗(sK)
qT0(x)qT1(s1|s0 = x) (∏K t=2 qTt(st|st−1) ) (10)
4 TRANSITION OPERATORS FOR VARIATIONAL WALKBACK
Up to now we have not specified what the form of the transition operators should be. Two main variants are possible here. Either we directly parametrize the transition operator, like with denoising auto-encoders or generative stochastic networks, or we obtain our transition operator implicitly from some energy function, for example by applying some form of Gibbs sampling or Langevin MCMC to derive a transition operator associated with the energy function.
An advantage of the direct parametrization is that it eliminates the constraint to have symmetric weights, which is interesting from the point of view of biological plausibility of such algorithms. An advantage of the energy-based parametrization is that at the end of the day we get an energy function which could be used to compute the unnormalized joint probability of visible and latent variables. However, note that in both cases we can easily get an estimator of the log-likelihood by simply using our lower bound L, possibly improved by doing more expensive inference for pTK (sK |sK−1).
4.1 PARAMETRIC TRANSITION OPERATOR
In our experiments we considered Bernoulli and isotropic Gaussian transition operators for binary and real-valued data respectively.
When we sample from the transition operator we do not attempt to pass gradients through the sampling operation. Accordingly, backpropagation is performed locally on each step of the walk-back, and there is no flow of gradient between multiple walk-back steps.
Additionally, we use a “conservative” transition operator that averages its input image together with the sample from the learned distribution (or takes a weighted average with a fixed α weighting) for the transition operator. Just after parameter initialization, the distribution learned by the transition operator’s output is essentially random, so it is very difficult for the network to learn to reconstruct the value at the previous step.
Bernoulli Transition Operator
ρ = sigmoid( (1− α) ∗ xt−1 + α ∗ Fρ(xt−1)
Tt ) (11)
Gaussian Transition Operator
µ = (1− α) ∗ xt−1 + α ∗ Fµ(xt−1) (12)
σ = sigmoid(Tt log(1 + e Fσ(xt−1))) (13)
Fρ, Fµ, Fσ are functions (in our case neural networks) which take the previous x value from the walkback chain and return estimates of the value of µ and σ respectively. T is the temperature which is dependent on the walkback step t. xt−1 is the previous value in the walkback chain.
5 RELATED WORK
Contrastive Divergence
This algorithm is clearly related to the contrastive divergence algorithm with k = T steps (CDk). The CD-k algorithm approximates the log-likelihood gradient by trying to match the sufficient statistics with the data clamped to the sufficient statistics after k steps of the transition operator. The parameter update is the difference of these sufficient statistics, which also corresponds to pushing down the energy of the data-clamped configuration while pushing up the energy of the random variables after k steps of the transition operator.
Two important differences are that, because the temperature is increasing in the variational walkback procedure,
1. the energy gradients ∂E(s)∂s do not cancel each other telescopically along the chain from s0 to sT ,
2. as t increases we move more and more randomly rather than following the energy of the model, allowing to hunt more effectively the areas near spurious modes.
A third difference is that the learning procedure is expressed in terms of the transition operator rather than directly in terms of the energy function. This allows one to thus train a transition operator directly, rather than indirectly via an energy function.
Generative Stochastic Networks
The Generative Stochastic Networks (GSN) algorithm proposed by Bengio et al. (2013b) learns a transition operator by iteratively injecting noise and minimizing the reconstruction error after a number of transition operator steps starting at a data point, and back-propagating through all these steps. One thing in common is the idea of using the walkback intuition instead of isotropic noise in order to converge more efficiently. A major difference is that the algorithm proposed for GSNs involves the minimization of overall reconstruction error (from the input data point x to the sampled reconstruction many steps later). This will tend to blur the learned distribution. Instead, the variational walk-back algorithm minimizes reconstruction error one step at a time along the walk-away trajectory.
In addition, the variational walkback GSNs require back-propagating through all the iterated steps, like the DRAW algorithm (Gregor et al., 2015). Instead the variational walk-back algorithm only requires back-propagating through a single step at a time of the transition operator. This should make it easier to train because we avoid having to optimize a highly non-linear transformation obtained by the composition of many transition operator steps.
Non-Equilibrium Thermodynamics
There are two main differences between the Variational Walkback algorithm and the NonEquilibrium Thermodynamics:
1. Instead of isotropic noise to move away from the data manifold, we propose to use the model’s own transition operator, with the idea that it will “seek and destroy” the spurious modes much more efficiently than random moves.
2. Instead of injecting a fixed amount of noise per time step, we increase the noise as it moves away from the data manifold, and anneal the noise when we are close to the data manifold. This way, we can quickly reach the noise prior without loosing the details of the data. Our model takes significantly fewer steps to walk away and back to the manifold, as compared to the 1000 steps used for Non-Equilibrium Thermodynamics.
Annealed Importance Sampling (AIS)
Annealed Importance Sampling is a sampling procedure. Like variational walkback, it uses an annealing schedule corresponding to a range of temperature from infinity to 1. It is used to estimate a partition function. Unlike Annealed Importance Sampling, variational walkback is meant to provide a good variational lower bound for training a transition operator.
Reverse Annealed Importance Sampling Estimator (RAISE)
RAISE is a reverse AIS, as it starts from a data point and then increases the temperature. In this way it is similar to the Q-chain in variational walkback. The advantage of RAISE over AIS is that it yields an estimator of the log-likelihood that tends to be pessimistic rather than optimistic, which makes it better as an evaluation criteria.
Like AIS, RAISE estimates the log-likelihood using a form of importance sampling, based on a product (over the chain) of the ratios of consecutive probabilities (not conditional probabilities from the model). Variational walkback does not work with estimates of the model’s unconditional probability, and instead works directly with a conditional probability defined by the transition operator. It is for this reason that variational walkback does not need to have an explicit energy function).
6 EXPERIMENTS
We evaluated the variational walkback on three datasets: MNIST, CIFAR (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015). The MNIST and CIFAR datasets were used as is, but the aligned and cropped version of the CelebA dataset was scaled from 218 x 178 pixels to 78 x 64 pixels and center-cropped at 64 x 64 pixels (Liu et al., 2015). For all of our experiments we used the Adam optimizer (Kingma & Ba, 2014) and the Theano framework (Al-Rfou et al., 2016). The training procedure and architecture are detailed in appendix A.
We reported samples on CIFAR, MNIST, CelebA and inpainting results on MNIST. Our inpainting results on MNIST are competitive with generative stochastic networks and show somewhat higher consistency between the given part of the image and the generated portion (Bengio et al., 2013c). However, we note that our samples on CIFAR and CelebA show the same “blurring effect” that has been observed with autoencoder-based generative models trained to minimize reconstruction loss (Lamb et al., 2016).
7 CONCLUSION AND FUTURE WORK
We have introduced a new form of walk-back and a new algorithm for learning transition operators or undirected graphical models. Our algorithm learns a transition operator by allowing the model to walk-away from the data towards the noise prior and then teaching it to actually to have its transitions trained to go backwards each of these walk-away steps, i.e., towards the data manifold. Variational walk-back increases the temperature along the chain as it is moving further away from the data manifold, and inversely, anneals the temperature at generation time, as it gets closer to the estimated manifold. This allows the training procedure to quickly find and remove dominant spurious modes. Learning a transition operator also allows our model to learn only a conditional distribution at each step. This is much easier to learn, since it only needs to capture a few modes per step. The model also only locally carves the energy function, which means that it does not have to learn the entire joint probability distribution, but rather steps towards the right direction, making sure that everywhere it puts probability mass as well as around the data, the energy gradient is pointing towards the data.
Our experimental results have confirmed that the model can walk towards the data manifold in a few steps, even when the modes are sharp.
Future work should extend this algorithm and experiments in order to incorporate latent variables. The state would now include both the visible ~x and some latent ~h. Essentially the same procedure can be run, except for the need to initialize the chain with a state ~s = (~x,~h) where ~h would ideally be an estimate of the posterior distribution of ~h given the observed data point ~x. Another interesting direction to expand this work is to replace the log-likelihood objective at each step by a GANlike objective, thus avoiding the need to inject noise independently on each of the pixels, during one application of the transition operator, and allowing the latent variable sampling to inject all the required high-level decisions associated with the transition. Based on the earlier results from Bengio et al. (2013a), sampling in the latent space rather than in the pixel space should allow for better generative models and even better mixing between modes.Bengio et al. (2013b)
ACKNOWLEDGMENTS
The authors would like to thank Benjamin Scellier and Aaron Courville for their helpful feedback and discussions, as well as NSERC, CIFAR, Google, Samsung, Nuance, IBM and Canada Research Chairs for funding, and Compute Canada for computing resources.
A ARCHITECTURE DETAILS
The architecture that was used for the CelebA and CIFAR dataset was similar to the architecture used by Lamb et al. (2016), with a convolutional encoder followed by two fully connected hidden layers, followed by a decoder with strided convolutions (Radford et al., 2015). Batch norm was applied in all layers except for the last layer. For all layers except for the last we used the tanh activation function. Surprisingly, we were unable to obtain good results using the RELU or Leaky RELU activation .
On the binarized MNIST dataset we used a transition operator with Bernoulli outputs. A feedforward neural network was used to estimate the parameters (per-pixel probabilities) for the Bernoulli outputs. This neural network consisted of a single hidden layer with 4096 hidden units and the tanh activation function.
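As an illustration, the MNIST operator network can be written as the minimal sketch below (assuming PyTorch and 28 × 28 = 784 binarized inputs; the input dimensionality and the direct sigmoid output are our assumptions, not details given in the text).

import torch
import torch.nn as nn

class MNISTBernoulliNet(nn.Module):
    """One-hidden-layer network (4096 tanh units) that maps the previous state
    to per-pixel Bernoulli probabilities, as described above."""
    def __init__(self, n_pixels=784, n_hidden=4096):
        super().__init__()
        self.hidden = nn.Linear(n_pixels, n_hidden)
        self.out = nn.Linear(n_hidden, n_pixels)

    def forward(self, x_prev):
        h = torch.tanh(self.hidden(x_prev))
        return torch.sigmoid(self.out(h))   # per-pixel probabilities in (0, 1)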
B WALKBACK PROCEDURE DETAILS
The variational walkback algorithm has three unique hyperparameters. One is the number of walkback steps performed during training. Another is the number of walkback steps performed when sampling from the model. Still another is the temperature schedule used during training, reconstruction, or sampling.
The most conservative hyperparameter setting would involve using a large number of walkback steps during training and slowly increasing the temperature. However, this could make training slow, and if too few steps are used, the end of the walkback chain will not match the noise prior, leading to low quality samples.
A dynamic approach to setting the number of walkback steps and the temperature schedule may be possible, but in this work we set these hyperparameters empirically. We found that during training a temperature schedule of T = T_0 (√2)^t produced good results, where T_0 = 1.0 is the initial temperature and t is the step index. During sampling, we found good results using the reverse schedule T = (√2)^N / (√2)^t, where t is the step index and N is the total number of sampling steps.
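For concreteness, the two schedules can be computed as in the small Python sketch below; the reading of the flattened superscripts as (√2)^t and (√2)^N/(√2)^t follows the formulas above.

import math

def training_temperature(t, T0=1.0):
    # training schedule: T = T0 * (sqrt(2))**t, growing with walkback step t
    return T0 * math.sqrt(2.0) ** t

def sampling_temperature(t, N):
    # reverse schedule for sampling: T = (sqrt(2))**N / (sqrt(2))**t
    return math.sqrt(2.0) ** N / math.sqrt(2.0) ** t

# e.g., the 8-step MNIST training schedule:
mnist_schedule = [training_temperature(t) for t in range(8)]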
For MNIST, we achieved our results using 8 training steps of walkback. For CIFAR, we used 15 training steps and 20 sampling steps. For CelebA, we used 30 training steps and 35 sampling steps. In general, we found that we could achieve higher quality results by using more steps during sampling than we used during training. We found that more difficult datasets, like CIFAR and CelebA, required longer walkback chains. Finally, our model is able to achieve results competitive with Non-Equilibrium Thermodynamics (Sohl-Dickstein et al., 2015), despite that method requiring chains with far more steps (1000 steps for MNIST).
C ALTERNATIVE FORMULATION OF VARIATIONAL BOUND
The marginal probability of v = x at the end of the above K-step process is thus
p(x) = ∫ (∏_{t=1}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K) ds_1^K    (14)
where T_t is an annealing schedule with T_0 = 1 and T_K = T_max and p* is the “starting distribution”, such as the Gaussian of variance σ²_data. We can rewrite this as follows by taking the log and multiplying and dividing by an arbitrary distribution q(s_1, . . . , s_K) decomposed into conditionals q_{T_t}(s_t|s_{t−1}):
q(s_0, s_1, ..., s_K) = (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1})) q(s_K)    (15)
giving us:
log p(x) = log ∫ q_{T_0}(x) (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1})) · [(∏_{t=1}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K)] / [q_{T_0}(x) (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1}))] ds_1^K    (16)
where we understand that s0 = x. Now we can apply Jensen’s inequality as usual to obtain the variational bound
log p(x) ≥ L = ∫ q_{T_0}(x) (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1})) log{ [(∏_{t=1}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K)] / [q_{T_0}(x) (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1}))] } ds_1^K.    (17)
D TIGHTNESS OF THE VARIATIONAL BOUND
We present an argument that running the walkback chain for a sufficient number of steps will cause the variational bound to become tight.
Consider a sequence s_t, ..., s_1 generated in that order by our model p through a sequence of applications of the transition operator T, i.e., p(s_1, ..., s_t) = p(s_t) T(s_{t−1}|s_t) ... T(s_1|s_2), i.e., p(s_{n−1}|s_n) = T(s_{n−1}|s_n), but note that p(s_n|s_{n−1}) ≠ p(s_{n−1}|s_n). Let π(s) denote the stationary distribution associated with T. Note that T and π are related by the detailed balance equation, i.e., T(s|s′)π(s′) = T(s′|s)π(s). We want to approximate the posterior
p(s_t, s_{t−1}, ..., s_2 | s_1) = ∏_{n=2}^{t} p(s_n|s_{n−1})
= ∏_{n=2}^{t} p(s_{n−1}|s_n) p(s_n)/p(s_{n−1})    (by Bayes' rule)
= (p(s_t)/p(s_1)) ∏_{n=2}^{t} T(s_{n−1}|s_n)    (by telescopic cancellation and the definition of T)
= (p(s_t)/p(s_1)) ∏_{n=2}^{t} T(s_n|s_{n−1}) π(s_{n−1})/π(s_n)    (by detailed balance)
= (p(s_t)/π(s_t)) (π(s_1)/p(s_1)) ∏_{n=2}^{t} T(s_n|s_{n−1})    (by telescopic cancellation)
= (p(s_t)/π(s_t)) (π(s_1)/p(s_1)) ∏_{n=2}^{t} p(s_n|s_{n−1})    (again by the definition of T)
So our approximation error in the posterior is the factor (p(s_t)/π(s_t)) (π(s_1)/p(s_1)).
If t is large enough, then s1 (being at the end of the generative sequence) has pretty much converged, i.e., p(s1) ≈ pi(s1). If we throw in temperature annealing along the way (now the notation would have to be changed to put an index n on both p and T), with the initial temperature being very high, then we can hope that the initial Gaussian p(st) is very similar to the stationary distribution at high temperature pi(st).
These arguments suggest that as we make t larger and the final (initial) temperature larger as well, the approximation becomes better. | 1. What is the reviewer's main concern regarding the paper's approach to learning a transition distribution?
2. Why does the reviewer think the experimental results are not visually impressive?
3. What does the reviewer suggest as a solution to the issue of the forward and reverse trajectories being mismatched?
4. What is the significance of the assumption that the transition distribution obeys detailed balance, according to the reviewer?
5. Why does the reviewer recommend reporting log likelihoods against competing methods?
6. What are some specific suggestions the reviewer has for improving the paper's writing style and clarity? | Review | Review
I very much like the underlying idea for this paper. I wasn't convinced by the execution in its current state. My primary concern is the one I expressed in my pre-review question below, which I don't think the authors addressed. Specifically, I think the choice of q(s | s') = p(s | s') will make the forward and reverse trajectories almost pathologically mismatched to each other, and will thus make the variational bound extremely loose and high variance.
The claim about the tightness of the bound in Appendix D relies on the assumption that the transition distribution obeys detailed balance. The learned transition distribution in the paper does not obey detailed balance, and therefore the tightness claim in Appendix D does not hold. (In Section 2.1 you briefly discuss the idea of learning an energy function, rather than directly learning a transition distribution. I think this would be excellent, and in that case you could choose an MCMC transition operator that does obey detailed balance for that energy function.) I did not go through Appendix D beyond this step.
The experimental results were not visually impressive. I suspect this is primarily driven by the mismatch between generative and inference trajectories. See my concern above and in the pre-review question below.
Also, see note below for sec. 5. I suspect some terms are being dropped from the training gradient.
The paper is optimizing a variational bound on log likelihood. You should really, really, really report and compare log likelihoods against competing methods!
Detailed comments below. Some of these were written based on a previous version of the paper.
sec 1.2 - first paragraph is very difficult to follow
"these modes these spurious modes" -> "these spurious modes"
sec 2.1 - "s = (v,h)" -> "s = {v,h}"
sec 2.2 - "with an MCMC" -> "with an MCMC chain"
"(ideally an MCMC)" -> "(e.g. via MCMC)" MCMC is not ideal ... it's just often the best we can do.
sec 3, last bullet - could make the temperature infinite for the last step, in which case the last step will sample directly from the prior, and the posterior and the prior will be exactly the same.
sec. 4 -- Using an energy function would be great!! Especially, because many MCMC transition operators obey detailed balance, you would be far less prone to suffer from the forward/backward transition mismatch that is my primary concern about this technique.
eq. 12,13 -- What is alpha? How does it depend on the temperature. It's never specified.
sec. 5, last paragraph in GSN section -- Note that q also depends on theta, so by not backpropagating through the full q chain you are dropping terms from the gradient.
sec. 5, non-equilibrium thermodynamics -- Note that the noneq. paper also increases the noise variance as the distance from the data increases.
Fig. 1 -- right/left mislabeled
Fig. 2 -- label panes
Fig. 3 -- After how many walkback steps? |
ICLR | Title
The Variational Walkback Algorithm
Abstract
A recognized obstacle to training undirected graphical models with latent variables such as Boltzmann machines is that the maximum likelihood training procedure requires sampling from Monte-Carlo Markov chains which may not mix well, in the inner loop of training, for each example. We first propose the idea that it is sufficient to locally carve the energy function everywhere so that its gradient points in the “right” direction (i.e., towards generating the data). Following on previous work on contrastive divergence, denoising autoencoders, generative stochastic networks and unsupervised learning using non-equilibrium dynamics, we propose a variational bound on the marginal log-likelihood of the data which corresponds to a new learning procedure that first walks away from data points by following the model transition operator and then trains that operator to walk backwards for each of these steps, back towards the training example. The tightness of the variational bound relies on gradually increasing temperature as we walk away from the data, at each step providing a gradient on the parameters to maximize the probability that the transition operator returns to its previous state. Interestingly, this algorithm admits a variant where there is no explicit energy function, i.e., the parameters are used to directly define the transition operator. This also eliminates the explicit need for symmetric weights which previous Boltzmann machine or Hopfield net models require, and which makes these models less biologically plausible.
1 INTRODUCTION
Although earlier research focused on generating data through Monte Carlo Markov chains (MCMCs), e.g. with various Boltzmann machines (Salakhutdinov & Hinton, 2009), most of the recent effort in designing deep generative models is based on single-step generation, e.g., with variational auto-encoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) and generative adversarial networks (GANs) (Goodfellow et al., 2014). However, generating a sample by going through a series of stochastic transformations that gradually improve the generated sample (or its latent representation) to make it more plausible could hold some advantages. A generative process can be seen as a mapping from simple noise variates (e.g., uniform, Gaussian) to samples from a very complicated distribution (maybe concentrated near a low-dimensional manifold) approximating the one which we are trying to learn from. If the data distribution is complex (e.g., the corresponding manifold is highly convoluted and non-linear), the generative process may involve a highly non-linear transformation which could be difficult to learn and optimize. Such highly non-linear transformations are probably best represented (and learned) by composing a large number of slightly non-linear transformations, either with a fixed-depth deep network, or with a variable depth recurrent computation, which is what the repeated application of a transition operator corresponds to.
1.1 MOTIVATIONS
The main motivation for the paper are the following.
• The main difference between feedforward generation and recurrent generation is twofold: (1) in the recurrent case, the same parameters are used for each step of the transition
operator, and (2) by providing an interpretation of each of these steps as the application of a transition operator, we can design training procedures which do not require backpropagating through all the steps of the unfolded computation (from the raw noise samples to the generated output). This is a potential that clearly deserves to be explored further and motivates the learning framework introduced here.
• Another motivation for the Variational Walkback is the idea that we only need to carve the energy function in the right direction at each point in the space of the random variables of interest, which may sidestep the need to actually sample from the stationary distribution of a Markov chain in order to obtain the gradients of the training objective. The intuition is that if the model’s transition operator wants to move away from the data and into an area without data, this is a clue that the energy gradient is pointing in the wrong direction at that place. Consider a chain of samples following the model’s transition operator (or variants of it at different temperatures), starting at a data point. If the chain moves us away from data points, then we can use the previous state in the chain as a target for the operator when that operator is applied to the next state, i.e., we want to teach the operator to walk back towards the data. This intuition was already exploited by Bengio et al. (2013c) but without a firm mathematical grounding. In Variational Walkback this is rigorously justified by a variational bound.
• Yet another motivation for the particular approach presented here is that it innovates in the rarely explored direction of parametrizing directly the generative model via a transition operator, rather than via an explicit probability function or energy function. This idea has already been discussed in the context of Generative Stochastic Networks (GSNs) (Bengio et al., 2013b), a generalization of denoising auto-encoders (DAEs) (Vincent et al., 2008) which interprets the auto-encoder as estimating the gradient of an energy function (Alain & Bengio, 2014) or as a transition operator (Bengio et al., 2013c). An advantage of being able to parametrize directly the generator is seen with GANs and DAEs: we directly parametrize and learn the function which will be used to perform the task of interest (e.g. generating answers to some questions). Instead, the traditional approach is to parametrize a probability function or energy function (e.g., with a Boltzmann machine) and then use another procedure (the MCMC method of your choice) to sample from it and do inference. Another important reason for exploring algorithms for directly learning a transition operator is that they put less constraint on the form of the transition operator, compared with a transition operator derived from an energy function. More specifically, neural net implementations of transition operators derived from an MCMC typically require the presence of symmetric weights (due to the symmetry of the second derivative of the energy with respect to a pair of units in the neural network), as discussed by Bengio et al. (2015). When we consider a biologically plausible implementation of these learning algorithms, the weight symmetry constraint (W_ij = W_ji) is not reasonable as a hard constraint. Instead, if the transition operator (rather than the energy function) is the object being parametrized and learned, then there is no such hard constraint.
1.2 GENERAL THEORY
We introduce a novel variational bound which is an alternative to and improves upon the traditional reconstruction error as a training objective for DAEs and GSNs. Similar variational bounds have been used for VAEs as well as for the non-equilibrium thermodynamics generative models (Sohl-Dickstein et al., 2015). A distribution P over a chain of samples is defined, which corresponds to iteratively applying transition operators with shared parameters, starting from a pure noise initial state. We would like this process to produce training examples. An inverting flow Q is defined starting from a training example (the “walk-away” trajectory), and following the transition operator of the model, i.e., estimating the posterior distribution of the generative chain produced by P, given that it lands at a training example. If the model does not match the data distribution, that chain Q will tend to walk away from the training samples, and we want to inhibit that by training P to “walk back”. Instead of using a completely different parametrization for the variational approximation of the posterior (the Q distribution), like in VAEs and non-equilibrium dynamics, we propose to exploit the decomposition of P as a series of stochastic transformations in order to parametrize Q with the same parameters as P, with the step-wise estimated posterior matching the correct one (from P) for all but the last step of the walk-away trajectory. To make the approximation in the
last step of the chain of walk-away steps better (and thus the variational bound tighter) we introduce the idea of gradually increasing temperature at each step of the walk-away Q chain of transitions (or gradually reducing temperature, at each step of the corresponding walkback trajectory under P). This also has the advantage that the training procedure will more easily converge to and eliminate spurious modes (those modes of the model where there is no nearby training data). This is because the walk-away Q chain will be making large steps towards the dominant and most attractive modes when the temperature becomes large enough. Unless those modes are near data points, the walkback algorithm will thus “seek and destroy” these spurious modes.
We present a series of experimental results on several datasets illustrating the soundness of the proposed approach on the MNIST, CIFAR-10 and CelebA datasets.
2 MIXING-FREE TRAINING FRAMEWORK BASED ON THE WALKBACK IDEA
2.1 MAXIMUM LIKELIHOOD TRAINING OF UNDIRECTED GRAPHICAL MODELS
Let v denote the vector of visible units and h denote the vector of hidden random variables, with the full state of the model being s = (v,h). Let pθ denote the model distribution, with joint energy function Eθ and parameter vector θ:
p_θ(s) := e^{−E_θ(s)} / Z_θ,    (1)
where Zθ is the partition function
Z_θ := ∫ e^{−E_θ(s)} ds.    (2)
Let pD be the training distribution, from which a sample D is typically drawn to obtain the training set. The maximum likelihood parameter gradient is
E_{v∼p_D}[ −∂ log p_θ(v)/∂θ ] = E_{v∼p_D, h∼p_θ(h|v)}[ ∂E_θ(v,h)/∂θ ] − E_{s∼p_θ(s)}[ ∂E_θ(s)/∂θ ]    (3)
which is zero when training has converged, with expected energy gradients in the positive phase (under pD(v)pθ(h|v)) matching those under the negative phase (under pθ(s)). Note that in the (common) case of a log-linear model, the energy gradient (with respect to parameters) corresponds to the sufficient statistics of the model. Training thus consists in matching the shape of two distributions, as captured by the sufficient statistics: the positive phase distribution (influenced by the data, via the visible) and the negative phase distribution (where the model is free-running and generating configurations by itself).
2.2 MIXING-FREE TRAINING FRAMEWORK FOR UNDIRECTED GRAPHICAL MODELS
The basic idea of the proposed mixing-free training framework for undirected graphical models is the following. Instead of trying to match the whole positive phase and negative phase distributions (each of which requires a difficult sampling operation, generally with an MCMC chain that may take a very long time to mix between well-separated modes), we propose to only match the shape of the energy function locally, around well-chosen points s_t. Another way to think about this is that instead of trying to directly maximize the likelihood of p_θ, which requires expensive inference (e.g., via MCMC) in the inner loop of training (for each example v ∼ p_D), we would like to learn a transition operator p_T(s_{t+1}|s_t) such that following it at temperature T = 1 would gradually move the state s_t towards the data-generating distribution.
For this purpose, we propose to use a walkback strategy similar to the one introduced by Bengio et al. (2013c), illustrated in Algorithm 1. The idea is to start from a configuration of s which is compatible with the observed data x, let the state evolve according to our transition operator, and then punish it for these moves, making it more likely to make backwards transitions on this trajectory. If learning was completed, the only moves that would remain are those between highly probable configurations under the data generating distribution. The other ones would be “punished”,
like a child walking away from its designated task and forced to walk back (towards the data)1. Following the model’s inclination in order to generate this random trajectory is more efficient than simply adding noise (like in the denoising auto-encoder (Vincent et al., 2008) or the non-equilibrium dynamics (Sohl-Dickstein et al., 2015) algorithms) because it makes the learning procedure focus its computation on state configurations corresponding to spurious modes to be eliminated. To make sure these spurious modes are approached efficiently, the proposed algorithm also includes the idea of gradually increasing temperature (i.e., the amount of noise) along this walk-away trajectory. At high temperature, the transition operator mixes very easily and quickly reaches the areas corresponding to large spurious modes.
Interestingly, all this comes out naturally of the variational bound presented below, rather than as something imposed in addition to the training objective.
Algorithm 1 VariationalWalkback(θ)
Train a generative model associated with a transition operator p_T(s|s′) at temperature T (temperature 1 for sampling from the actual model). This transition operator injects noise of variance Tσ² at each step, where σ² is the noise level at temperature 1.
Require: Transition operator p_T(s|s′) from which one can both sample and compute the gradient of log p_T(s|s′) with respect to parameters θ, given s and s′.
Require: Precomputed σ²_data, the overall variance (or squared diameter) of the data.
repeat
    T_max ← σ²_data / σ²
    K ← log₂ T_max
    Sample x ∼ data (or equivalently sample a minibatch to parallelize computation and process each element of the minibatch independently)
    Let s_0 = x, initial temperature T = 1, initialize L = 0
    for t = 1 to K do
        Sample s_t ∼ p_T(s|s_{t−1})
        Increment L ← L + log p_T(s_{t−1}|s_t)
        Update parameters with the log-likelihood gradient ∂ log p_T(s_{t−1}|s_t)/∂θ
        Increase temperature with T ← 2T
    end for
    Increment L ← L + log p*(s_K)
until convergence (monitoring L on a validation set and doing early stopping)
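As a minimal PyTorch-style sketch of one pass of Algorithm 1: the helper transition_op(s, T), assumed to return a torch.distributions object for p_T(·|s) at temperature T, is our own abstraction, and the loss is accumulated over the whole chain rather than updated after every step as in the algorithm.

import math
import torch
from torch.distributions import Normal

def variational_walkback_update(x, transition_op, optimizer, sigma2, sigma2_data):
    """One stochastic update of the walkback objective (illustrative sketch)."""
    T_max = sigma2_data / sigma2
    K = int(math.ceil(math.log2(T_max)))
    prior = Normal(torch.zeros_like(x), math.sqrt(sigma2_data))  # broad Gaussian p*

    s, T = x, 1.0
    neg_L = 0.0
    for _ in range(K):
        s_next = transition_op(s, T).sample()        # walk away from the data
        back = transition_op(s_next, T)              # distribution p_T(. | s_next)
        neg_L = neg_L - back.log_prob(s).sum()       # reward walking back towards s
        s, T = s_next, 2.0 * T                       # double the temperature
    neg_L = neg_L - prior.log_prob(s).sum()          # log p*(s_K) term

    optimizer.zero_grad()
    neg_L.backward()
    optimizer.step()
    return -neg_L.item()                             # single-sample estimate of L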
3 VARIATIONAL LOWER BOUND ON THE LOG-LIKELIHOOD
Let us first consider a way in which we could approximately generate samples according to our model and the associated transition operator p_T(s|s′). That process would start by sampling a state s_K inside a volume that contains all the data, e.g., with a broad Gaussian p*(s_K) whose variances are set according to the training data. Then we would sample s_{K−1} from p_{T_max}(s|s′ = s_K), where T_max is a high enough temperature so that the noise dominates the signal and is strong enough to move the state across the whole domain of the data on the visible portion of the state. If σ²_data is the maximum variance of the data (corresponding to the visible dimensions of the state) and σ² is the amount of noise injected by the transition operator on the visible units at temperature 1, then we could pick
T_max = σ²_data / σ²    (4)
to achieve that goal. From that point on we are going to continue sampling the “previous” state st according to pT (s|s′ = st+1) while gradually cooling the temperature, e.g. by dividing it by 2 after each step. In that case we would need
K = log₂ T_max    (5)
1 This analogy with a child was first used in talks by Geoff Hinton when discussing contrastive divergence (personal communication).
steps to reach a temperature of 1. Finally, we would look at the visible portion of s0 to obtain the sampled x. In practice, we would expect that a slower annealing schedule would yield samples more in agreement with the stationary distribution of p1(s|s′), but we explored this aggressive annealing schedule in order to obtain faster training.
The marginal probability of v = x at the end of the above K-step process is thus
p(x) = ∫ p_{T_0}(s_0 = x|s_1) (∏_{t=2}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K) ds_1^K    (6)
where T_t is an annealing schedule with T_0 = 1 and T_K = T_max and p* is the “starting distribution”, such as the Gaussian of variance σ²_data. We can rewrite this as follows by taking the log and multiplying and dividing by an arbitrary distribution q(s_1, . . . , s_K) decomposed into conditionals q_{T_t}(s_t|s_{t−1}):
log p(x) = log ∫ q_{T_0}(x) q_{T_1}(s_1|s_0 = x) (∏_{t=2}^{K} q_{T_t}(s_t|s_{t−1})) · [p_{T_0}(s_0 = x|s_1) (∏_{t=2}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K)] / [q_{T_0}(x) q_{T_1}(s_1|s_0 = x) (∏_{t=2}^{K} q_{T_t}(s_t|s_{t−1}))] ds_1^K    (7)
where we understand that s0 = x. Now we can apply Jensen’s inequality as usual to obtain the variational bound
log p(x) ≥ L = ∫ q_{T_0}(x) q_{T_1}(s_1|s_0 = x) (∏_{t=2}^{K} q_{T_t}(s_t|s_{t−1})) log{ [p_{T_0}(s_0 = x|s_1) (∏_{t=2}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K)] / [q_{T_0}(x) q_{T_1}(s_1|s_0 = x) (∏_{t=2}^{K} q_{T_t}(s_t|s_{t−1}))] } ds_1^K.    (8)
This bound is valid for any q but will be tight when q(s_K, s_{K−1}, . . . , s_1|s_0) = p(s_K, s_{K−1}, . . . , s_1|s_0), and otherwise can be used to obtain a variational training objective. Note that both q and p can be decomposed as a product of one-step conditionals. Here, we can make most of the q_{T_t} transition probabilities match their corresponding p_{T_t} transition probabilities exactly, i.e., for 1 ≤ t < K we use
q_{T_t}(s|s′) = p_{T_t}(s|s′).    (9)
The only approximations will be on both ends of the sequence:
• Sampling exactly from the model’s p(v = x) is typically not feasible exactly (it involves the usual posterior inference, e.g., as used in VAEs) but as explained below we will exploit properties of the algorithm to approximate this efficiently. We call the chosen approximation q1(v).
• At the last step, the optimal qTK (sK |sK−1) is not simply the model’s transition operator at temperature TK , because this conditional also involves the marginal “starting distribution” p∗(sK). However, because we have picked TK large enough to make samples from qTmax(sK |sK−1) dominated by noise of the same variance as that of p∗, we expect the approximation to be good too.
3.1 ESTIMATING THE LOG-LIKELIHOOD USING IMPORTANCE SAMPLING
In practice we cannot compute L exactly (nor its gradient), but we can easily obtain an unbiased estimator of L (or of its gradient) by sampling sK1 from the q distributions, i.e., approximate the L integral by a single Monte-Carlo sample. This is what is done by the training procedure outlined in Algorithm 1, which thus performs stochastic gradient ascent on the variational boundL, and this will
tend to also push up the log-likelihood log p(x) of training examples x. Note that such variational bounds have been used successfully in many learning algorithms in the past (Kingma & Welling, 2013; Lamb et al., 2016).
We derive an estimate of the negative log-likelihood by the following procedure. For each training example x, we sample a large number of diffusion paths. We then use the following formulation to estimate the negative log-likelihood.
log p(x) = log E_{s_1^K ∼ q_{T_1}(s_1|s_0 = x) ∏_{t=2}^{K} q_{T_t}(s_t|s_{t−1})} [ p_{T_0}(s_0 = x|s_1) (∏_{t=2}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K) / ( q_{T_0}(x) q_{T_1}(s_1|s_0 = x) (∏_{t=2}^{K} q_{T_t}(s_t|s_{t−1})) ) ]    (10)
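A small sketch of this estimate is given below: once the per-path log-probability terms of Eq. (10) have been computed (we assume they are available as two arrays), the estimate is simply a log-mean-exp of the log importance weights.

import math
import torch

def estimate_log_px(log_p_terms, log_q_terms):
    """log p(x) estimated from M sampled walk-away paths. log_p_terms[m] is
    assumed to hold the summed log p terms of the numerator of Eq. (10) for
    path m, and log_q_terms[m] the summed log q terms of the denominator."""
    log_w = torch.as_tensor(log_p_terms, dtype=torch.float64) - \
            torch.as_tensor(log_q_terms, dtype=torch.float64)
    # log of the Monte-Carlo average of the importance weights
    return (torch.logsumexp(log_w, dim=0) - math.log(log_w.numel())).item()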
4 TRANSITION OPERATORS FOR VARIATIONAL WALKBACK
Up to now we have not specified what the form of the transition operators should be. Two main variants are possible here. Either we directly parametrize the transition operator, like with denoising auto-encoders or generative stochastic networks, or we obtain our transition operator implicitly from some energy function, for example by applying some form of Gibbs sampling or Langevin MCMC to derive a transition operator associated with the energy function.
An advantage of the direct parametrization is that it eliminates the constraint to have symmetric weights, which is interesting from the point of view of biological plausibility of such algorithms. An advantage of the energy-based parametrization is that at the end of the day we get an energy function which could be used to compute the unnormalized joint probability of visible and latent variables. However, note that in both cases we can easily get an estimator of the log-likelihood by simply using our lower bound L, possibly improved by doing more expensive inference for pTK (sK |sK−1).
4.1 PARAMETRIC TRANSITION OPERATOR
In our experiments we considered Bernoulli and isotropic Gaussian transition operators for binary and real-valued data respectively.
When we sample from the transition operator we do not attempt to pass gradients through the sampling operation. Accordingly, backpropagation is performed locally on each step of the walk-back, and there is no flow of gradient between multiple walk-back steps.
Additionally, we use a “conservative” transition operator that averages its input image together with the sample from the learned distribution (or takes a weighted average with a fixed α weighting) for the transition operator. Just after parameter initialization, the distribution learned by the transition operator’s output is essentially random, so it is very difficult for the network to learn to reconstruct the value at the previous step.
Bernoulli Transition Operator
ρ = sigmoid( ((1 − α) ∗ x_{t−1} + α ∗ F_ρ(x_{t−1})) / T_t )    (11)
Gaussian Transition Operator
µ = (1− α) ∗ xt−1 + α ∗ Fµ(xt−1) (12)
σ = sigmoid(T_t log(1 + e^{F_σ(x_{t−1})}))    (13)
Fρ, Fµ, Fσ are functions (in our case neural networks) which take the previous x value from the walkback chain and return estimates of the value of µ and σ respectively. T is the temperature which is dependent on the walkback step t. xt−1 is the previous value in the walkback chain.
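A minimal PyTorch sketch of the two operators in Eqs. (11)-(13) follows; F_rho, F_mu, and F_sigma stand for the networks described above, and wrapping the outputs in torch.distributions objects (so that sampling and log-probabilities are available for the walkback objective) is our own choice for this sketch.

import torch

def bernoulli_operator(x_prev, F_rho, alpha, T_t):
    # Eq. (11): per-pixel Bernoulli probabilities at temperature T_t
    rho = torch.sigmoid(((1 - alpha) * x_prev + alpha * F_rho(x_prev)) / T_t)
    return torch.distributions.Bernoulli(probs=rho)

def gaussian_operator(x_prev, F_mu, F_sigma, alpha, T_t):
    # Eqs. (12)-(13): mean is a weighted average of the input and the network output
    mu = (1 - alpha) * x_prev + alpha * F_mu(x_prev)
    sigma = torch.sigmoid(T_t * torch.log1p(torch.exp(F_sigma(x_prev))))
    return torch.distributions.Normal(mu, sigma)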
5 RELATED WORK
Contrastive Divergence
This algorithm is clearly related to the contrastive divergence algorithm with k = T steps (CD-k). The CD-k algorithm approximates the log-likelihood gradient by trying to match the sufficient statistics with the data clamped to the sufficient statistics after k steps of the transition operator. The parameter update is the difference of these sufficient statistics, which also corresponds to pushing down the energy of the data-clamped configuration while pushing up the energy of the random variables after k steps of the transition operator.
Two important differences are that, because the temperature is increasing in the variational walkback procedure,
1. the energy gradients ∂E(s)/∂s do not cancel each other telescopically along the chain from s_0 to s_T,
2. as t increases we move more and more randomly rather than following the energy of the model, allowing to hunt more effectively the areas near spurious modes.
A third difference is that the learning procedure is expressed in terms of the transition operator rather than directly in terms of the energy function. This allows one to thus train a transition operator directly, rather than indirectly via an energy function.
Generative Stochastic Networks
The Generative Stochastic Networks (GSN) algorithm proposed by Bengio et al. (2013b) learns a transition operator by iteratively injecting noise and minimizing the reconstruction error after a number of transition operator steps starting at a data point, and back-propagating through all these steps. One thing in common is the idea of using the walkback intuition instead of isotropic noise in order to converge more efficiently. A major difference is that the algorithm proposed for GSNs involves the minimization of overall reconstruction error (from the input data point x to the sampled reconstruction many steps later). This will tend to blur the learned distribution. Instead, the variational walk-back algorithm minimizes reconstruction error one step at a time along the walk-away trajectory.
In addition, GSNs require back-propagating through all the iterated steps, like the DRAW algorithm (Gregor et al., 2015). Instead, the variational walk-back algorithm only requires back-propagating through a single step of the transition operator at a time. This should make it easier to train because we avoid having to optimize a highly non-linear transformation obtained by the composition of many transition operator steps.
Non-Equilibrium Thermodynamics
There are two main differences between the Variational Walkback algorithm and the NonEquilibrium Thermodynamics:
1. Instead of isotropic noise to move away from the data manifold, we propose to use the model’s own transition operator, with the idea that it will “seek and destroy” the spurious modes much more efficiently than random moves.
2. Instead of injecting a fixed amount of noise per time step, we increase the noise as the chain moves away from the data manifold, and anneal the noise when we are close to the data manifold. This way, we can quickly reach the noise prior without losing the details of the data. Our model takes significantly fewer steps to walk away and back to the manifold, as compared to the 1000 steps used for Non-Equilibrium Thermodynamics.
Annealed Importance Sampling (AIS)
Annealed Importance Sampling is a sampling procedure. Like variational walkback, it uses an annealing schedule corresponding to a range of temperature from infinity to 1. It is used to estimate a partition function. Unlike Annealed Importance Sampling, variational walkback is meant to provide a good variational lower bound for training a transition operator.
Reverse Annealed Importance Sampling Estimator (RAISE)
RAISE is a reverse AIS, as it starts from a data point and then increases the temperature. In this way it is similar to the Q-chain in variational walkback. The advantage of RAISE over AIS is that it yields an estimator of the log-likelihood that tends to be pessimistic rather than optimistic, which makes it better as an evaluation criteria.
Like AIS, RAISE estimates the log-likelihood using a form of importance sampling, based on a product (over the chain) of the ratios of consecutive probabilities (not conditional probabilities from the model). Variational walkback does not work with estimates of the model’s unconditional probability, and instead works directly with a conditional probability defined by the transition operator. It is for this reason that variational walkback does not need to have an explicit energy function.
6 EXPERIMENTS
We evaluated the variational walkback on three datasets: MNIST, CIFAR (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015). The MNIST and CIFAR datasets were used as is, but the aligned and cropped version of the CelebA dataset was scaled from 218 x 178 pixels to 78 x 64 pixels and center-cropped at 64 x 64 pixels (Liu et al., 2015). For all of our experiments we used the Adam optimizer (Kingma & Ba, 2014) and the Theano framework (Al-Rfou et al., 2016). The training procedure and architecture are detailed in appendix A.
We reported samples on CIFAR, MNIST, CelebA and inpainting results on MNIST. Our inpainting results on MNIST are competitive with generative stochastic networks and show somewhat higher consistency between the given part of the image and the generated portion (Bengio et al., 2013c). However, we note that our samples on CIFAR and CelebA show the same “blurring effect” that has been observed with autoencoder-based generative models trained to minimize reconstruction loss (Lamb et al., 2016).
7 CONCLUSION AND FUTURE WORK
We have introduced a new form of walk-back and a new algorithm for learning transition operators or undirected graphical models. Our algorithm learns a transition operator by allowing the model to walk away from the data towards the noise prior and then training its transitions to go backwards along each of these walk-away steps, i.e., towards the data manifold. Variational walk-back increases the temperature along the chain as it moves further away from the data manifold, and inversely, anneals the temperature at generation time, as it gets closer to the estimated manifold. This allows the training procedure to quickly find and remove dominant spurious modes. Learning a transition operator also allows our model to learn only a conditional distribution at each step. This is much easier to learn, since it only needs to capture a few modes per step. The model also only locally carves the energy function, which means that it does not have to learn the entire joint probability distribution, but rather steps in the right direction, making sure that, everywhere it puts probability mass as well as around the data, the energy gradient points towards the data.
Our experimental results have confirmed that the model can walk towards the data manifold in a few steps, even when the modes are sharp.
Future work should extend this algorithm and experiments in order to incorporate latent variables. The state would now include both the visible x and some latent h. Essentially the same procedure can be run, except for the need to initialize the chain with a state s = (x, h) where h would ideally be an estimate of the posterior distribution of h given the observed data point x. Another interesting direction to expand this work is to replace the log-likelihood objective at each step by a GAN-like objective, thus avoiding the need to inject noise independently on each of the pixels during one application of the transition operator, and allowing the latent variable sampling to inject all the required high-level decisions associated with the transition. Based on the earlier results from Bengio et al. (2013a), sampling in the latent space rather than in the pixel space should allow for better generative models and even better mixing between modes (Bengio et al., 2013b).
ACKNOWLEDGMENTS
The authors would like to thank Benjamin Scellier and Aaron Courville for their helpful feedback and discussions, as well as NSERC, CIFAR, Google, Samsung, Nuance, IBM and Canada Research Chairs for funding, and Compute Canada for computing resources.
A ARCHITECTURE DETAILS
The architecture used for the CelebA and CIFAR datasets was similar to the architecture used by Lamb et al. (2016), with a convolutional encoder followed by two fully connected hidden layers, followed by a decoder with strided convolutions (Radford et al., 2015). Batch norm was applied in all layers except for the last layer. For all layers except the last we used the tanh activation function. Surprisingly, we were unable to obtain good results using the ReLU or Leaky ReLU activations.
On the binarized MNIST dataset we used a transition operator with Bernoulli outputs. A feedforward neural network was used to estimate the parameters (per-pixel probabilities) for the Bernoulli outputs. This neural network consisted of a single hidden layer with 4096 hidden units and the tanh activation function.
B WALKBACK PROCEDURE DETAILS
The variational walkback algorithm has three unique hyperparameters. One is the number of walkback steps performed during training. Another is the number of walkback steps performed when sampling from the model. Still another is the temperature schedule used during training, reconstruction, or sampling.
The most conservative hyperparameter setting would involve using a large number of walkback steps during training and slowly increasing the temperature. However, this could make training slow, and if too few steps are used, the end of the walkback chain will not match the noise prior, leading to low quality samples.
A dynamic approach to setting the number of walkback steps and the temperature schedule may be possible, but in this work we set these hyperparameters empirically. We found that during training a temperature schedule of T = T_0 (√2)^t produced good results, where T_0 = 1.0 is the initial temperature and t is the step index. During sampling, we found good results using the reverse schedule T = (√2)^N / (√2)^t, where t is the step index and N is the total number of sampling steps.
For MNIST, we achieved our results using 8 training steps of walkback. For CIFAR, we used 15 training steps and 20 sampling steps. For CelebA, we used 30 training steps and 35 sampling steps. In general, we found that we could achieve higher quality results by using more steps during sampling than we used during training. We found that more difficult datasets, like CIFAR and CelebA, required longer walkback chains. Finally, our model is able to achieve results competitive with Non-Equilibrium Thermodynamics (Sohl-Dickstein et al., 2015), despite that method requiring chains with far more steps (1000 steps for MNIST).
C ALTERNATIVE FORMULATION OF VARIATIONAL BOUND
The marginal probability of v = x at the end of the above K-step process is thus
p(x) = ∫ (∏_{t=1}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K) ds_1^K    (14)
where T_t is an annealing schedule with T_0 = 1 and T_K = T_max and p* is the “starting distribution”, such as the Gaussian of variance σ²_data. We can rewrite this as follows by taking the log and multiplying and dividing by an arbitrary distribution q(s_1, . . . , s_K) decomposed into conditionals q_{T_t}(s_t|s_{t−1}):
q(s_0, s_1, ..., s_K) = (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1})) q(s_K)    (15)
giving us:
log p(x) = log ∫ q_{T_0}(x) (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1})) · [(∏_{t=1}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K)] / [q_{T_0}(x) (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1}))] ds_1^K    (16)
where we understand that s0 = x. Now we can apply Jensen’s inequality as usual to obtain the variational bound
log p(x) ≥ L = ∫ q_{T_0}(x) (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1})) log{ [(∏_{t=1}^{K} p_{T_t}(s_{t−1}|s_t)) p*(s_K)] / [q_{T_0}(x) (∏_{t=1}^{K} q_{T_t}(s_t|s_{t−1}))] } ds_1^K.    (17)
D TIGHTNESS OF THE VARIATIONAL BOUND
We present an argument that running the walkback chain for a sufficient number of steps will cause the variational bound to become tight.
Consider a sequence s_t, ..., s_1 generated in that order by our model p through a sequence of applications of the transition operator T, i.e., p(s_1, ..., s_t) = p(s_t) T(s_{t−1}|s_t) ... T(s_1|s_2), i.e., p(s_{n−1}|s_n) = T(s_{n−1}|s_n), but note that p(s_n|s_{n−1}) ≠ p(s_{n−1}|s_n). Let π(s) denote the stationary distribution associated with T. Note that T and π are related by the detailed balance equation, i.e., T(s|s′)π(s′) = T(s′|s)π(s). We want to approximate the posterior
p(s_t, s_{t−1}, ..., s_2 | s_1) = ∏_{n=2}^{t} p(s_n|s_{n−1})
= ∏_{n=2}^{t} p(s_{n−1}|s_n) p(s_n)/p(s_{n−1})    (by Bayes' rule)
= (p(s_t)/p(s_1)) ∏_{n=2}^{t} T(s_{n−1}|s_n)    (by telescopic cancellation and the definition of T)
= (p(s_t)/p(s_1)) ∏_{n=2}^{t} T(s_n|s_{n−1}) π(s_{n−1})/π(s_n)    (by detailed balance)
= (p(s_t)/π(s_t)) (π(s_1)/p(s_1)) ∏_{n=2}^{t} T(s_n|s_{n−1})    (by telescopic cancellation)
= (p(s_t)/π(s_t)) (π(s_1)/p(s_1)) ∏_{n=2}^{t} p(s_n|s_{n−1})    (again by the definition of T)
So our approximation error in the posterior is the factor (p(s_t)/π(s_t)) (π(s_1)/p(s_1)).
If t is large enough, then s1 (being at the end of the generative sequence) has pretty much converged, i.e., p(s1) ≈ pi(s1). If we throw in temperature annealing along the way (now the notation would have to be changed to put an index n on both p and T), with the initial temperature being very high, then we can hope that the initial Gaussian p(st) is very similar to the stationary distribution at high temperature pi(st).
These arguments suggest that as we make t larger and the final (initial) temperature larger as well, the approximation becomes better. | 1. What is the main contribution of the paper, and how does it relate to prior work in the field?
2. How does the proposed method differ from existing algorithms, specifically AIS and RAISE?
3. What is the significance of the variational lower bound on the log-likelihood, and how does it compare to other methods?
4. Why is the analysis in Appendix D incorrect, and what is the correct way to analyze the ratio of prior and posterior probabilities?
5. How does the proposed method perform compared to other methods, such as RAISE, and why is it worthwhile to report log-likelihood estimates?
6. Are there any limitations or potential drawbacks to the proposed method, and how might they be addressed? | Review | Review
This paper proposes a new kind of generative model based on an annealing process, where the transition probabilities are learned directly to maximize a variational lower bound on the log-likelihood. Overall, the idea is clever and appealing, but I think the paper needs more quantitative validation and better discussion of the relationship with prior work.
In terms of prior work, AIS and RAISE are both closely related algorithms, and share much of the mathematical structure with the proposed method. For this reason, it’s not sufficient to mention them in passing in the related work section; those methods and their relationship to variational walkback need to be discussed in detail. If I understand correctly, the proposed method is essentially an extension of RAISE where the transition probabilities are learned rather than fixed based on an existing MRF. I think this is an interesting and worthwhile extension, but the relationship to existing work needs to be clarified.
The analysis of Appendix D seems incorrect. It derives a formula for the ratios of prior and posterior probabilities, but this formula only holds under the assumption of constant temperature (in which case the ratio is very large). When the temperature is varied, the analysis of Neal (2001) applies, and the answer is different.
One of the main selling points of the method is that it optimizes a variational lower bound on the log-likelihood; even more accurate estimates can be obtained using importance sampling. It ought to be easy to report log-likelihood estimates for this method, so I wonder why such estimates aren’t reported. There are lots of prior results to compare against on MNIST. (In addition, a natural baseline would be RAISE, so that one can check if the ability to learn the transitions actually helps.)
I think the basic idea here is a sound one, so I would be willing to raise my score if the above issues are addressed in a revised version.
Minor comments:
“A recognized obstacle to training undirected graphical models… is that ML training requires sampling from MCMC chains in the inner loop of training, for each example.” This seems like an unfair characterization, since the standard algorithm is PCD, which usually takes only a single step per mini-batch.
Some of the methods discussed in the related work are missing citations.
The method is justified in terms of “carving the energy function in the right direction at each point”, but I’m not sure this is actually what’s happening. Isn’t the point of the method that it can optimize a lower bound on the log-likelihood, and therefore learn a globally correct allocation of probability mass? |
ICLR | Title
On the "steerability" of generative adversarial networks
Abstract
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to biased training data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise – these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by “steering” in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution. Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. Code is released on our project page: https://ali-design.github.io/gan_steerability/.
1 INTRODUCTION
The quality of deep generative models has increased dramatically over the past few years. When introduced in 2014, Generative Adversarial Networks (GANs) could only synthesize MNIST digits and low-resolution grayscale faces (Goodfellow et al., 2014). The most recent models, however, produce diverse high-resolution images that are often indistinguishable from natural photos (Brock et al., 2018; Karras et al., 2018).
Science fiction has long dreamed of virtual realities filled with synthetic content as rich as, or richer than, the real world (e.g., The Matrix, Ready Player One). How close are we to this dream? Traditional computer graphics can render photorealistic 3D scenes, but cannot automatically generate detailed content. Generative models like GANs, in contrast, can create content from scratch, but we do not currently have tools for navigating the generated scenes in the same way that one can walk through and interact with a 3D game engine.
In this paper, we explore the degree to which you can navigate the visual world of a GAN. Figure 1 illustrates the kinds of transformations we explore. Consider the dog at the top-left. By moving in some direction of GAN latent space, can we hallucinate walking toward this dog? As the figure indicates, and as we will show in this paper, the answer is yes. However, as we continue to zoom in, we quickly reach limits. Once the dog face fills the full frame, continuing to walk in this direction fails to increase the zoom. A similar effect occurs in the daisy example (row 2 of Fig. 1), where a direction in latent space moves the daisy up and down, but cannot move it out of frame.
We hypothesize that these limits are due to biases in the distribution of images on which the GAN is trained. For example, if the training dataset consists of centered dogs and daises, the same may be the case in GAN-generated images. Nonetheless, we find that some degree of transformation is possible. When and why can we achieve certain transformations but not others?
This paper seeks to quantify the degree to which we can achieve basic visual transformations by navigating in GAN latent space. In other words, are GANs “steerable” in latent space?1 We analyze the relationship between the data distribution on which the model is trained and the success in achieving these transformations. From our experiments, it is possible to shift the distribution of generated images to some degree, but we cannot extrapolate entirely out of the dataset’s support. In particular, attributes can be shifted in proportion to the variability of that attribute in the training data. We further demonstrate an approach to increase model steerability by jointly optimizing the generator and latent direction, together with data augmentation on training images. One of the current criticisms of generative models is that they simply interpolate between datapoints, and fail to generate anything truly new, but our results add nuance to this story. It is possible to achieve distributional shift, but the ability to create realistic images from a modified distributions relies on sufficient diversity in the dataset along the dimension that we vary.
Our main findings are:
• A simple walk in the latent space of GANs achieves camera motion and color transformations in the output image space. These walks are learned in self-supervised manner without labeled attributes or distinct source and target images.
• The linear walk is as effective as more complex non-linear walks, suggesting that the models learn to roughly linearize these operations without being explicitly trained to do so.
• The extent of each transformation is limited, and we quantify a relationship between dataset variability and how much we can shift the model distribution.
• The transformations are a general-purpose framework that work with different model architectures, e.g. BigGAN, StyleGAN, and DCGAN, and illustrate different disentanglement properties in their respective latent spaces.
• Data augmentation improves steerability, as does jointly training the walk trajectory and the generator weights, which allows us to achieve larger transformation effects.
2 RELATED WORK
Latent space manipulations can be seen from several perspectives – how we achieve it, what limits it, and what it enables us to do. Our work addresses these three aspects together, and we briefly refer to each one in related work.
Interpolations in latent space Traditional approaches to image editing with GAN latent spaces find linear directions that correspond to changes in labeled attributes, such as smile-vectors and gender-vectors for faces (Radford et al., 2015; Karras et al., 2018). However these manipulations are not exclusive to GANs; in flow-based generative models, linearly interpolating between two encoded images allow one to edit a source image toward attributes of the target (Kingma & Dhariwal, 2018). Möllenhoff & Cremers (2019) proposes a modified GAN formulation by treating data
1We use the term “steerable” in analogy to the classic steerable filters of Freeman & Adelson (1991).
as directional k-currents, where moving along tangent planes naturally corresponds to interpretable manipulations. Upchurch et al. (2017) removes the generative model entirely and instead interpolates in the intermediate feature space of a pretrained classifier, again using feature mappings of source and target sets to determine an edit direction. Unlike these approaches, we learn our latent-space trajectories in a self-supervised manner without labeled attributes or distinct source and target images. Instead, we learn to approximate editing operations on individual source images. We find that linear trajectories in latent space can capture simple image manipulations, e.g., zoom-vectors and shift-vectors, although we also obtain similar results using nonlinear trajectories.
Dataset bias Biases from training data and network architecture both impact the generalization capacity of learned models (Torralba & Efros, 2011; Geirhos et al., 2018; Amini et al.). Dataset biases partly come from human preferences in taking photos: we tend to take pictures in specific “canonical” views that are not fully representative of the entire visual world (Mezuman & Weiss, 2012; Jahanian et al., 2015). Consequently, models trained with these datasets inherit their biases. This may result in models that misrepresent the given task – such as tendencies towards texture bias rather than shape bias on ImageNet classifiers (Geirhos et al., 2018) – and in turn limits their generalization performance on similar objectives (Azulay & Weiss, 2018). Our latent space trajectories transform the output corresponding to various image editing operations, but ultimately we are constrained by biases in the data and cannot extrapolate arbitrarily far beyond the data’s support.
Generative models for content creation The recent progress in generative models has opened interesting avenues for content creation (Brock et al., 2018; Karras et al., 2018), including applications that enable users to fine-tune the generated output (Simon; Zhu et al., 2016; Bau et al., 2018). A by-product of the current work is enabling users to modify image properties by turning a single knob – the magnitude of the learned transformation in latent space. We further demonstrate that these image manipulations are not just a simple creativity tool; they also provide us with a window into the biases and generalization capacity of these models.
Applications of latent space editing Image manipulations using generative models suggest several interesting downstream applications. For example, Denton et al. (2019) learns linear walks corresponding to various facial characteristics – they use these to measure biases in facial attribute detectors, whereas we study biases in the generative model that originate from training data. Shen et al. (2019) also assumes linear latent space trajectories and learns paths for face attribute editing according to semantic concepts such as age and expression, thus demonstrating disentanglement of the latent space. White (2016) suggests approaches to improve the learned manipulations, such as using spherical linear interpolations, resampling images to remove biases in attribute vectors, and using data augmentation as a synthetic attribute for variational autoencoders. Goetschalckx et al. (2019) applies a linear walk to achieve transformations corresponding to cognitive properties of an image such as memorability, aesthetics, and emotional valence. Unlike these works, we do not require an attribute detector or assessor function to learn the latent space trajectory, and therefore our loss function is based on image similarity between source and target images. In addition to linear walks, we explore using non-linear walks parametrized by neural networks for editing operations.
3 METHOD
Generative models such as GANs (Goodfellow et al., 2014) learn a mapping function G such that G : z → x. Here, z is the latent code drawn from a Gaussian density and x is an output, e.g., an image. Our goal is to achieve transformations in the output space by moving in latent space, as shown in Fig. 2. In general, this goal also captures the idea in equivariance, in which transformations in the input space result in equivalent transformations in the output space (c.f. Hinton et al. (2011); Cohen et al. (2019); Lenc & Vedaldi (2015)).
Objective We want to learn an N-dimensional vector representing the optimal path in latent space for a given transformation. The vector is multiplied by a continuous parameter α which signifies the step size: large α values correspond to a greater degree of transformation, while small α values correspond to a lesser degree. Formally, we learn the walk w by minimizing the objective function:
w^{*} = \arg\min_{w} \; \mathbb{E}_{z,\alpha}\big[\mathcal{L}\big(G(z + \alpha w), \mathrm{edit}(G(z), \alpha)\big)\big] \qquad (1)
Here, L measures the distance between the generated image after taking an α-step in the latent direction G(z + αw) and the target edit(G(z), α) derived from the source image G(z). We use L2 loss as our objective L; however, we also obtain similar results when using the LPIPS perceptual image similarity metric (Zhang et al., 2018) (see Appendix B.4.1). Note that we can learn this walk in a fully self-supervised manner – we perform the edit(·) operation on an arbitrary generated image and subsequently optimize the vector w to minimize the objective. Let model(α) denote the output generated after taking an α-step along the optimized direction w∗, i.e., model(α) = G(z + αw∗).
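As a concrete illustration of Eq. 1, the sketch below shows one way to optimize a single walk vector. It assumes a differentiable, pretrained generator G and an edit() function implementing the target transformation (both placeholders here), and is written in PyTorch style rather than the TensorFlow implementation described in Appendix A.1.

```python
import torch

def learn_linear_walk(G, edit, latent_dim=128, steps=20000, batch=8, lr=1e-3, alpha_max=1.0):
    """Optimize one direction w so that G(z + alpha*w) matches edit(G(z), alpha) (Eq. 1)."""
    w = (0.01 * torch.randn(latent_dim)).requires_grad_()    # walk vector, shared across classes
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        z = torch.randn(batch, latent_dim)                   # sample latent codes
        alpha = (2 * torch.rand(batch, 1) - 1) * alpha_max   # random step sizes in [-alpha_max, alpha_max]
        x_walk = G(z + alpha * w)                            # image after the latent step
        x_target = edit(G(z), alpha).detach()                # self-supervised target edit
        loss = ((x_walk - x_target) ** 2).mean()             # L2 objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```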
The previous setup assumes linear latent space walks, but we can also learn non-linear trajectories in which the walk direction depends on the current latent space position. For the non-linear walk, we learn a function, f∗(z), which corresponds to a small ε-step transformation edit(G(z), ε). To achieve bigger transformations, we apply f recursively, mimicking discrete Euler ODE approximations. Formally, for a fixed ε, we minimize
\mathcal{L} = \mathbb{E}_{z,n}\big[\,\|G(f^{n}(z)) - \mathrm{edit}(G(z), n\epsilon)\|\,\big] \qquad (2)
where f^n(·) is an nth-order function composition f(f(f(...))), and f(z) is parametrized with a neural network. We discuss further implementation details in Appendix A.4. We use this function composition approach rather than the simpler setup of G(z + αNN(z)) because the latter learns to ignore the input z when α takes on continuous values, and is thus equivalent to the previous linear trajectory (see Appendix A.3 for further details).
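A minimal sketch of the composition loss in Eq. 2, again assuming placeholder G and edit() functions; StepNet and its hidden-layer size are illustrative choices, not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class StepNet(nn.Module):
    """Small network for a fixed eps-step in latent space: f(z) = z + NN(z)."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, latent_dim))

    def forward(self, z):
        return z + self.net(z)

def composition_loss(G, edit, f, z, n, eps):
    """Eq. 2: apply f n times and compare to an n*eps edit of the source image."""
    z_n = z
    for _ in range(n):                     # n-th order function composition f(f(...f(z)))
        z_n = f(z_n)
    return ((G(z_n) - edit(G(z), n * eps)) ** 2).mean()
```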
Quantifying Steerability We further seek to quantify how well we can achieve desired image manipulations under each transformation. To this end, we compare the distribution of a given attribute, e.g., “luminance”, in the dataset versus in images generated after walking in latent space.
For color transformations, we consider the effect of increasing or decreasing the α coefficient corresponding to each color channel. To estimate the color distribution of model-generated images, we randomly sample N = 100 pixels per image both before and after taking a step in latent space. Then, we compute the pixel value for each channel, or the mean RGB value for luminance, and normalize the range between 0 and 1.
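The pixel-sampling statistic can be computed roughly as follows; this is a sketch, not the exact evaluation code, and it assumes images are given as H x W x 3 uint8 arrays.

```python
import numpy as np

def color_statistics(images, n_pixels=100, seed=0):
    """Sample pixels per image; return per-channel values and a luminance proxy in [0, 1]."""
    rng = np.random.default_rng(seed)
    channel_vals, luminance = [], []
    for img in images:                       # img: H x W x 3 array with values in [0, 255]
        h, w, _ = img.shape
        ys = rng.integers(0, h, n_pixels)
        xs = rng.integers(0, w, n_pixels)
        px = img[ys, xs].astype(np.float32) / 255.0
        channel_vals.append(px)              # per-channel RGB samples
        luminance.append(px.mean(axis=1))    # mean RGB value as luminance
    return np.concatenate(channel_vals), np.concatenate(luminance)
```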
For zoom and shift transformations, we rely on an object detector which captures the central object in the image class. We use a MobileNet-SSD v1 (Liu et al., 2016) detector to estimate object bounding boxes, and average over image classes recognizable by the detector. For each successful detection, we take the highest probability bounding box corresponding to the desired class and use that to quantify the amount of transformation. For the zoom operation, we use the area of the bounding box normalized by the area of the total image. For shift in the X and Y directions, we take the center X and Y coordinates of the bounding box, and normalize by image width or height.
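A sketch of how detector outputs might be converted into the zoom and shift statistics. It assumes bounding boxes have already been produced by a detector, here as (x_min, y_min, x_max, y_max, score) tuples, rather than calling the detector itself.

```python
def zoom_shift_statistics(detections, img_w, img_h):
    """Turn per-image detections into the normalized statistics used for zoom and shift.

    `detections` is a list (one entry per image) of candidate boxes for the desired class;
    we keep the highest-scoring box per image."""
    stats = []
    for boxes in detections:
        if not boxes:
            continue                                       # skip failed detections
        x0, y0, x1, y1, _ = max(boxes, key=lambda b: b[-1])
        area = ((x1 - x0) * (y1 - y0)) / (img_w * img_h)   # zoom statistic: normalized box area
        cx = 0.5 * (x0 + x1) / img_w                       # shift-X statistic: normalized center X
        cy = 0.5 * (y0 + y1) / img_h                       # shift-Y statistic: normalized center Y
        stats.append((area, cx, cy))
    return stats
```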
Truncation parameters in GANs (as used in Brock et al. (2018); Karras et al. (2018)) trade off between the diversity of the generated images and sample quality. When comparing generated images to the dataset distribution, we use the largest possible truncation for the model and perform similar cropping and resizing of the dataset as done during model training (see Brock et al. (2018)). When comparing the attributes of generated distributions under different α magnitudes to each other but not to the dataset, we reduce truncation to 0.5 to ensure better performance of the object detector.
Reducing Transformation Limits Equations 1 and 2 learn a latent space walk assuming a pretrained generative model, thus keeping the model weights fixed. The previous approach allows us
to understand the latent space organization and limitations in the model’s transformation capacity. To overcome these limits, we explore adding data augmentation by editing the training images with each corresponding transformation, and train the generative model with this augmented dataset. We also introduce a modified objective function that jointly optimizes the generator weights and a linear walk vector:
G^{*}, w^{*} = \arg\min_{G,w} \big( \mathcal{L}_{edit} + \mathcal{L}_{GAN} \big) \qquad (3)
where the edit loss encourages low L2 error between learned transformation and target image:
\mathcal{L}_{edit} = \mathcal{L}_{2}\big( G(z + \alpha w) - \mathrm{edit}(G(z), \alpha) \big) \qquad (4)
The GAN loss optimizes for discriminator error:
\mathcal{L}_{GAN} = \max_{D} \big( \mathbb{E}_{z,\alpha}[D(G(z + \alpha w))] - \mathbb{E}_{x,\alpha}[D(\mathrm{edit}(x, \alpha))] \big) \qquad (5)
where we draw images x from the training dataset and perform data augmentation by applying the edit operation on them. This optimization approach encourages the generator to organize its latent space so that the transformations lie along linear paths, and when combined with data augmentation, results in larger transformation ranges, which we demonstrate in Sec. 4.4.
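A rough single-step sketch of the joint objective in Eqs. 3-5, written in PyTorch style with placeholder G, D, and edit() functions. It assumes opt_G updates both the generator parameters and the walk vector w, and it follows the critic sign convention exactly as written in Eq. 5.

```python
import torch

def joint_step(G, D, edit, opt_G, opt_D, w, z, x_real, alpha):
    """One optimization step for Eqs. 3-5 (critic written as in Eq. 5)."""
    # --- discriminator update on transformed fakes vs. edited real images ---
    x_fake = G(z + alpha * w).detach()
    x_aug = edit(x_real, alpha)                            # data augmentation on real images
    opt_D.zero_grad()
    d_loss = -(D(x_fake).mean() - D(x_aug).mean())         # maximize the bracket in Eq. 5
    d_loss.backward()
    opt_D.step()

    # --- generator / walk update: edit loss (Eq. 4) plus GAN loss ---
    opt_G.zero_grad()
    x_fake = G(z + alpha * w)
    x_target = edit(G(z), alpha).detach()
    edit_loss = ((x_fake - x_target) ** 2).mean()          # Eq. 4
    gan_loss = D(x_fake).mean()                            # generator side of Eq. 5
    (edit_loss + gan_loss).backward()
    opt_G.step()
    return edit_loss.item(), d_loss.item()
```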
4 EXPERIMENTS
We demonstrate our approach using BigGAN (Brock et al., 2018), a class-conditional GAN trained on 1000 ImageNet categories. We learn a shared latent space walk by averaging across the image categories, and further quantify how this walk affects each class differently. We focus on linear walks in latent space for the main text, and show additional results on nonlinear walks in Sec. 4.3 and Appendix B.4.2. We also conduct experiments on StyleGAN (Karras et al., 2018), which uses an unconditional style-based generator architecture in Sec. 4.3 and Appendix B.5.
4.1 WHAT IMAGE TRANSFORMATIONS CAN WE ACHIEVE IN LATENT SPACE?
We show qualitative results of the learned transformations in Fig. 1. By steering in the generator latent space, we learn a variety of transformations on a given source image (shown in the center panel of each transformation). Interestingly, several priors come into play when learning these image transformations. When we shift a daisy downwards in the Y direction, the model hallucinates that the sky exists on the top of the image. However, when we shift the daisy up, the model inpaints the remainder of the image with grass. When we alter the brightness of an image, the model transitions between nighttime and daytime. This suggests that the model can extrapolate from the original source image, and still remain consistent with the image context.
However, when we increase the step size of α, we observe that the degree to which we can achieve each transformation is limited. In Fig. 3 we observe two potential failure cases: one in which the image becomes unrealistic, and the other in which the image fails to transform any further. When we try to zoom in on a Persian cat, we observe that the cat no longer increases in size beyond some point, and in fact consistently undershoots the target zoom. On the other hand, when we try to zoom out on the cat, we observe that it begins to fall off the image manifold, and does not become any smaller after some point. Indeed, the perceptual distance (using LPIPS) between images decreases as we push α towards the transformation limits. Similar trends hold with other transformations: we are able to shift a lorikeet up and down to some degree until the transformation yields unrealistic output, and despite adjusting α on the rotation vector, we are unable to rotate a pizza. Are the limitations to these transformations governed by the training dataset? In other words, are our latent space walks limited because in ImageNet photos the cats are mostly centered and taken within a certain size? We seek to investigate and quantify these biases in the next sections.
An intriguing characteristic of the learned trajectory is that the amount it affects the output depends on the image class. In Fig. 4, we investigate the impact of the walk for different image categories under color transformations. By moving in the direction of a redness vector, we are able to successfully recolor a jellyfish, but we are unable to change the color of a goldfinch, which remains yellow with only slight changes in background textures. Likewise, increasing brightness changes an erupting volcano to a dormant one, but does not have much effect on Alps, which only transitions between night and day. In the third example, we use our latent walk to turn red sports cars to blue, but it cannot recolor firetrucks. Again, perceptual distance over image samples confirms these qualitative observations: a 2-sample t-test yields t = 20.77, p < 0.001 for jellyfish/goldfinch, t = 8.14, p < 0.001 for volcano/alp, and t = 6.84, p < 0.001 for sports car/fire engine. We hypothesize that the different impact of the shared transformation on separate image classes relates to the variability in the underlying dataset. The overwhelming majority of firetrucks are red2, but sports cars appear in a variety of colors. Therefore, our color transformation is constrained by the dataset biases of individual classes.
With shift, we can move the distribution of the center object by varying α. In the underlying model, the center coordinate of the object is most concentrated at half of the image width and height, but after applying the shift in X and shift in Y transformation, the mode of the transformed distribution varies between 0.3 and 0.7 of the image width/height. To quantify the distribution changes, we compute the area of intersection between the original model distribution and the distribution after applying each transformation and observe that the intersection decreases as we increase or decrease the magnitude of α. However, our transformations are limited to a certain extent – if we increase α
2but apparently blue fire trucks do exist! (DiGrazia, 2019)
beyond 150 pixels for vertical shifts, we start to generate unrealistic images, as evidenced by a sharp rise in FID and converging modes in the transformed distributions (Fig. 5 columns 2 & 3).
We perform a similar procedure for zoom, by measuring the area of the bounding box for the detected object under different magnitudes of α. Like shift, we observe that subsequent increases in α magnitude start to have smaller and smaller effects on the mode of the resulting distribution (Fig. 5 last column). Past an 8x zoom in or out, we observe an increase in the FID signifying decreasing image quality. Interestingly for zoom, the FID under zooming in and zooming out is anti-symmetric, indicating that how well we can zoom-in and retain realistic images differs from that of zooming out. These trends are consistent with the plateau in transformation behavior that we qualitatively observe in Fig. 3. Although we can arbitrarily increase the α step size, after some point we are unable to achieve further transformation and risk deviating from the natural image manifold.
4.2 HOW DOES THE DATA AFFECT THE TRANSFORMATIONS?
Is the extent to which we can transform each class, as we observed in Fig. 4, due to limited variability in the underlying dataset for each class? One way of quantifying this is to measure the difference in transformed model means, model(+α) and model(-α), and compare it to the spread of the dataset distribution. For each class, we compute the standard deviation of the dataset with respect to our statistic of interest (pixel RGB value for color, and bounding box area and center value for zoom and shift transformations respectively). We hypothesize that if the amount of transformation is biased depending on the image class, we will observe a correlation between the distance of the mean shifts and the standard deviation of the data distribution.
More concretely, we define the change in model means under a given transformation as:
\Delta\mu_{k} = \mu_{k,\mathrm{model}(+\alpha^{*})} - \mu_{k,\mathrm{model}(-\alpha^{*})} \qquad (6)
for a given class k, where we set α∗ to the largest and smallest α values used in training. The degree to which we achieve each transformation is a function of α, so we use the same α value for all classes – one that is large enough to separate the means of µk,model(+α∗) and µk,model(-α∗) under
transformation, but also for which the FID of the generated distribution remains below a threshold T of generating reasonably realistic images (for our experiments we use T = 22).
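The per-class comparison can be summarized with a short helper like the following; the dictionary-based interface is an assumption made for illustration.

```python
import numpy as np

def transform_gap_vs_spread(model_stats_pos, model_stats_neg, dataset_stats):
    """For each class k, compute the model mean shift (Eq. 6) and the dataset spread.

    Each argument maps class -> 1-D array of the attribute statistic
    (e.g. normalized bounding-box area for the zoom transformation)."""
    delta_mu, sigma = [], []
    for k in dataset_stats:
        delta_mu.append(np.mean(model_stats_pos[k]) - np.mean(model_stats_neg[k]))  # Eq. 6
        sigma.append(np.std(dataset_stats[k]))        # dataset standard deviation for class k
    return np.array(sigma), np.array(delta_mu)
```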
In Fig. 6 we plot the standard deviation σ of the dataset on the x-axis, and the model ∆µ under a +α∗ and −α∗ transformation on the y-axis, as defined in Eq. 6. We sample randomly from 100 classes for the color, zoom and shift transformations, and generate 200 samples of each class under the positive and negative transformations. We use the same setup of drawing samples from the model and dataset and computing the statistics for each transformation as described in Sec. 4.1.
Indeed, we find that the width of the dataset distribution, captured by the standard deviation of random samples drawn from the dataset for each class, relates to how much we can transform. There is a positive correlation between the spread of the dataset and the magnitude of ∆µ observed in the transformed model distributions, and the slope of all observed trends differs significantly from zero (p < 0.001 for all transformations). For the zoom transformation, we show examples of two extremes along the trend. For the “robin” class the spread σ in the dataset is low, and subsequently, the separation ∆µ that we are able to achieve by applying +α∗ and −α∗ transformations is limited. On the other hand, for “laptops”, the dataset spread is broad; ImageNet contains images of laptops of various sizes, and we are able to attain wider shifts in the model distribution.
From these results, we conclude that the amount of transformation we can achieve relates to the dataset variability. Consistent with our qualitative observations in Fig. 4, we find that if the images for a particular class have adequate coverage over the entire range of a given transformation, then we are better able to move the model distribution to both extremes. On the other hand, if the images for a given class are less diverse, the transformation is limited by this dataset bias.
4.3 ALTERNATIVE ARCHITECTURES AND WALKS
We ran an identical set of experiments using the nonlinear walk in the BigGAN latent space (Eq 2) and obtained similar quantitative results. To summarize, the Pearson’s correlation coefficient between dataset σ and model ∆µ for linear walks and nonlinear walks is shown in Table 1, and full results in Appendix B.4.2. Qualitatively, we observe that while the linear trajectory undershoots the targeted level of transformation, it is able to preserve more realistic-looking results (Fig. 7). The
transformations involve a trade-off between minimizing the loss and maintaining realistic output, and we hypothesize that the linear walk functions as an implicit regularizer that corresponds well with the inherent organization of the latent space.
To test the generality of our findings across model architecture, we ran similar experiments on StyleGAN, in which the latent space is divided into two spaces, z and W . Since Karras et al. (2018) note that the W space is less entangled than z, we apply the linear walk to W and show results in Fig. 8 and Appendix B.5. One interesting aspect of StyleGAN is that we can change color while leaving other structure in the image unchanged. In other words, while green faces do not naturally exist in the dataset, the StyleGAN model is still able to generate them. This differs from the behavior of BigGAN, where changing color results in different semantics in the image, e.g., turning a dormant volcano to an active one. StyleGAN, however, does not preserve the exact geometry of objects under other transformations, e.g., zoom and shift (see Appendix B.5).
4.4 TOWARDS STEERABLE GANS
So far, we have frozen the parameters of the generative model when learning a latent space walk for image editing, and observe that the transformations are limited by dataset bias. Here we investigate approaches to overcome these limitations and increase model steerability. For these experiments, we use a class-conditional DCGAN model (Radford et al., 2015) trained on MNIST digits (LeCun, 1998).
To study the effect of dataset biases, we train (1) a vanilla DCGAN and (2) a DCGAN with data augmentation, and then learn the optimal walk in Eq. 1 after the model has been trained – we refer to these two approaches in Fig. 9 as argmin W and argmin W + aug, respectively. We observe that adding data augmentation yields transformations that better approximate the target image and
attain lower L2 error than the vanilla DCGAN (blue and orange curves in Fig. 9). Qualitatively, we observe that transformations using the vanilla GAN (argmin W) become patchy and unrealistic as we increase the magnitude of α, but when the model is trained with data augmentation (argmin W + aug), the digits retain their structural integrity.
Rather than learning the walk vector w assuming a frozen generator, we may also jointly optimize the model and linear walk parameter together, as we formalized in Eq. 3. This allows the model to learn an equivariance between linear directions in the latent space and the corresponding image transformations. We refer to this model as argmin G,W in Fig. 9. Compared to the frozen generator (in argmin W and argmin W + aug), the joint objective further decreases L2 error (green curve in Fig. 9). We show additional qualitative examples in Appendix B.8. The steerable range of the generator increases with joint optimization and data augmentation, which provides additional evidence that training data bias impacts the models’ steerability and generalization capacity. We tried DCGAN on CIFAR10 as a more complicated dataset; however, we were unable to get steering to be effective – all three methods failed to produce realistic transformations, and joint training in fact performed the worst. Finding the right steering implementation per GAN and dataset, especially for joint training, may be a difficult problem and an interesting direction for future work.
(Figure 9 panels: L2 error versus α for Rotate 2D, Zoom, and Shift X; the legends compare argmin W, argmin W + aug, and argmin G,W.)
Figure 9: Reducing the effect of transformation limits. Using a DCGAN model on MNIST digits, we compare the L2 reconstruction errors on latent space walks for models trained with vanilla GANs without (argmin W) and with data augmentation (argmin W + aug). We also compare to jointly optimizing the generator and the walk parameters with data augmentation (argmin G,W), which achieves the lowest L2 error.
5 CONCLUSION
GANs are powerful generative models, but are they simply replicating the existing training datapoints, or can they generalize beyond the training distribution? We investigate this question by exploring walks in the latent space of GANs. We optimize trajectories in latent space to reflect simple image transformations in the generated output, learned in a self-supervised manner. We find that the model is able to exhibit characteristics of extrapolation – we are able to “steer” the generated output to simulate camera zoom, horizontal and vertical movement, camera rotations, and recolorization. However, our ability to naively move the distribution is finite: we can transform images to some degree but cannot extrapolate entirely outside the support of the training data. To increase model steerability, we add data augmentation during training and jointly optimize the model and walk trajectory. Our experiments illustrate the connection between training data bias and the resulting distribution of generated images, and suggest methods for extending the range of images that the models are able to create.
ACKNOWLEDGEMENTS
We would like to thank Quang H Le, Lore Goetschalckx, Alex Andonian, David Bau, and Jonas Wulff for helpful discussions. This work was supported by a Google Faculty Research Award to P.I., and a U.S. National Science Foundation Graduate Research Fellowship to L.C.
A METHOD DETAILS
A.1 OPTIMIZATION FOR THE LINEAR WALK
We learn the walk vector using mini-batch stochastic gradient descent with the Adam optimizer (Kingma & Ba, 2014) in tensorflow, trained on 20000 unique samples from the latent space z. We share the vector w across all ImageNet categories for the BigGAN model.
A.2 IMPLEMENTATION DETAILS FOR LINEAR WALK
We experiment with a number of different transformations learned in the latent space, each corresponding to a different walk vector. Each of these transformations can be learned without any direct supervision, simply by applying our desired edit to the source image. Furthermore, the parameter α allows us to vary the extent of the transformation. We found that a slight modification to each transformation improved the degree to which we were able to steer the output space: we scale α differently for the learned transformation G(z + αgw), and the target edit edit(G(z), αt). We detail each transformation below:
Shift. We learn transformations corresponding to shifting an image in the horizontal X direction and the vertical Y direction. We train on source images that are shifted −αt pixels to the left and αt pixels to the right, where we set αt to be between zero and one-half of the source image width or height D. When training the walk, we enforce that the αg parameter ranges between -1 and 1; thus for a random shift by αt pixels, we use the value αg = αt/D. We apply a mask to the shifted image, so that we only apply the loss function on the visible portion of the source image. This forces the generator to extrapolate on the obscured region of the target image.
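A sketch of the shift edit and its visibility mask, assuming images as H x W x C NumPy arrays; only the horizontal case is shown.

```python
import numpy as np

def edit_shift_x(img, alpha_t):
    """Shift an image horizontally by alpha_t pixels; return (target, valid-pixel mask)."""
    alpha_t = int(alpha_t)
    h, w, _ = img.shape
    target = np.zeros_like(img)
    mask = np.zeros((h, w), dtype=bool)
    if alpha_t >= 0:                                  # shift content to the right
        target[:, alpha_t:] = img[:, :w - alpha_t]
        mask[:, alpha_t:] = True
    else:                                             # shift content to the left
        target[:, :w + alpha_t] = img[:, -alpha_t:]
        mask[:, :w + alpha_t] = True
    return target, mask        # the loss is applied only where mask is True
```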
Zoom. We learn a walk which is optimized to zoom in and out up to four times the original image. For zooming in, we crop the central portion of the source image by some αt amount, where 0.25 < αt < 1 and resize it back to its original size. To zoom out, we downsample the image by αt where 1 < αt < 4. To allow for both a positive and negative walk direction, we set αg = log(αt). Similar to shift, a mask applied during training allows the generator to inpaint the background scene.
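A sketch of the zoom edit, assuming uint8 H x W x 3 images and using PIL for resizing; the exact resampling filter used in the paper is not specified here.

```python
import numpy as np
from PIL import Image

def edit_zoom(img, alpha_t):
    """Zoom edit: crop-and-resize for alpha_t < 1 (zoom in), downsample-and-pad for alpha_t > 1."""
    h, w, _ = img.shape
    if alpha_t < 1:                                   # zoom in: crop the central alpha_t fraction
        ch, cw = int(h * alpha_t), int(w * alpha_t)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        out = np.asarray(Image.fromarray(img[y0:y0 + ch, x0:x0 + cw]).resize((w, h)))
        mask = np.ones((h, w), dtype=bool)
    else:                                             # zoom out: shrink and paste at the center
        sh, sw = int(h / alpha_t), int(w / alpha_t)
        small = np.asarray(Image.fromarray(img).resize((sw, sh)))
        out = np.zeros_like(img)
        mask = np.zeros((h, w), dtype=bool)
        y0, x0 = (h - sh) // 2, (w - sw) // 2
        out[y0:y0 + sh, x0:x0 + sw] = small
        mask[y0:y0 + sh, x0:x0 + sw] = True
    return out, mask           # alpha_g = log(alpha_t) parametrizes the latent step
```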
Color. We implement color as a continuous RGB slider, e.g., a 3-tuple αt = (αR, αG, αB), where each αR, αG, αB can take values between [−0.5, 0.5] in training. To edit the source image, we simply add the corresponding αt values to each of the image channels. Our latent space walk is parameterized as z + αgw = z + αRwR + αGwG + αBwB where we jointly learn the three walk directions wR, wG, and wB .
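A sketch of the color edit for images in [0, 1]; clipping to a valid range is an assumption added for safety.

```python
import numpy as np

def edit_color(img, alpha_rgb):
    """Color edit: add a per-channel offset (alpha_R, alpha_G, alpha_B) to an image in [0, 1]."""
    offset = np.asarray(alpha_rgb, dtype=np.float32).reshape(1, 1, 3)
    return np.clip(img.astype(np.float32) + offset, 0.0, 1.0)   # clipping keeps a valid image range
```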
Rotate in 2D. Rotation in 2D is trained in a similar manner as the shift operations, where we train with −45 ≤ αt ≤ 45 degree rotation. Using R = 45, we scale αg = αt/R. We use a mask to enforce the loss only on visible regions of the target.
Rotate in 3D. We simulate a 3D rotation using a perspective transformation along the Z-axis, essentially treating the image as a rotating billboard. Similar to the 2D rotation, we train with −45 ≤ αt ≤ 45 degree rotation, we scale αg = αt/R where R = 45, and apply a mask during training.
A.3 LINEAR NN(z) WALK
Rather than defining w as a vector in z space (Eq. 1), one could define it as a function that takes a z as input and maps it to the desired z′ after taking a variable-sized step α in latent space. In this case, we may parametrize the walk with a neural network w = NN(z), and transform the image using G(z + αNN(z)). However, as we show in the following proof, this idea will not learn to let w be a function of z.
Proof. For simplicity, let w = F (z). We optimize J(w,α) = E_z[L(G(z + αw), edit(G(z), α))], where α is an arbitrary scalar value. Note that for the target image, two equal edit operations are equivalent to performing a single edit of twice the size (e.g., shifting by 10px is the same as shifting by 5px twice; zooming by 4x is the same as zooming by 2x twice). That is,
edit(G(z), 2α) = edit(edit(G(z), α), α).
To achieve this target, starting from an initial z, we can take two steps of size α in latent space as follows:
z1 = z + αF (z)
z2 = z1 + αF (z1)
However, because we let α take on any scalar value during optimization, our objective function enforces that starting from z and taking a step of size 2α equals taking two steps of size α:
z + 2αF (z) = z1 + αF (z1) (7)
Therefore: z + 2αF (z) = z + αF (z) + αF (z1) ⇒ αF (z) = αF (z1) ⇒ F (z) = F (z1).
Thus F (·) simply becomes a linear trajectory that is independent of the input z.
A.4 OPTIMIZATION FOR THE NON-LINEAR WALK
Given the limitations of the previous walk, we define our nonlinear walk F (z) using discrete step sizes ε. We define F (z) as z + NN(z), where the neural network NN learns a fixed ε-step transformation, rather than a variable α step. We then renormalize the magnitude of z. This approach mimics the Euler method for solving ODEs with a discrete step size, where we assume that the gradient of the transformation in latent space is of the form dz/dt = NN(z) and we approximate z_{i+1} = z_i + (dz/dt)|_{z_i}. The key difference from A.3 is the fixed step size, which avoids optimizing for the equality in (7).
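A sketch of the recursive ε-step walk; renormalizing back to the initial latent norm is one plausible reading of “renormalize the magnitude of z”, and step_net stands in for the NN described above.

```python
import torch

def nonlinear_walk(z, step_net, n_steps, renorm=True):
    """Apply the fixed eps-step map F(z) = z + NN(z) recursively (Euler-style updates)."""
    target_norm = z.norm(dim=-1, keepdim=True)
    for _ in range(n_steps):
        z = z + step_net(z)                           # one eps-sized step in latent space
        if renorm:                                    # keep the latent magnitude roughly fixed
            z = z * target_norm / z.norm(dim=-1, keepdim=True)
    return z
```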
We use a two-layer neural network to parametrize the walk, and optimize over 20000 samples using the Adam optimizer as before. Positive and negative transformation directions are handled with two neural networks having identical architecture but independent weights. We set ε to achieve the same transformation ranges as the linear trajectory within 4-5 steps.
B ADDITIONAL EXPERIMENTS
B.1 MODEL AND DATA DISTRIBUTIONS
How well does the model distribution of each property match the dataset distribution? If the generated images do not form a good approximation of the dataset variability, we expect that this would also impact our ability to transform generated images. In Fig. 10 we show the attribute distributions of the BigGAN model G(z) compared to samples from the ImageNet dataset. We show corresponding results for StyleGAN and its respective datasets in Appendix B.5. While there is some bias in how well model-generated images approximate the dataset distribution, we hypothesize that additional biases in our transformations come from variability in the training data.
B.2 QUANTIFYING TRANSFORMATION LIMITS
We observe that when we increase the transformation magnitude α in latent space, the generated images become unrealistic and the transformation ceases to have further effect. We show this qualitatively in Fig. 3. To quantitatively verify these trends, we can compute the LPIPS perceptual distance of images generated using consecutive pairs of αi and αi+1. For shift and zoom transformations, perceptual distance is larger when α (or log(α) for zoom) is near zero, and decreases as the magnitude of α increases, which indicates that large α magnitudes have a smaller transformation effect, and the transformed images appear more similar. On the other hand, color and rotate in 2D/3D exhibit a steady transformation rate as the magnitude of α increases.
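This consecutive-step analysis can be sketched as follows, assuming the pip-installable lpips package and a generator G that outputs images in the range expected by LPIPS.

```python
import torch
import lpips  # assumes the pip package released by the LPIPS authors

def consecutive_step_distances(G, w, z, alphas, lpips_net=None):
    """LPIPS distance between images generated at consecutive alpha values along a walk."""
    lpips_net = lpips_net or lpips.LPIPS(net='alex')
    dists = []
    with torch.no_grad():
        imgs = [G(z + a * w) for a in alphas]          # images expected in [-1, 1], NCHW
        for x0, x1 in zip(imgs[:-1], imgs[1:]):
            dists.append(lpips_net(x0, x1).mean().item())
    return dists                                       # small values suggest the transformation has saturated
```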
Note that this analysis does not tell us how well we achieve the specific transformation, nor whether the latent trajectory deviates from natural-looking images. Rather, it tells us how much we manage to change the image, regardless of the transformation target. To quantify how well each transformation is achieved, we rely on attribute detectors such as object bounding boxes (see B.3).
B.3 DETECTED BOUNDING BOXES
To quantify the degree to which we are able to achieve the zoom and shift transformations, we rely on a pre-trained MobileNet-SSD v1 object detection model3. In Fig. 12 and 13 we show the results of applying the object detection model to images from the dataset, and images generated by the model under the zoom, horizontal shift, and vertical shift transformations for randomly selected values of α, to qualitatively verify that the object detection boundaries are reasonable. Not all ImageNet images contain recognizable objects, so we only use ImageNet classes containing objects recognizable by the detector for this analysis.
B.4 ALTERNATIVE WALKS IN BIGGAN
B.4.1 LPIPS OBJECTIVE
In the main text, we learn the latent space walk w by minimizing the objective function:
J(w, \alpha) = \mathbb{E}_{z}\big[\mathcal{L}\big(G(z + \alpha w), \mathrm{edit}(G(z), \alpha)\big)\big] \qquad (8)
using a Euclidean loss for L. In Fig. 14 we show qualitative results using the LPIPS perceptual similarity metric (Zhang et al., 2018) instead of Euclidean loss. Walks were trained using the same parameters as those in the linear-L2 walk shown in the main text: we use 20k samples for training, with the Adam optimizer and learning rate 0.001 for zoom and color, 0.0001 for the remaining edit operations (due to scaling of α).
B.4.2 NON-LINEAR WALKS
Following the non-linear walk formulation in Appendix A.4, we modify our objective to use discrete step sizes ε rather than continuous steps. We learn a function F (z) to perform this ε-step transformation on a given latent code z, where F (z) is parametrized with a neural network. We show qualitative results in Fig. 15. We perform the same set of experiments shown in the main text using this nonlinear walk in Fig. 16. These experiments
3https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API
exhibit similar trends as we observed in the main text – we are able to modify the generated distribution of images using latent space walks, and the amount to which we can transform is related to the variability in the dataset. However, there are greater increases in FID when we apply the non-linear transformation, suggesting that these generated images deviate more from natural images and look less realistic.
B.4.3 ADDITIONAL QUALITATIVE EXAMPLES
We show qualitative examples for randomly generated categories for BigGAN linear-L2, linear LPIPS, and nonlinear trajectories in Figs. 17, 18, 19 respectively.
B.5 WALKS IN STYLEGAN
We perform similar experiments for linear latent space walks using StyleGAN models trained on the LSUN cat, LSUN car, and FFHQ face datasets. As suggested by Karras et al. (2018), we learn the walk vector in the intermediate W latent space due to improved attribute disentanglement in W . We show qualitative results for color, shift, and zoom transformations in Figs. 20, 22, 24 and corresponding quantitative analyses in Figs. 21, 23, 25. We show qualitative examples for the comparison of optimizing in the W and z latent spaces in StyleGAN in Fig. 28.
B.6 WALKS IN PROGRESSIVE GAN
We also experiment with the linear walk objective in the latent space of Progressive GAN (Karras et al., 2017). One interesting property of the Progressive GAN interpolations is that they take much longer to train to have a visual effect – for example for color, we could obtain drastic color changes in the StyleGAN W latent space using as few as 2k samples, but with Progressive GAN, we used 60k samples and still did not obtain as strong of an effect. This points to the StyleGAN W latent space being more “flexible” and generalizable for transformation, compared to the latent space of Progressive GAN. Moreover, we qualitatively observe some entanglement in the Progressive GAN transformations – for example, changing the level of zoom also changes the lighting. We did not observe big effects in the horizontal and vertical shift transformations. Qualitative examples and quantitative results are shown in Figs. 26, 27.
B.7 QUALITATIVE EXAMPLES FOR ADDITIONAL TRANSFORMATIONS
Since the color transformation operates on individual pixels, we can optimize the walk using a segmented target – for example, when learning a walk for cars, we only modify pixels in the segmented car region when generating edit(G(z), α). StyleGAN is able to roughly localize the color transformation to this region, suggesting disentanglement of different objects within the W latent space (Fig. 29 left), as also noted in Karras et al. (2018); Shen et al. (2019). We also show qualitative results for adjusting image contrast (Fig. 29 right), and for combining zoom, shift X, and shift Y transformations (Fig. 30).
B.8 ADDITIONAL RESULTS FOR IMPROVING MODEL STEERABILITY
We further test the hypothesis that dataset variability impacts the amount we are able to transform by comparing DCGAN models trained with and without data augmentation. Namely, with data augmentation, the discriminator is able to see edited versions of the real images. We also jointly train the model and the walk trajectory which encourages the model to learn linear walks. For zoom, horizontal shift, and 2D rotate transformations, additional samples for three training approaches – without data augmentation, with data augmentation, and joint optimization – appear in Fig. 31-33. Qualitatively, transformations using the model trained without data augmentation degrade the digit structure as α magnitude increases, and may even change one digit to another. Training with data augmentation and joint optimization better preserves digit structure and identity.
(Appendix figure panels: attribute distributions and perceptual-distance versus α curves for the Luminance, Shift X, Shift Y, Zoom, Rotate 2D, and Rotate 3D transformations.)
| 1. What is the focus of the paper, and what are the key contributions regarding attribute manipulation in latent space?
2. What are the strengths of the proposed approach, particularly in terms of its ability to improve "steerability"?
3. What are some of the limitations or potential improvements regarding the techniques used to enhance the range of possible attribute manipulations?
4. Can the authors provide additional explanations or clarifications regarding certain aspects of their methodology, such as evaluating color distribution on a sampled subset of pixels or the outliers in the transformation limitation/data variability plots?
5. Have the authors considered exploring other approaches to manipulate attributes using different spaces, such as manipulating location-based attributes by learning walks in the W space? If so, what were the results? | Review | Review
The paper explores and experiments on extrapolating attributes of images produced by GANs by manipulating their representations in latent space. Attribute manipulation is done by predicting latent space walks (linear or non-linear) and is learned in a self-supervised way by using augmented outputs of a pretrained GAN as target images.
Authors experimentally show dependence of range of possible attribute manipulations on the diversity of the dataset in terms of that attribute as well as propose techniques to improve it.
Suggested concepts are explained in a clear way with extensive experiments confirming the findings. Techniques proposed for improving "steerability" of GANs are backed up by both qualitative and quantitative analysis, although missing experiments on more sophisticated datasets than MNIST.
Overall, I recommend to accept this paper.
Several questions I would like the authors to address to make some details more clear and the paper more complete:
1. Why the color distribution of generated images is evaluated on a sampled subset of pixels, not full images? ("Quantifying steerability" section.)
2. On Figure 6, which classes are outlying on transformation limitation / data variability plots (bottom-right corner) and how it may be explained?
3. While StyleGAN can not preserve geometry of objects for shift in location-based attributes, when walks are learned in the W space, have you experimented on manipulating those attributes with z space? What are the results?
Other minor flaws include
1. Pictures in Fig. 2 are mixed up between G(z) and G(z + \alpha w)
2. In Fig. 2 edit(G(z, \alpha)) -> edit(G(z), \alpha))
3. In eq. (2) f^n(z) -> G(f^n(z))
4. In eq. (6) +\alpha^* -> -\alpha^* |
ICLR | Title
On the "steerability" of generative adversarial networks
Abstract
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to biased training data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise – these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by “steering” in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution. Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. Code is released on our project page: https://ali-design.github.io/gan_steerability/.
1 INTRODUCTION
The quality of deep generative models has increased dramatically over the past few years. When introduced in 2014, Generative Adversarial Networks (GANs) could only synthesize MNIST digits and low-resolution grayscale faces (Goodfellow et al., 2014). The most recent models, however, produce diverse high-resolution images that are often indistinguishable from natural photos (Brock et al., 2018; Karras et al., 2018).
Science fiction has long dreamed of virtual realities filled with synthetic content as rich as, or richer than, the real world (e.g., The Matrix, Ready Player One). How close are we to this dream? Traditional computer graphics can render photorealistic 3D scenes, but cannot automatically generate detailed content. Generative models like GANs, in contrast, can create content from scratch, but we do not currently have tools for navigating the generated scenes in the same kind of way as you can walk through and interact with a 3D game engine.
In this paper, we explore the degree to which you can navigate the visual world of a GAN. Figure 1 illustrates the kinds of transformations we explore. Consider the dog at the top-left. By moving in some direction of GAN latent space, can we hallucinate walking toward this dog? As the figure indicates, and as we will show in this paper, the answer is yes. However, as we continue to zoom in, we quickly reach limits. Once the dog face fills the full frame, continuing to walk in this direction fails to increase the zoom. A similar effect occurs in the daisy example (row 2 of Fig. 1), where a direction in latent space moves the daisy up and down, but cannot move it out of frame.
We hypothesize that these limits are due to biases in the distribution of images on which the GAN is trained. For example, if the training dataset consists of centered dogs and daises, the same may be the case in GAN-generated images. Nonetheless, we find that some degree of transformation is possible. When and why can we achieve certain transformations but not others?
This paper seeks to quantify the degree to which we can achieve basic visual transformations by navigating in GAN latent space. In other words, are GANs “steerable” in latent space?1 We analyze the relationship between the data distribution on which the model is trained and the success in achieving these transformations. From our experiments, it is possible to shift the distribution of generated images to some degree, but we cannot extrapolate entirely out of the dataset’s support. In particular, attributes can be shifted in proportion to the variability of that attribute in the training data. We further demonstrate an approach to increase model steerability by jointly optimizing the generator and latent direction, together with data augmentation on training images. One of the current criticisms of generative models is that they simply interpolate between datapoints, and fail to generate anything truly new, but our results add nuance to this story. It is possible to achieve distributional shift, but the ability to create realistic images from a modified distribution relies on sufficient diversity in the dataset along the dimension that we vary.
Our main findings are:
• A simple walk in the latent space of GANs achieves camera motion and color transformations in the output image space. These walks are learned in a self-supervised manner without labeled attributes or distinct source and target images.
• The linear walk is as effective as more complex non-linear walks, suggesting that the models learn to roughly linearize these operations without being explicitly trained to do so.
• The extent of each transformation is limited, and we quantify a relationship between dataset variability and how much we can shift the model distribution.
• The transformations are a general-purpose framework that works with different model architectures, e.g. BigGAN, StyleGAN, and DCGAN, and illustrates different disentanglement properties in their respective latent spaces.
• Data augmentation improves steerability, as does jointly training the walk trajectory and the generator weights, which allows us to achieve larger transformation effects.
2 RELATED WORK
Latent space manipulations can be seen from several perspectives – how we achieve it, what limits it, and what it enables us to do. Our work addresses these three aspects together, and we briefly refer to each one in related work.
Interpolations in latent space Traditional approaches to image editing with GAN latent spaces find linear directions that correspond to changes in labeled attributes, such as smile-vectors and gender-vectors for faces (Radford et al., 2015; Karras et al., 2018). However, these manipulations are not exclusive to GANs; in flow-based generative models, linearly interpolating between two encoded images allows one to edit a source image toward attributes of the target (Kingma & Dhariwal, 2018). Möllenhoff & Cremers (2019) proposes a modified GAN formulation by treating data
1We use the term “steerable” in analogy to the classic steerable filters of Freeman & Adelson (1991).
as directional k-currents, where moving along tangent planes naturally corresponds to interpretable manipulations. Upchurch et al. (2017) removes the generative model entirely and instead interpolates in the intermediate feature space of a pretrained classifier, again using feature mappings of source and target sets to determine an edit direction. Unlike these approaches, we learn our latent-space trajectories in a self-supervised manner without labeled attributes or distinct source and target images. Instead, we learn to approximate editing operations on individual source images. We find that linear trajectories in latent space can capture simple image manipulations, e.g., zoom-vectors and shift-vectors, although we also obtain similar results using nonlinear trajectories.
Dataset bias Biases from training data and network architecture both impact the generalization capacity of learned models (Torralba & Efros, 2011; Geirhos et al., 2018; Amini et al.). Dataset biases partly come from human preferences in taking photos: we tend to take pictures in specific “canonical” views that are not fully representative of the entire visual world (Mezuman & Weiss, 2012; Jahanian et al., 2015). Consequently, models trained with these datasets inherit their biases. This may result in models that misrepresent the given task – such as tendencies towards texture bias rather than shape bias on ImageNet classifiers (Geirhos et al., 2018) – and in turn limits their generalization performance on similar objectives (Azulay & Weiss, 2018). Our latent space trajectories transform the output corresponding to various image editing operations, but ultimately we are constrained by biases in the data and cannot extrapolate arbitrarily far beyond the data’s support.
Generative models for content creation The recent progress in generative models has opened interesting avenues for content creation (Brock et al., 2018; Karras et al., 2018), including applications that enable users to fine-tune the generated output (Simon; Zhu et al., 2016; Bau et al., 2018). A by-product of the current work is enabling users to modify image properties by turning a single knob – the magnitude of the learned transformation in latent space. We further demonstrate that these image manipulations are not just a simple creativity tool; they also provide us with a window into biases and generalization capacity of these models.
Applications of latent space editing Image manipulations using generative models suggest several interesting downstream applications. For example, Denton et al. (2019) learns linear walks corresponding to various facial characteristics – they use these to measure biases in facial attribute detectors, whereas we study biases in the generative model that originate from training data. Shen et al. (2019) also assumes linear latent space trajectories and learns paths for face attribute editing according to semantic concepts such as age and expression, thus demonstrating disentanglement of the latent space. White (2016) suggests approaches to improve the learned manipulations, such as using spherical linear interpolations, resampling images to remove biases in attribute vectors, and using data augmentation as a synthetic attribute for variational autoencoders. Goetschalckx et al. (2019) applies a linear walk to achieve transformations corresponding to cognitive properties of an image such as memorability, aesthetics, and emotional valence. Unlike these works, we do not require an attribute detector or assessor function to learn the latent space trajectory, and therefore our loss function is based on image similarity between source and target images. In addition to linear walks, we explore using non-linear walks parametrized by neural networks for editing operations.
3 METHOD
Generative models such as GANs (Goodfellow et al., 2014) learn a mapping function G such that G : z → x. Here, z is the latent code drawn from a Gaussian density and x is an output, e.g., an image. Our goal is to achieve transformations in the output space by moving in latent space, as shown in Fig. 2. In general, this goal also captures the idea in equivariance, in which transformations in the input space result in equivalent transformations in the output space (c.f. Hinton et al. (2011); Cohen et al. (2019); Lenc & Vedaldi (2015)).
Objective We want to learn an N-dimensional vector representing the optimal path in latent space for a given transformation. The vector is multiplied by a continuous parameter α which signifies the step size: large α values correspond to a greater degree of transformation, while small α values correspond to a lesser degree. Formally, we learn the walk w by minimizing the objective function:
w^{*} = \arg\min_{w} \; \mathbb{E}_{z,\alpha}\big[\mathcal{L}\big(G(z + \alpha w), \mathrm{edit}(G(z), \alpha)\big)\big] \qquad (1)
Here, L measures the distance between the generated image after taking an α-step in the latent direction G(z + αw) and the target edit(G(z), α) derived from the source image G(z). We use L2 loss as our objective L; however, we also obtain similar results when using the LPIPS perceptual image similarity metric (Zhang et al., 2018) (see Appendix B.4.1). Note that we can learn this walk in a fully self-supervised manner – we perform the edit(·) operation on an arbitrary generated image and subsequently optimize the vector w to minimize the objective. Let model(α) denote the output generated after taking an α-step along the optimized direction w∗, i.e., model(α) = G(z + αw∗).
The previous setup assumes linear latent space walks, but we can also learn non-linear trajectories in which the walk direction depends on the current latent space position. For the non-linear walk, we learn a function, f∗(z), which corresponds to a small ε-step transformation edit(G(z), ε). To achieve bigger transformations, we apply f recursively, mimicking discrete Euler ODE approximations. Formally, for a fixed ε, we minimize
\mathcal{L} = \mathbb{E}_{z,n}\big[\,\|G(f^{n}(z)) - \mathrm{edit}(G(z), n\epsilon)\|\,\big] \qquad (2)
where f^n(·) is an nth-order function composition f(f(f(...))), and f(z) is parametrized with a neural network. We discuss further implementation details in Appendix A.4. We use this function composition approach rather than the simpler setup of G(z + αNN(z)) because the latter learns to ignore the input z when α takes on continuous values, and is thus equivalent to the previous linear trajectory (see Appendix A.3 for further details).
Quantifying Steerability We further seek to quantify how well we can achieve desired image manipulations under each transformation. To this end, we compare the distribution of a given attribute, e.g., “luminance”, in the dataset versus in images generated after walking in latent space.
For color transformations, we consider the effect of increasing or decreasing the α coefficient corresponding to each color channel. To estimate the color distribution of model-generated images, we randomly sample N = 100 pixels per image both before and after taking a step in latent space. Then, we compute the pixel value for each channel, or the mean RGB value for luminance, and normalize the range between 0 and 1.
For zoom and shift transformations, we rely on an object detector which captures the central object in the image class. We use a MobileNet-SSD v1 (Liu et al., 2016) detector to estimate object bounding boxes, and average over image classes recognizable by the detector. For each successful detection, we take the highest probability bounding box corresponding to the desired class and use that to quantify the amount of transformation. For the zoom operation, we use the area of the bounding box normalized by the area of the total image. For shift in the X and Y directions, we take the center X and Y coordinates of the bounding box, and normalize by image width or height.
Truncation parameters in GANs (as used in Brock et al. (2018); Karras et al. (2018)) trade off between the diversity of the generated images and sample quality. When comparing generated images to the dataset distribution, we use the largest possible truncation for the model and perform similar cropping and resizing of the dataset as done during model training (see Brock et al. (2018)). When comparing the attributes of generated distributions under different α magnitudes to each other but not to the dataset, we reduce truncation to 0.5 to ensure better performance of the object detector.
Reducing Transformation Limits Equations 1 and 2 learn a latent space walk assuming a pretrained generative model, thus keeping the model weights fixed. The previous approach allows us
to understand the latent space organization and limitations in the model’s transformation capacity. To overcome these limits, we explore adding data augmentation by editing the training images with each corresponding transformation, and train the generative model with this augmented dataset. We also introduce a modified objective function that jointly optimizes the generator weights and a linear walk vector:
G^{*}, w^{*} = \arg\min_{G,w} \big( \mathcal{L}_{edit} + \mathcal{L}_{GAN} \big) \qquad (3)
where the edit loss encourages low L2 error between learned transformation and target image:
\mathcal{L}_{edit} = \mathcal{L}_{2}\big( G(z + \alpha w) - \mathrm{edit}(G(z), \alpha) \big) \qquad (4)
The GAN loss optimizes for discriminator error:
\mathcal{L}_{GAN} = \max_{D} \big( \mathbb{E}_{z,\alpha}[D(G(z + \alpha w))] - \mathbb{E}_{x,\alpha}[D(\mathrm{edit}(x, \alpha))] \big) \qquad (5)
where we draw images x from the training dataset and perform data augmentation by applying the edit operation on them. This optimization approach encourages the generator to organize its latent space so that the transformations lie along linear paths, and when combined with data augmentation, results in larger transformation ranges, which we demonstrate in Sec. 4.4.
4 EXPERIMENTS
We demonstrate our approach using BigGAN (Brock et al., 2018), a class-conditional GAN trained on 1000 ImageNet categories. We learn a shared latent space walk by averaging across the image categories, and further quantify how this walk affects each class differently. We focus on linear walks in latent space for the main text, and show additional results on nonlinear walks in Sec. 4.3 and Appendix B.4.2. We also conduct experiments on StyleGAN (Karras et al., 2018), which uses an unconditional style-based generator architecture in Sec. 4.3 and Appendix B.5.
4.1 WHAT IMAGE TRANSFORMATIONS CAN WE ACHIEVE IN LATENT SPACE?
We show qualitative results of the learned transformations in Fig. 1. By steering in the generator latent space, we learn a variety of transformations on a given source image (shown in the center panel of each transformation). Interestingly, several priors come into play when learning these image transformations. When we shift a daisy downwards in the Y direction, the model hallucinates that the sky exists on the top of the image. However, when we shift the daisy up, the model inpaints the remainder of the image with grass. When we alter the brightness of an image, the model transitions between nighttime and daytime. This suggests that the model can extrapolate from the original source image, and still remain consistent with the image context.
However, when we increase the step size of α, we observe that the degree to which we can achieve each transformation is limited. In Fig. 3 we observe two potential failure cases: one in which the image becomes unrealistic, and the other in which the image fails to transform any further. When we try to zoom in on a Persian cat, we observe that the cat no longer increases in size beyond some point, and in fact consistently undershoots the target zoom. On the other hand, when we try to zoom out on the cat, we observe that it begins to fall off the image manifold, and does not become any smaller after some point. Indeed, the perceptual distance (using LPIPS) between images decreases as we push α towards the transformation limits. Similar trends hold with other transformations: we are able to shift a lorikeet up and down to some degree until the transformation yields unrealistic output, and despite adjusting α on the rotation vector, we are unable to rotate a pizza. Are the limitations to these transformations governed by the training dataset? In other words, are our latent space walks limited because in ImageNet photos the cats are mostly centered and taken within a certain size? We seek to investigate and quantify these biases in the next sections.
An intriguing characteristic of the learned trajectory is that the amount it affects the output depends on the image class. In Fig. 4, we investigate the impact of the walk for different image categories under color transformations. By moving in the direction of a redness vector, we are able to successfully recolor a jellyfish, but we are unable to change the color of a goldfinch, which remains yellow with only slight changes in background textures. Likewise, increasing brightness changes an erupting volcano to a dormant one, but does not have much effect on Alps, which only transitions between night and day. In the third example, we use our latent walk to turn red sports cars to blue, but it cannot recolor firetrucks. Again, perceptual distance over image samples confirms these qualitative observations: a 2-sample t-test yields t = 20.77, p < 0.001 for jellyfish/goldfinch, t = 8.14, p < 0.001 for volcano/alp, and t = 6.84, p < 0.001 for sports car/fire engine. We hypothesize that the different impact of the shared transformation on separate image classes relates to the variability in the underlying dataset. The overwhelming majority of firetrucks are red2, but sports cars appear in a variety of colors. Therefore, our color transformation is constrained by the dataset biases of individual classes.
With shift, we can move the distribution of the center object by varying α. In the underlying model, the center coordinate of the object is most concentrated at half of the image width and height, but after applying the shift in X and shift in Y transformation, the mode of the transformed distribution varies between 0.3 and 0.7 of the image width/height. To quantify the distribution changes, we compute the area of intersection between the original model distribution and the distribution after applying each transformation and observe that the intersection decreases as we increase or decrease the magnitude of α. However, our transformations are limited to a certain extent – if we increase α
2but apparently blue fire trucks do exist! (DiGrazia, 2019)
beyond 150 pixels for vertical shifts, we start to generate unrealistic images, as evidenced by a sharp rise in FID and converging modes in the transformed distributions (Fig. 5 columns 2 & 3).
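To make the intersection statistic concrete, the following NumPy sketch computes the shared area of two normalized histograms of an attribute such as the object center X coordinate; the function and the synthetic samples are illustrative stand-ins, not the released evaluation code.

import numpy as np

def histogram_intersection(samples_a, samples_b, bins=50, value_range=(0.0, 1.0)):
    # Normalized histograms of an attribute (e.g. object center X as a fraction
    # of image width) before and after a latent-space shift of magnitude alpha.
    p, edges = np.histogram(samples_a, bins=bins, range=value_range, density=True)
    q, _ = np.histogram(samples_b, bins=bins, range=value_range, density=True)
    bin_width = edges[1] - edges[0]
    # Shared area under the two densities; 1.0 means identical distributions.
    return np.minimum(p, q).sum() * bin_width

# Toy example: centers concentrated at 0.5 versus a distribution shifted toward 0.7.
rng = np.random.default_rng(0)
original = np.clip(rng.normal(0.5, 0.05, size=2000), 0.0, 1.0)
shifted = np.clip(rng.normal(0.7, 0.08, size=2000), 0.0, 1.0)
print(histogram_intersection(original, shifted))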
We perform a similar procedure for zoom, by measuring the area of the bounding box for the detected object under different magnitudes of α. Like shift, we observe that subsequent increases in α magnitude start to have smaller and smaller effects on the mode of the resulting distribution (Fig. 5 last column). Past an 8x zoom in or out, we observe an increase in the FID signifying decreasing image quality. Interestingly for zoom, the FID under zooming in and zooming out is anti-symmetric, indicating that how well we can zoom in and retain realistic images differs from that of zooming out. These trends are consistent with the plateau in transformation behavior that we qualitatively observe in Fig. 3. Although we can arbitrarily increase the α step size, after some point we are unable to achieve further transformation and risk deviating from the natural image manifold.
4.2 HOW DOES THE DATA AFFECT THE TRANSFORMATIONS?
Is the extent to which we can transform each class, as we observed in Fig. 4, due to limited variability in the underlying dataset for each class? One way of quantifying this is to measure the difference in transformed model means, model(+α) and model(-α), and compare it to the spread of the dataset distribution. For each class, we compute standard deviation of the dataset with respect to our statistic of interest (pixel RGB value for color, and bounding box area and center value for zoom and shift transformations respectively). We hypothesize that if the amount of transformation is biased depending on the image class, we will observe a correlation between the distance of the mean shifts and the standard deviation of the data distribution.
More concretely, we define the change in model means under a given transformation as:
∆µk = µk,model(+α∗) − µk,model(-α∗) (6)
for a given class k, where we set α∗ to the largest and smallest α values used in training. The degree to which we achieve each transformation is a function of α, so we use the same α value for all classes – one that is large enough to separate the means of µk,model(+α∗) and µk,model(-α∗) under transformation, but also for which the FID of the generated distribution remains below a threshold T of generating reasonably realistic images (for our experiments we use T = 22).
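As a concrete illustration of Eq. 6, the NumPy sketch below estimates ∆µk from per-class model samples and compares it against the per-class dataset spread σ; the synthetic numbers are placeholders used only to show the shape of the computation.

import numpy as np

rng = np.random.default_rng(0)

def transform_gap(attr_plus, attr_minus):
    # Eq. 6: difference of model means under the +alpha* and -alpha* walks,
    # where the arrays hold a measured attribute (e.g. bounding-box area for zoom).
    return attr_plus.mean() - attr_minus.mean()

# Synthetic stand-ins for per-class measurements: 100 classes, 200 samples each.
n_classes, n_samples = 100, 200
sigma, delta_mu = [], []
for _ in range(n_classes):
    spread = rng.uniform(0.02, 0.3)                    # dataset variability of this class
    dataset_attr = rng.normal(0.5, spread, n_samples)  # attribute over real images
    model_plus = rng.normal(0.5 + 2 * spread, 0.05, n_samples)
    model_minus = rng.normal(0.5 - 2 * spread, 0.05, n_samples)
    sigma.append(dataset_attr.std())
    delta_mu.append(transform_gap(model_plus, model_minus))

print(np.corrcoef(sigma, delta_mu)[0, 1])              # Pearson correlation, as in Table 1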
In Fig. 6 we plot the standard deviation σ of the dataset on the x-axis, and the model ∆µ under a +α∗ and −α∗ transformation on the y-axis, as defined in Eq. 6. We sample randomly from 100 classes for the color, zoom and shift transformations, and generate 200 samples of each class under the positive and negative transformations. We use the same setup of drawing samples from the model and dataset and computing the statistics for each transformation as described in Sec. 4.1.
Indeed, we find that the width of the dataset distribution, captured by the standard deviation of random samples drawn from the dataset for each class, relates to how much we can transform. There is a positive correlation between the spread of the dataset and the magnitude of ∆µ observed in the transformed model distributions, and the slope of all observed trends differs significantly from zero (p < 0.001 for all transformations). For the zoom transformation, we show examples of two extremes along the trend. For the “robin” class the spread σ in the dataset is low, and subsequently, the separation ∆µ that we are able to achieve by applying +α∗ and −α∗ transformations is limited. On the other hand, for “laptops”, the dataset spread is broad; ImageNet contains images of laptops of various sizes, and we are able to attain wider shifts in the model distribution.
From these results, we conclude that the amount of transformation we can achieve relates to the dataset variability. Consistent with our qualitative observations in Fig. 4, we find that if the images for a particular class have adequate coverage over the entire range of a given transformation, then we are better able to move the model distribution to both extremes. On the other hand, if the images for a given class are less diverse, the transformation is limited by this dataset bias.
4.3 ALTERNATIVE ARCHITECTURES AND WALKS
We ran an identical set of experiments using the nonlinear walk in the BigGAN latent space (Eq 2) and obtained similar quantitative results. To summarize, the Pearson’s correlation coefficient between dataset σ and model ∆µ for linear walks and nonlinear walks is shown in Table 1, and full results in Appendix B.4.2. Qualitatively, we observe that while the linear trajectory undershoots the targeted level of transformation, it is able to preserve more realistic-looking results (Fig. 7). The
transformations involve a trade-off between minimizing the loss and maintaining realistic output, and we hypothesize that the linear walk functions as an implicit regularizer that corresponds well with the inherent organization of the latent space.
To test the generality of our findings across model architecture, we ran similar experiments on StyleGAN, in which the latent space is divided into two spaces, z and W. Since Karras et al. (2018) note that the W space is less entangled than z, we apply the linear walk to W and show results in Fig. 8 and Appendix B.5. One interesting aspect of StyleGAN is that we can change color while leaving other structure in the image unchanged. In other words, while green faces do not naturally exist in the dataset, the StyleGAN model is still able to generate them. This differs from the behavior of BigGAN, where changing color results in different semantics in the image, e.g., turning a dormant volcano to an active one. StyleGAN, however, does not preserve the exact geometry of objects under other transformations, e.g., zoom and shift (see Appendix B.5).
4.4 TOWARDS STEERABLE GANS
So far, we have frozen the parameters of the generative model when learning a latent space walk for image editing, and observe that the transformations are limited by dataset bias. Here we investigate approaches to overcome these limitations and increase model steerability. For these experiments, we use a class-conditional DCGAN model (Radford et al., 2015) trained on MNIST digits (LeCun, 1998).
To study the effect of dataset biases, we train (1) a vanilla DCGAN and (2) a DCGAN with data augmentation, and then learn the optimal walk in Eq. 1 after the model has been trained – we refer to these two approaches in Fig. 9 as argmin W and argmin W + aug, respectively. We observe that adding data augmentation yields transformations that better approximate the target image and
attain lower L2 error than the vanilla DCGAN (blue and orange curves in Fig. 9). Qualitatively, we observe that transformations using the vanilla GAN (argmin W) become patchy and unrealistic as we increase the magnitude of α, but when the model is trained with data augmentation (argmin W + aug), the digits retain their structural integrity.
Rather than learning the walk vector w assuming a frozen generator, we may also jointly optimize the model and linear walk parameter together, as we formalized in Eq. 3. This allows the model to learn an equivariance between linear directions in the latent space and the corresponding image transformations. We refer to this model as argmin G,W in Fig. 9. Compared to the frozen generator (in argmin W and argmin W + aug), the joint objective further decreases L2 error (green curve in Fig. 9). We show additional qualitative examples in Appendix B.8. The steerable range of the generator increases with joint optimization and data augmentation, which provides additional evidence that training data bias impacts the models’ steerability and generalization capacity. We tried DCGAN on CIFAR10 as a more complicated dataset; however, we were unable to get steering to be effective – all three methods failed to produce realistic transformations and joint training in fact performed the worst. Finding the right steering implementation per GAN and dataset, especially for joint training, may be a difficult problem and an interesting direction for future work.
[Figure 9 plot area: panels including Rotate 2D, Zoom, and Shift X, showing L2 error versus α (log α for zoom), with curves for argmin W, argmin W + aug, and argmin G,W.]
Figure 9: Reducing the effect of transformation limits. Using a DCGAN model on MNIST digits, we compare the L2 reconstruction errors on latent space walks for models trained with vanilla GANs without (argmin W) and with data augmentation (argmin W + aug). We also compare to jointly optimizing the generator and the walk parameters with data augmentation (argmin G,W), which achieves the lowest L2 error.
5 CONCLUSION
GANs are powerful generative models, but are they simply replicating the existing training datapoints, or can they generalize beyond the training distribution? We investigate this question by exploring walks in the latent space of GANs. We optimize trajectories in latent space to reflect simple image transformations in the generated output, learned in a self-supervised manner. We find that the model is able to exhibit characteristics of extrapolation – we are able to “steer” the generated output to simulate camera zoom, horizontal and vertical movement, camera rotations, and recolorization. However, our ability to naively move the distribution is finite: we can transform images to some degree but cannot extrapolate entirely outside the support of the training data. To increase model steerability, we add data augmentation during training and jointly optimize the model and walk trajectory. Our experiments illustrate the connection between training data bias and the resulting distribution of generated images, and suggest methods for extending the range of images that the models are able to create.
ACKNOWLEDGEMENTS
We would like to thank Quang H Le, Lore Goetschalckx, Alex Andonian, David Bau, and Jonas Wulff for helpful discussions. This work was supported by a Google Faculty Research Award to P.I., and a U.S. National Science Foundation Graduate Research Fellowship to L.C.
A METHOD DETAILS
A.1 OPTIMIZATION FOR THE LINEAR WALK
We learn the walk vector using mini-batch stochastic gradient descent with the Adam optimizer (Kingma & Ba, 2014) in tensorflow, trained on 20000 unique samples from the latent space z. We share the vector w across all ImageNet categories for the BigGAN model.
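For concreteness, a minimal PyTorch-style sketch of this optimization loop is shown below; the paper's implementation is in TensorFlow, and the generator G and the edit function here are assumed placeholders (for the class-conditional BigGAN, a class embedding would also be passed to G).

import torch

def train_linear_walk(G, edit, latent_dim=128, steps=20000, batch=8, lr=1e-3, device='cuda'):
    # Single walk vector shared across all ImageNet categories.
    w = torch.zeros(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        z = torch.randn(batch, latent_dim, device=device)
        alpha = torch.empty(batch, 1, device=device).uniform_(-1.0, 1.0)
        with torch.no_grad():
            target = edit(G(z), alpha)           # edited source image (Eq. 1)
        out = G(z + alpha * w)                   # image after the latent step
        loss = torch.mean((out - target) ** 2)   # L2 objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()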
A.2 IMPLEMENTATION DETAILS FOR LINEAR WALK
We experiment with a number of different transformations learned in the latent space, each corresponding to a different walk vector. Each of these transformations can be learned without any direct supervision, simply by applying our desired edit to the source image. Furthermore, the parameter α allows us to vary the extent of the transformation. We found that a slight modification to each transformation improved the degree to which we were able to steer the output space: we scale α differently for the learned transformation G(z + αgw), and the target edit edit(G(z), αt). We detail each transformation below:
Shift. We learn transformations corresponding to shifting an image in the horizontal X direction and the vertical Y direction. We train on source images that are shifted −αt pixels to the left and αt pixels to the right, where we set αt to be between zero and one-half of the source image width or height D. When training the walk, we enforce that the αg parameter ranges between -1 and 1; thus for a random shift by t pixels, we use the value αg = αt/D. We apply a mask to the shifted image, so that we only apply the loss function on the visible portion of the source image. This forces the generator to extrapolate on the obscured region of the target image.
Zoom. We learn a walk which is optimized to zoom in and out up to four times the original image. For zooming in, we crop the central portion of the source image by some αt amount, where 0.25 < αt < 1 and resize it back to its original size. To zoom out, we downsample the image by αt where 1 < αt < 4. To allow for both a positive and negative walk direction, we set αg = log(αt). Similar to shift, a mask applied during training allows the generator to inpaint the background scene.
Color. We implement color as a continuous RGB slider, e.g., a 3-tuple αt = (αR, αG, αB), where each αR, αG, αB can take values between [−0.5, 0.5] in training. To edit the source image, we simply add the corresponding αt values to each of the image channels. Our latent space walk is parameterized as z + αgw = z + αRwR + αGwG + αBwB where we jointly learn the three walk directions wR, wG, and wB .
Rotate in 2D. Rotation in 2D is trained in a similar manner as the shift operations, where we train with −45 ≤ αt ≤ 45 degree rotation. Using R = 45, we scale αg = αt/R. We use a mask to enforce the loss only on visible regions of the target.
Rotate in 3D. We simulate a 3D rotation using a perspective transformation along the Z-axis, essentially treating the image as a rotating billboard. Similar to the 2D rotation, we train with −45 ≤ αt ≤ 45 degree rotation, we scale αg = αt/R where R = 45, and apply a mask during training.
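The edit(·) operations above amount to simple image warps plus a visibility mask; the NumPy sketch below shows the horizontal-shift and zoom edits with the masks used to restrict the loss to visible pixels. Names and the nearest-neighbor resize are illustrative choices, not the paper's exact implementation.

import numpy as np

def resize_nn(img, out_h, out_w):
    # Nearest-neighbor resize, sufficient for illustration.
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def edit_shift_x(img, t):
    # img: (H, W, C) array in [0, 1]; t: shift in pixels (positive = right).
    out = np.zeros_like(img)
    mask = np.zeros(img.shape[:2], dtype=bool)
    if t >= 0:
        out[:, t:] = img[:, :img.shape[1] - t]
        mask[:, t:] = True
    else:
        out[:, :t] = img[:, -t:]
        mask[:, :t] = True
    return out, mask                     # loss is applied only where mask is True

def edit_zoom(img, alpha_t):
    # alpha_t < 1 crops the center (zoom in); alpha_t > 1 shrinks the image (zoom out).
    h, w, _ = img.shape
    if alpha_t <= 1:
        ch, cw = int(h * alpha_t), int(w * alpha_t)
        top, left = (h - ch) // 2, (w - cw) // 2
        out = resize_nn(img[top:top + ch, left:left + cw], h, w)
        mask = np.ones((h, w), dtype=bool)
    else:
        sh, sw = int(h / alpha_t), int(w / alpha_t)
        out = np.zeros_like(img)
        mask = np.zeros((h, w), dtype=bool)
        top, left = (h - sh) // 2, (w - sw) // 2
        out[top:top + sh, left:left + sw] = resize_nn(img, sh, sw)
        mask[top:top + sh, left:left + sw] = True
    return out, mask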
A.3 LINEAR NN(z) WALK
Rather than defining w as a vector in z space (Eq. 1), one could define it as a function that takes a z as input and maps it to the desired z′ after taking a variable-sized step α in latent space. In this case, we may parametrize the walk with a neural network w = NN(z), and transform the image using G(z + αNN(z)). However, as we show in the following proof, this idea will not learn to let w be a function of z.
Proof. For simplicity, let w = F (z). We optimize for J(w,α) = Ez [L(G(z + αw),edit(G(z), α))] where α is an arbitrary scalar value. Note that for the target image, two equal edit operations are equivalent to performing a single edit of twice the size (e.g., shifting by 10px is the same as shifting by 5px twice; zooming by 4x is the same as zooming by 2x twice). That is,
edit(G(z), 2α) = edit(edit(G(z), α), α).
To achieve this target, starting from an initial z, we can take two steps of size α in latent space as follows:
z1 = z + αF (z)
z2 = z1 + αF (z1)
However, because we let α take on any scalar value during optimization, our objective function enforces that starting from z and taking a step of size 2α equals taking two steps of size α:
z + 2αF (z) = z1 + αF (z1) (7)
Therefore:
z + 2αF (z) = z + αF (z) + αF (z1) ⇒ αF (z) = αF (z1) ⇒ F (z) = F (z1).
Thus F (·) simply becomes a linear trajectory that is independent of the input z.
A.4 OPTIMIZATION FOR THE NON-LINEAR WALK
Given the limitations of the previous walk, we define our nonlinear walk F(z) using discrete step sizes ε. We define F(z) as z + NN(z), where the neural network NN learns a fixed ε-step transformation, rather than a variable α step. We then renormalize the magnitude of z. This approach mimics the Euler method for solving ODEs with a discrete step size, where we assume that the gradient of the transformation in latent space is of the form dz/dt = NN(z) and we approximate zi+1 = zi + (dz/dt)|zi. The key difference from A.3 is the fixed step size, which avoids optimizing for the equality in (7).
We use a two-layer neural network to parametrize the walk, and optimize over 20000 samples using the Adam optimizer as before. Positive and negative transformation directions are handled with two neural networks having identical architecture but independent weights. We set ε to achieve the same transformation ranges as the linear trajectory within 4-5 steps.
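A minimal PyTorch sketch of this recursive formulation is given below; the architecture and names are illustrative, and in practice one such network is trained for the positive direction and a second, identical one for the negative direction.

import torch
import torch.nn as nn

class EpsilonStep(nn.Module):
    # Learns a fixed epsilon-step in latent space: F(z) = renormalize(z + NN(z)).
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim))

    def forward(self, z):
        z_next = z + self.net(z)
        # Renormalize so the walk stays at the typical radius of the prior.
        scale = z.norm(dim=1, keepdim=True) / z_next.norm(dim=1, keepdim=True)
        return z_next * scale

def walk(step_fn, z, n):
    # Apply the epsilon-step n times: the Euler-style composition f(f(...f(z))).
    for _ in range(n):
        z = step_fn(z)
    return z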
B ADDITIONAL EXPERIMENTS
B.1 MODEL AND DATA DISTRIBUTIONS
How well does the model distribution of each property match the dataset distribution? If the generated images do not form a good approximation of the dataset variability, we expect that this would also impact our ability to transform generated images. In Fig. 10 we show the attribute distributions of the BigGAN model G(z) compared to samples from the ImageNet dataset. We show corresponding results for StyleGAN and its respective datasets in Appendix B.5. While there is some bias in how well model-generated images approximate the dataset distribution, we hypothesize that additional biases in our transformations come from variability in the training data.
B.2 QUANTIFYING TRANSFORMATION LIMITS
We observe that when we increase the transformation magnitude α in latent space, the generated images become unrealistic and the transformation ceases to have further effect. We show this qualitatively in Fig. 3. To quantitatively verify these trends, we can compute the LPIPS perceptual distance of images generated using consecutive pairs of αi and αi+1. For shift and zoom transformations, perceptual distance is larger when α (or log(α) for zoom) is near zero, and decreases as the magnitude of α increases, which indicates that large α magnitudes have a smaller transformation effect, and the transformed images appear more similar. On the other hand, color and rotate in 2D/3D exhibit a steady transformation rate as the magnitude of α increases.
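A sketch of this measurement using the publicly available lpips package is shown below; the generator call, the walk vector, and the α schedule are placeholders for whichever transformation is being probed, and images are assumed to be scaled to [-1, 1] as the metric expects.

import torch
import lpips  # pip install lpips

def consecutive_perceptual_distance(G, z, w, alphas, device='cuda'):
    # LPIPS distance between images generated at consecutive alpha values;
    # a plateau at large |alpha| indicates the transformation has saturated.
    metric = lpips.LPIPS(net='alex').to(device)
    distances = []
    with torch.no_grad():
        images = [G(z + a * w) for a in alphas]
        for prev, nxt in zip(images[:-1], images[1:]):
            distances.append(metric(prev, nxt).mean().item())
    return distances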
Note that this analysis does not tell us how well we achieve the specific transformation, nor whether the latent trajectory deviates from natural-looking images. Rather, it tells us how much we manage to change the image, regardless of the transformation target. To quantify how well each transformation is achieved, we rely on attribute detectors such as object bounding boxes (see B.3).
B.3 DETECTED BOUNDING BOXES
To quantify the degree to which we are able to achieve the zoom and shift transformations, we rely on a pre-trained MobileNet-SSD v1 object detection model (see footnote 3). In Figs. 12 and 13 we show the results of applying the object detection model to images from the dataset, and images generated by the model under the zoom, horizontal shift, and vertical shift transformations for randomly selected values of α, to qualitatively verify that the object detection boundaries are reasonable. Not all ImageNet images contain recognizable objects, so we only use ImageNet classes containing objects recognizable by the detector for this analysis.
B.4 ALTERNATIVE WALKS IN BIGGAN
B.4.1 LPIPS OBJECTIVE
In the main text, we learn the latent space walk w by minimizing the objective function:
J(w,α) = Ez [L(G(z + αw),edit(G(z), α))] , (8)
using a Euclidean loss for L. In Fig. 14 we show qualitative results using the LPIPS perceptual similarity metric (Zhang et al., 2018) instead of Euclidean loss. Walks were trained using the same parameters as those in the linear-L2 walk shown in the main text: we use 20k samples for training, with Adam optimizer and learning rate 0.001 for zoom and color, 0.0001 for the remaining edit operations (due to scaling of α).
B.4.2 NON-LINEAR WALKS
Following the nonlinear walk formulation in Appendix A.4, we modify our objective to use discrete ε step sizes rather than continuous steps. We learn a function F(z) to perform this ε-step transformation on a given latent code z, where F(z) is parametrized with a neural network. We show qualitative results in Fig. 15. We perform the same set of experiments shown in the main text using this nonlinear walk in Fig. 16. These experiments
3https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API
exhibit similar trends as we observed in the main text – we are able to modify the generated distribution of images using latent space walks, and the amount to which we can transform is related to the variability in the dataset. However, there are greater increases in FID when we apply the non-linear transformation, suggesting that these generated images deviate more from natural images and look less realistic.
B.4.3 ADDITIONAL QUALITATIVE EXAMPLES
We show qualitative examples for randomly generated categories for BigGAN linear-L2, linear LPIPS, and nonlinear trajectories in Figs. 17, 18, 19 respectively.
B.5 WALKS IN STYLEGAN
We perform similar experiments for linear latent space walks using StyleGAN models trained on the LSUN cat, LSUN car, and FFHQ face datasets. As suggested by Karras et al. (2018), we learn the walk vector in the intermediate W latent space due to improved attribute disentanglement in W. We show qualitative results for color, shift, and zoom transformations in Figs. 20, 22, 24 and corresponding quantitative analyses in Figs. 21, 23, 25. We show qualitative examples for the comparison of optimizing in the W and z latent spaces in StyleGAN in Fig. 28.
B.6 WALKS IN PROGRESSIVE GAN
We also experiment with the linear walk objective in the latent space of Progressive GAN (Karras et al., 2017). One interesting property of the Progressive GAN interpolations is that they take much longer to train before having a visual effect – for example, for color, we could obtain drastic color changes in the StyleGAN W latent space using as few as 2k samples, but with Progressive GAN, we used 60k samples and still did not obtain as strong an effect. This points to the StyleGAN W latent space being more “flexible” and generalizable for transformation, compared to the latent space of Progressive GAN. Moreover, we qualitatively observe some entanglement in the Progressive GAN transformations – for example, changing the level of zoom also changes the lighting. We did not observe large effects in the horizontal and vertical shift transformations. Qualitative examples and quantitative results are shown in Figs. 26, 27.
B.7 QUALITATIVE EXAMPLES FOR ADDITIONAL TRANSFORMATIONS
Since the color transformation operates on individual pixels, we can optimize the walk using a segmented target – for example, when learning a walk for cars, we only modify pixels in the segmented car region when generating edit(G(z), α). StyleGAN is able to roughly localize the color transformation to this region, suggesting disentanglement of different objects within the W latent space (Fig. 29 left) as also noted in Karras et al. (2018); Shen et al. (2019). We also show qualitative results for adjusting image contrast (Fig. 29 right), and for combining zoom, shift X, and shift Y transformations (Fig. 30).
B.8 ADDITIONAL RESULTS FOR IMPROVING MODEL STEERABILITY
We further test the hypothesis that dataset variability impacts the amount we are able to transform by comparing DCGAN models trained with and without data augmentation. Namely, with data augmentation, the discriminator is able to see edited versions of the real images. We also jointly train the model and the walk trajectory which encourages the model to learn linear walks. For zoom, horizontal shift, and 2D rotate transformations, additional samples for three training approaches – without data augmentation, with data augmentation, and joint optimization – appear in Fig. 31-33. Qualitatively, transformations using the model trained without data augmentation degrade the digit structure as α magnitude increases, and may even change one digit to another. Training with data augmentation and joint optimization better preserves digit structure and identity.
[Figure residue: panel titles and axis labels for Luminance, Shift X, Shift Y, Zoom, Rotate 2D, and Rotate 3D plots of perceptual distance versus α.] | 1. What is the main contribution of the paper regarding generative adversarial networks (GANs)?
2. How does the paper explore the natural image manifold captured by GANs?
3. What are the limitations of the GANs regarding transforming images, and how can they be addressed?
4. What are the strengths of the paper, particularly in its experimental results and figures?
5. Do you have any concerns or questions regarding the paper's content? | Review | Review
This work explores the extent to which the natural image manifold is captured by generative adversarial networks (GANs) by performing walks in the latent space of pretrained models. To perform these walks, a transformation vector is learned by minimizing the distance between transformed images and the corresponding images generated from transformed latent vectors. It is found that when traversing the latent space of the GAN along the direction of the transformation vector, the corresponding generated images initially exhibit the desired transform (such as zooming or changing X position), but soon reach a limit where further changes in the latent vector do not result in changes to the image. It is observed that this behaviour is likely due to bias in the dataset which the GAN is trained on, and that by exploring the limits of the generator, biases which exist in the original dataset can be revealed. In order to increase the extent to which images can be transformed, it is shown that GANs can be trained with an augmented dataset and a loss function that encourages transformations to lie along linear paths.
Overall, I would tend towards accepting this paper. Improving the amount of control that we have over generative models is desirable for image synthesis, and this paper does a great job of demonstrating the extent to which these models can be manipulated in terms of mimicking basic transforms. Figures are very clean and informative, and experimental results are extensive. I don't have much else to say about this paper, as I did not find anything in it that concerned me, and the paper answered all of my questions. |
ICLR | Title
On the "steerability" of generative adversarial networks
Abstract
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to biased training data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise – these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by “steering” in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution. Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. Code is released on our project page: https://ali-design.github.io/gan_steerability/.
1 INTRODUCTION
The quality of deep generative models has increased dramatically over the past few years. When introduced in 2014, Generative Adversarial Networks (GANs) could only synthesize MNIST digits and low-resolution grayscale faces (Goodfellow et al., 2014). The most recent models, however, produce diverse high-resolution images that are often indistinguishable from natural photos (Brock et al., 2018; Karras et al., 2018).
Science fiction has long dreamed of virtual realities filled with synthetic content as rich as, or richer than, the real world (e.g., The Matrix, Ready Player One). How close are we to this dream? Traditional computer graphics can render photorealistic 3D scenes, but cannot automatically generate detailed content. Generative models like GANs, in contrast, can create content from scratch, but we do not currently have tools for navigating the generated scenes in the same kind of way as you can walk through and interact with a 3D game engine.
In this paper, we explore the degree to which you can navigate the visual world of a GAN. Figure 1 illustrates the kinds of transformations we explore. Consider the dog at the top-left. By moving in some direction of GAN latent space, can we hallucinate walking toward this dog? As the figure indicates, and as we will show in this paper, the answer is yes. However, as we continue to zoom in, we quickly reach limits. Once the dog face fills the full frame, continuing to walk in this direction fails to increase the zoom. A similar effect occurs in the daisy example (row 2 of Fig. 1), where a direction in latent space moves the daisy up and down, but cannot move it out of frame.
We hypothesize that these limits are due to biases in the distribution of images on which the GAN is trained. For example, if the training dataset consists of centered dogs and daises, the same may be the case in GAN-generated images. Nonetheless, we find that some degree of transformation is possible. When and why can we achieve certain transformations but not others?
This paper seeks to quantify the degree to which we can achieve basic visual transformations by navigating in GAN latent space. In other words, are GANs “steerable” in latent space?1 We analyze the relationship between the data distribution on which the model is trained and the success in achieving these transformations. From our experiments, it is possible to shift the distribution of generated images to some degree, but we cannot extrapolate entirely out of the dataset’s support. In particular, attributes can be shifted in proportion to the variability of that attribute in the training data. We further demonstrate an approach to increase model steerability by jointly optimizing the generator and latent direction, together with data augmentation on training images. One of the current criticisms of generative models is that they simply interpolate between datapoints, and fail to generate anything truly new, but our results add nuance to this story. It is possible to achieve distributional shift, but the ability to create realistic images from a modified distribution relies on sufficient diversity in the dataset along the dimension that we vary.
Our main findings are:
• A simple walk in the latent space of GANs achieves camera motion and color transformations in the output image space. These walks are learned in self-supervised manner without labeled attributes or distinct source and target images.
• The linear walk is as effective as more complex non-linear walks, suggesting that the models learn to roughly linearize these operations without being explicitly trained to do so.
• The extent of each transformation is limited, and we quantify a relationship between dataset variability and how much we can shift the model distribution.
• The transformations are a general-purpose framework that work with different model architectures, e.g. BigGAN, StyleGAN, and DCGAN, and illustrate different disentanglement properties in their respective latent spaces.
• Data augmentation improves steerability, as does jointly training the walk trajectory and the generator weights, which allows us to achieve larger transformation effects.
2 RELATED WORK
Latent space manipulations can be seen from several perspectives – how we achieve it, what limits it, and what it enables us to do. Our work addresses these three aspects together, and we briefly refer to each one in related work.
Interpolations in latent space Traditional approaches to image editing with GAN latent spaces find linear directions that correspond to changes in labeled attributes, such as smile-vectors and gender-vectors for faces (Radford et al., 2015; Karras et al., 2018). However these manipulations are not exclusive to GANs; in flow-based generative models, linearly interpolating between two encoded images allow one to edit a source image toward attributes of the target (Kingma & Dhariwal, 2018). Möllenhoff & Cremers (2019) proposes a modified GAN formulation by treating data
1We use the term “steerable” in analogy to the classic steerable filters of Freeman & Adelson (1991).
as directional k-currents, where moving along tangent planes naturally corresponds to interpretable manipulations. Upchurch et al. (2017) removes the generative model entirely and instead interpolates in the intermediate feature space of a pretrained classifier, again using feature mappings of source and target sets to determine an edit direction. Unlike these approaches, we learn our latent-space trajectories in a self-supervised manner without labeled attributes or distinct source and target images. Instead, we learn to approximate editing operations on individual source images. We find that linear trajectories in latent space can capture simple image manipulations, e.g., zoom-vectors and shift-vectors, although we also obtain similar results using nonlinear trajectories.
Dataset bias Biases from training data and network architecture both impact the generalization capacity of learned models (Torralba & Efros, 2011; Geirhos et al., 2018; Amini et al.). Dataset biases partly come from human preferences in taking photos: we tend to take pictures in specific “canonical” views that are not fully representative of the entire visual world (Mezuman & Weiss, 2012; Jahanian et al., 2015). Consequently, models trained with these datasets inherit their biases. This may result in models that misrepresent the given task – such as tendencies towards texture bias rather than shape bias on ImageNet classifiers (Geirhos et al., 2018) – and in turn limits their generalization performance on similar objectives (Azulay & Weiss, 2018). Our latent space trajectories transform the output corresponding to various image editing operations, but ultimately we are constrained by biases in the data and cannot extrapolate arbitrarily far beyond the data’s support.
Generative models for content creation The recent progress in generative models has opened interesting avenues for content creation (Brock et al., 2018; Karras et al., 2018), including applications that enable users to fine-tune the generated output (Simon; Zhu et al., 2016; Bau et al., 2018). A by-product of the current work is enabling users to modify image properties by turning a single knob – the magnitude of the learned transformation in latent space. We further demonstrate that these image manipulations are not just a simple creativity tool; they also provide us with a window into biases and generalization capacity of these models.
Applications of latent space editing Image manipulations using generative models suggest several interesting downstream applications. For example, Denton et al. (2019) learns linear walks corresponding to various facial characteristics – they use these to measure biases in facial attribute detectors, whereas we study biases in the generative model that originate from training data. Shen et al. (2019) also assumes linear latent space trajectories and learns paths for face attribute editing according to semantic concepts such as age and expression, thus demonstrating disentanglement of the latent space. White (2016) suggests approaches to improve the learned manipulations, such as using spherical linear interpolations, resampling images to remove biases in attribute vectors, and using data augmentation as a synthetic attribute for variational autoencoders. Goetschalckx et al. (2019) applies a linear walk to achieve transformations corresponding to cognitive properties of an image such as memorability, aesthetics, and emotional valence. Unlike these works, we do not require an attribute detector or assessor function to learn the latent space trajectory, and therefore our loss function is based on image similarity between source and target images. In addition to linear walks, we explore using non-linear walks parametrized by neural networks for editing operations.
3 METHOD
Generative models such as GANs (Goodfellow et al., 2014) learn a mapping function G such that G : z → x. Here, z is the latent code drawn from a Gaussian density and x is an output, e.g., an image. Our goal is to achieve transformations in the output space by moving in latent space, as shown in Fig. 2. In general, this goal also captures the idea in equivariance, in which transformations in the input space result in equivalent transformations in the output space (c.f. Hinton et al. (2011); Cohen et al. (2019); Lenc & Vedaldi (2015)).
Objective We want to learn an N-dimensional vector representing the optimal path in latent space for a given transformation. The vector is multiplied by a continuous parameter α which signifies the step size: large α values correspond to a greater degree of transformation, while small α values correspond to a lesser degree. Formally, we learn the walk w by minimizing the objective function:
w∗ = arg min w Ez,α[L(G(z+αw),edit(G(z), α))]. (1)
Here, L measures the distance between the generated image after taking an α-step in the latent direction G(z + αw) and the target edit(G(z), α) derived from the source image G(z). We use L2 loss as our objective L; however, we also obtain similar results when using the LPIPS perceptual image similarity metric (Zhang et al., 2018) (see Appendix B.4.1). Note that we can learn this walk in a fully self-supervised manner – we perform the edit(·) operation on an arbitrary generated image and subsequently optimize the walk vector to minimize the objective. Let model(α) denote the optimized transformation vector w∗ with the step size α, defined as model(α) = G(z + αw∗).
The previous setup assumes linear latent space walks, but we can also learn non-linear trajectories in which the walk direction depends on the current latent space position. For the non-linear walk, we learn a function, f∗(z), which corresponds to a small ε-step transformation edit(G(z), ε). To achieve bigger transformations, we apply f recursively, mimicking discrete Euler ODE approximations. Formally, for a fixed ε, we minimize
L = Ez,n[||G(fn(z)) − edit(G(z), nε)||], (2)
where fn(·) is an nth-order function composition f(f(f(...))), and f(z) is parametrized with a neural network. We discuss further implementation details in Appendix A.4. We use this function composition approach rather than the simpler setup of G(z + αNN(z)) because the latter learns to ignore the input z when α takes on continuous values, and is thus equivalent to the previous linear trajectory (see Appendix A.3 for further details).
Quantifying Steerability We further seek to quantify how well we can achieve desired image manipulations under each transformation. To this end, we compare the distribution of a given attribute, e.g., “luminance”, in the dataset versus in images generated after walking in latent space.
For color transformations, we consider the effect of increasing or decreasing the α coefficient corresponding to each color channel. To estimate the color distribution of model-generated images, we randomly sample N = 100 pixels per image both before and after taking a step in latent space. Then, we compute the pixel value for each channel, or the mean RGB value for luminance, and normalize the range between 0 and 1.
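A small NumPy sketch of this pixel-sampling procedure is given below; array names and shapes are illustrative.

import numpy as np

def sample_channel_stats(images, n_pixels=100, seed=0):
    # images: (B, H, W, 3) uint8 array of generated or dataset images.
    rng = np.random.default_rng(seed)
    b, h, w, _ = images.shape
    rows = rng.integers(0, h, size=(b, n_pixels))
    cols = rng.integers(0, w, size=(b, n_pixels))
    pixels = images[np.arange(b)[:, None], rows, cols] / 255.0   # (B, N, 3) in [0, 1]
    luminance = pixels.mean(axis=-1)                             # mean RGB value per pixel
    return pixels, luminance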
For zoom and shift transformations, we rely on an object detector which captures the central object in the image class. We use a MobileNet-SSD v1 (Liu et al., 2016) detector to estimate object bounding boxes, and average over image classes recognizable by the detector. For each successful detection, we take the highest probability bounding box corresponding to the desired class and use that to quantify the amount of transformation. For the zoom operation, we use the area of the bounding box normalized by the area of the total image. For shift in the X and Y directions, we take the center X and Y coordinates of the bounding box, and normalize by image width or height.
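Reducing a detection to the zoom and shift statistics is then straightforward; the helper below assumes a bounding box in pixel coordinates from the highest-probability detection of the desired class.

def box_statistics(box, image_width, image_height):
    # box: (xmin, ymin, xmax, ymax) in pixels.
    xmin, ymin, xmax, ymax = box
    area_fraction = ((xmax - xmin) * (ymax - ymin)) / (image_width * image_height)  # zoom statistic
    center_x = 0.5 * (xmin + xmax) / image_width                                    # shift X statistic
    center_y = 0.5 * (ymin + ymax) / image_height                                   # shift Y statistic
    return area_fraction, center_x, center_y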
Truncation parameters in GANs (as used in Brock et al. (2018); Karras et al. (2018)) trade off between the diversity of the generated images and sample quality. When comparing generated images to the dataset distribution, we use the largest possible truncation for the model and perform similar cropping and resizing of the dataset as done during model training (see Brock et al. (2018)). When comparing the attributes of generated distributions under different α magnitudes to each other but not to the dataset, we reduce truncation to 0.5 to ensure better performance of the object detector.
Reducing Transformation Limits Equations 1 and 2 learn a latent space walk assuming a pretrained generative model, thus keeping the model weights fixed. The previous approach allows us
to understand the latent space organization and limitations in the model’s transformation capacity. To overcome these limits, we explore adding data augmentation by editing the training images with each corresponding transformation, and train the generative model with this augmented dataset. We also introduce a modified objective function that jointly optimizes the generator weights and a linear walk vector:
G∗, w∗ = arg min G,w (Ledit + LGAN ) , (3)
where the edit loss encourages low L2 error between learned transformation and target image:
Ledit = L2 (G(z+αw)− edit(G(z), α)) . (4)
The GAN loss optimizes for discriminator error:
LGAN = max D (Ez,α[D(G(z+αw))]− Ex,α[D(edit(x, α))]) , (5)
where we draw images x from the training dataset and perform data augmentation by applying the edit operation on them. This optimization approach encourages the generator to organize its latent space so that the transformations lie along linear paths, and when combined with data augmentation, results in larger transformation ranges which we demonstrate in Sec. 4.4
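A condensed PyTorch-style sketch of one joint training step is shown below, following the signs of Eqs. 3-5 as written; G, D, and edit are placeholders, opt_g is assumed to optimize both the generator parameters and the walk vector w, and the target in the edit loss is treated as fixed for the step, which is a simplifying choice rather than the paper's exact procedure.

import torch

def joint_step(G, D, edit, w, opt_g, opt_d, real_images, latent_dim=128, device='cuda'):
    batch = real_images.size(0)
    z = torch.randn(batch, latent_dim, device=device)
    alpha = torch.empty(batch, 1, device=device).uniform_(-1.0, 1.0)

    # Discriminator step: ascend the inner term of Eq. 5 on transformed
    # generations versus augmented (edited) real images.
    fake = G(z + alpha * w).detach()
    real_aug = edit(real_images, alpha)
    d_obj = D(fake).mean() - D(real_aug).mean()
    opt_d.zero_grad()
    (-d_obj).backward()
    opt_d.step()

    # Generator / walk step: descend L_edit + L_GAN (Eqs. 3 and 4); the term on
    # real images is constant with respect to G and w and is dropped.
    out = G(z + alpha * w)
    target = edit(G(z), alpha).detach()
    edit_loss = torch.mean((out - target) ** 2)
    g_loss = edit_loss + D(out).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return edit_loss.item(), d_obj.item()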
4 EXPERIMENTS
We demonstrate our approach using BigGAN (Brock et al., 2018), a class-conditional GAN trained on 1000 ImageNet categories. We learn a shared latent space walk by averaging across the image categories, and further quantify how this walk affects each class differently. We focus on linear walks in latent space for the main text, and show additional results on nonlinear walks in Sec. 4.3 and Appendix B.4.2. We also conduct experiments on StyleGAN (Karras et al., 2018), which uses an unconditional style-based generator architecture in Sec. 4.3 and Appendix B.5.
4.1 WHAT IMAGE TRANSFORMATIONS CAN WE ACHIEVE IN LATENT SPACE?
We show qualitative results of the learned transformations in Fig. 1. By steering in the generator latent space, we learn a variety of transformations on a given source image (shown in the center panel of each transformation). Interestingly, several priors come into play when learning these image transformations. When we shift a daisy downwards in the Y direction, the model hallucinates that the sky exists on the top of the image. However, when we shift the daisy up, the model inpaints the remainder of the image with grass. When we alter the brightness of an image, the model transitions between nighttime and daytime. This suggests that the model can extrapolate from the original source image, and still remain consistent with the image context.
However, when we increase the step size of α, we observe that the degree to which we can achieve each transformation is limited. In Fig. 3 we observe two potential failure cases: one in which the image becomes unrealistic, and the other in which the image fails to transform any further. When we try to zoom in on a Persian cat, we observe that the cat no longer increases in size beyond some point, and in fact consistently undershoots the target zoom. On the other hand, when we try to zoom out on the cat, we observe that it begins to fall off the image manifold, and does not become any smaller after some point. Indeed, the perceptual distance (using LPIPS) between images decreases as we push α towards the transformation limits. Similar trends hold with other transformations: we are able to shift a lorikeet up and down to some degree until the transformation yields unrealistic output, and despite adjusting α on the rotation vector, we are unable to rotate a pizza. Are the limitations to these transformations governed by the training dataset? In other words, are our latent space walks limited because in ImageNet photos the cats are mostly centered and taken within a certain size? We seek to investigate and quantify these biases in the next sections.
An intriguing characteristic of the learned trajectory is that the amount it affects the output depends on the image class. In Fig. 4, we investigate the impact of the walk for different image categories under color transformations. By moving in the direction of a redness vector, we are able to successfully recolor a jellyfish, but we are unable to change the color of a goldfinch, which remains yellow with slight changes in background textures. Likewise, increasing brightness changes an erupting volcano to a dormant one, but does not have much effect on Alps, which only transitions between night and day. In the third example, we use our latent walk to turn red sports cars to blue, but it cannot recolor firetrucks. Again, perceptual distance over image samples confirms these qualitative observations: a 2-sample t-test yields t = 20.77, p < 0.001 for jellyfish/goldfinch, t = 8.14, p < 0.001 for volcano/alp, and t = 6.84, p < 0.001 for sports car/fire engine. We hypothesize that the different impact of the shared transformation on separate image classes relates to the variability in the underlying dataset. The overwhelming majority of firetrucks are red2, but sports cars appear in a variety of colors. Therefore, our color transformation is constrained by the dataset biases of individual classes.
With shift, we can move the distribution of the center object by varying α. In the underlying model, the center coordinate of the object is most concentrated at half of the image width and height, but after applying the shift in X and shift in Y transformation, the mode of the transformed distribution varies between 0.3 and 0.7 of the image width/height. To quantify the distribution changes, we compute the area of intersection between the original model distribution and the distribution after applying each transformation and observe that the intersection decreases as we increase or decrease the magnitude of α. However, our transformations are limited to a certain extent – if we increase α
2but apparently blue fire trucks do exist! (DiGrazia, 2019)
beyond 150 pixels for vertical shifts, we start to generate unrealistic images, as evidenced by a sharp rise in FID and converging modes in the transformed distributions (Fig. 5 columns 2 & 3).
We perform a similar procedure for zoom, by measuring the area of the bounding box for the detected object under different magnitudes of α. Like shift, we observe that subsequent increases in α magnitude start to have smaller and smaller effects on the mode of the resulting distribution (Fig. 5 last column). Past an 8x zoom in or out, we observe an increase in the FID signifying decreasing image quality. Interestingly for zoom, the FID under zooming in and zooming out is anti-symmetric, indicating that how well we can zoom in and retain realistic images differs from that of zooming out. These trends are consistent with the plateau in transformation behavior that we qualitatively observe in Fig. 3. Although we can arbitrarily increase the α step size, after some point we are unable to achieve further transformation and risk deviating from the natural image manifold.
4.2 HOW DOES THE DATA AFFECT THE TRANSFORMATIONS?
Is the extent to which we can transform each class, as we observed in Fig. 4, due to limited variability in the underlying dataset for each class? One way of quantifying this is to measure the difference in transformed model means, model(+α) and model(-α), and compare it to the spread of the dataset distribution. For each class, we compute standard deviation of the dataset with respect to our statistic of interest (pixel RGB value for color, and bounding box area and center value for zoom and shift transformations respectively). We hypothesize that if the amount of transformation is biased depending on the image class, we will observe a correlation between the distance of the mean shifts and the standard deviation of the data distribution.
More concretely, we define the change in model means under a given transformation as:
∆µk = µk,model(+α∗) − µk,model(-α∗) (6)
for a given class k, where we set α∗ to the largest and smallest α values used in training. The degree to which we achieve each transformation is a function of α, so we use the same α value for all classes – one that is large enough to separate the means of µk,model(+α∗) and µk,model(-α∗) under transformation, but also for which the FID of the generated distribution remains below a threshold T of generating reasonably realistic images (for our experiments we use T = 22).
In Fig. 6 we plot the standard deviation σ of the dataset on the x-axis, and the model ∆µ under a +α∗ and −α∗ transformation on the y-axis, as defined in Eq. 6. We sample randomly from 100 classes for the color, zoom and shift transformations, and generate 200 samples of each class under the positive and negative transformations. We use the same setup of drawing samples from the model and dataset and computing the statistics for each transformation as described in Sec. 4.1.
Indeed, we find that the width of the dataset distribution, captured by the standard deviation of random samples drawn from the dataset for each class, relates to how much we can transform. There is a positive correlation between the spread of the dataset and the magnitude of ∆µ observed in the transformed model distributions, and the slope of all observed trends differs significantly from zero (p < 0.001 for all transformations). For the zoom transformation, we show examples of two extremes along the trend. For the “robin” class the spread σ in the dataset is low, and subsequently, the separation ∆µ that we are able to achieve by applying +α∗ and −α∗ transformations is limited. On the other hand, for “laptops”, the dataset spread is broad; ImageNet contains images of laptops of various sizes, and we are able to attain wider shifts in the model distribution.
From these results, we conclude that the amount of transformation we can achieve relates to the dataset variability. Consistent with our qualitative observations in Fig. 4, we find that if the images for a particular class have adequate coverage over the entire range of a given transformation, then we are better able to move the model distribution to both extremes. On the other hand, if the images for a given class are less diverse, the transformation is limited by this dataset bias.
4.3 ALTERNATIVE ARCHITECTURES AND WALKS
We ran an identical set of experiments using the nonlinear walk in the BigGAN latent space (Eq 2) and obtained similar quantitative results. To summarize, the Pearson’s correlation coefficient between dataset σ and model ∆µ for linear walks and nonlinear walks is shown in Table 1, and full results in Appendix B.4.2. Qualitatively, we observe that while the linear trajectory undershoots the targeted level of transformation, it is able to preserve more realistic-looking results (Fig. 7). The
transformations involve a trade-off between minimizing the loss and maintaining realistic output, and we hypothesize that the linear walk functions as an implicit regularizer that corresponds well with the inherent organization of the latent space.
To test the generality of our findings across model architecture, we ran similar experiments on StyleGAN, in which the latent space is divided into two spaces, z and W. Since Karras et al. (2018) note that the W space is less entangled than z, we apply the linear walk to W and show results in Fig. 8 and Appendix B.5. One interesting aspect of StyleGAN is that we can change color while leaving other structure in the image unchanged. In other words, while green faces do not naturally exist in the dataset, the StyleGAN model is still able to generate them. This differs from the behavior of BigGAN, where changing color results in different semantics in the image, e.g., turning a dormant volcano to an active one. StyleGAN, however, does not preserve the exact geometry of objects under other transformations, e.g., zoom and shift (see Appendix B.5).
4.4 TOWARDS STEERABLE GANS
So far, we have frozen the parameters of the generative model when learning a latent space walk for image editing, and observe that the transformations are limited by dataset bias. Here we investigate approaches to overcome these limitations and increase model steerability. For these experiments, we use a class-conditional DCGAN model (Radford et al., 2015) trained on MNIST digits (LeCun, 1998).
To study the effect of dataset biases, we train (1) a vanilla DCGAN and (2) a DCGAN with data augmentation, and then learn the optimal walk in Eq. 1 after the model has been trained – we refer to these two approaches in Fig. 9 as argmin W and argmin W + aug, respectively. We observe that adding data augmentation yields transformations that better approximate the target image and
attain lower L2 error than the vanilla DCGAN (blue and orange curves in Fig. 9). Qualitatively, we observe that transformations using the vanilla GAN (argmin W) become patchy and unrealistic as we increase the magnitude of α, but when the model is trained with data augmentation (argmin W + aug), the digits retain their structural integrity.
Rather than learning the walk vector w assuming a frozen generator, we may also jointly optimize the model and linear walk parameter together, as we formalized in Eq. 3. This allows the model to learn an equivariance between linear directions in the latent space and the corresponding image transformations. We refer to this model as argmin G,W in Fig. 9. Compared to the frozen generator (in argmin W and argmin W + aug), the joint objective further decreases L2 error (green curve in Fig. 9). We show additional qualitative examples in Appendix B.8. The steerable range of the generator increases with joint optimization and data augmentation, which provides additional evidence that training data bias impacts the models’ steerability and generalization capacity. We tried DCGAN on CIFAR10 as a more complicated dataset; however, we were unable to get steering to be effective – all three methods failed to produce realistic transformations and joint training in fact performed the worst. Finding the right steering implementation per GAN and dataset, especially for joint training, may be a difficult problem and an interesting direction for future work.
[Figure 9 plot area: panels including Rotate 2D, Zoom, and Shift X, showing L2 error versus α (log α for zoom), with curves for argmin W, argmin W + aug, and argmin G,W.]
Figure 9: Reducing the effect of transformation limits. Using a DCGAN model on MNIST digits, we compare the L2 reconstruction errors on latent space walks for models trained with vanilla GANs without (argmin W) and with data augmentation (argmin W + aug). We also compare to jointly optimizing the generator and the walk parameters with data augmentation (argmin G,W), which achieves the lowest L2 error.
5 CONCLUSION
GANs are powerful generative models, but are they simply replicating the existing training datapoints, or can they generalize beyond the training distribution? We investigate this question by exploring walks in the latent space of GANs. We optimize trajectories in latent space to reflect simple image transformations in the generated output, learned in a self-supervised manner. We find that the model is able to exhibit characteristics of extrapolation – we are able to “steer” the generated output to simulate camera zoom, horizontal and vertical movement, camera rotations, and recolorization. However, our ability to naively move the distribution is finite: we can transform images to some degree but cannot extrapolate entirely outside the support of the training data. To increase model steerability, we add data augmentation during training and jointly optimize the model and walk trajectory. Our experiments illustrate the connection between training data bias and the resulting distribution of generated images, and suggest methods for extending the range of images that the models are able to create.
ACKNOWLEDGEMENTS
We would like to thank Quang H Le, Lore Goetschalckx, Alex Andonian, David Bau, and Jonas Wulff for helpful discussions. This work was supported by a Google Faculty Research Award to P.I., and a U.S. National Science Foundation Graduate Research Fellowship to L.C.
A METHOD DETAILS
A.1 OPTIMIZATION FOR THE LINEAR WALK
We learn the walk vector using mini-batch stochastic gradient descent with the Adam optimizer (Kingma & Ba, 2014) in tensorflow, trained on 20000 unique samples from the latent space z. We share the vector w across all ImageNet categories for the BigGAN model.
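For concreteness, the following is a minimal sketch of this optimization loop. It is written in PyTorch rather than the TensorFlow used for our experiments, and the generator G, the edit function, and all hyperparameters shown here are illustrative stand-ins rather than the released implementation.

```python
import torch

def learn_linear_walk(G, edit, z_dim, n_steps=1000, batch_size=16, lr=1e-3):
    """Learn a walk vector w so that G(z + alpha * w) matches edit(G(z), alpha)."""
    w = torch.zeros(z_dim, requires_grad=True)                  # single walk vector shared across samples
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(n_steps):
        z = torch.randn(batch_size, z_dim)                      # sample latents
        alpha = torch.empty(batch_size, 1).uniform_(-1.0, 1.0)  # transformation magnitudes
        target = edit(G(z), alpha).detach()                     # edited source images (no gradient)
        pred = G(z + alpha * w)                                 # images generated along the walk
        loss = ((pred - target) ** 2).mean()                    # L2 objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

# Toy stand-ins so the sketch runs end to end; a real use would plug in BigGAN/StyleGAN
# and the shift/zoom/color edit functions described in this appendix.
torch.manual_seed(0)
z_dim = 8
lin = torch.randn(z_dim, 12)
G = lambda z: z @ lin                                           # stand-in "generator"
edit = lambda img, a: img + a                                   # stand-in "edit" (brightness-like shift)
w = learn_linear_walk(G, edit, z_dim, n_steps=200)
```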
A.2 IMPLEMENTATION DETAILS FOR LINEAR WALK
We experiment with a number of different transformations learned in the latent space, each corresponding to a different walk vector. Each of these transformations can be learned without any direct supervision, simply by applying our desired edit to the source image. Furthermore, the parameter α allows us to vary the extent of the transformation. We found that a slight modification to each transformation improved the degree to which we were able to steer the output space: we scale α differently for the learned transformation G(z + αgw), and the target edit edit(G(z), αt). We detail each transformation below:
Shift. We learn transformations corresponding to shifting an image in the horizontal X direction and the vertical Y direction. We train on source images that are shifted −αt pixels to the left and αt pixels to the right, where we set αt to be between zero and one-half of the source image width or height D. When training the walk, we enforce that the αg parameter ranges between -1 and 1; thus for a random shift by t pixels, we use the value αg = αt/D. We apply a mask to the shifted image, so that we only apply the loss function on the visible portion of the source image. This forces the generator to extrapolate on the obscured region of the target image.
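As an illustration of the shift edit and its mask, here is a small NumPy sketch of horizontally shifting an image batch and producing the visibility mask that restricts the loss; the function and variable names are our own and only approximate the procedure described above.

```python
import numpy as np

def shift_x_edit(images, t):
    """Shift a batch of HxWxC images by t pixels along x (positive = right).

    Returns the shifted images and a mask that is 1 on pixels copied from the
    source and 0 on the newly exposed region (where the loss is not applied).
    """
    b, h, w, c = images.shape
    shifted = np.zeros_like(images)
    mask = np.zeros((b, h, w, 1), dtype=images.dtype)
    if t > 0:
        shifted[:, :, t:, :] = images[:, :, :w - t, :]
        mask[:, :, t:, :] = 1.0
    elif t < 0:
        shifted[:, :, :w + t, :] = images[:, :, -t:, :]
        mask[:, :, :w + t, :] = 1.0
    else:
        shifted, mask = images.copy(), np.ones_like(mask)
    return shifted, mask

images = np.random.rand(2, 64, 64, 3)
target, mask = shift_x_edit(images, t=10)
alpha_g = 10 / 64.0   # shift normalized by the image width D for the latent walk
```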
Zoom. We learn a walk which is optimized to zoom in and out up to four times the original image. For zooming in, we crop the central portion of the source image by some αt amount, where 0.25 < αt < 1 and resize it back to its original size. To zoom out, we downsample the image by αt where 1 < αt < 4. To allow for both a positive and negative walk direction, we set αg = log(αt). Similar to shift, a mask applied during training allows the generator to inpaint the background scene.
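A corresponding sketch of the zoom edit is below, with αg = log(αt). It assumes a simple nearest-neighbor resize helper so the snippet stays dependency-free; the training-time mask is omitted here, and the exact resizing and padding choices in the actual implementation may differ.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbor resize of an HxWxC image (illustrative helper)."""
    h, w, _ = img.shape
    rows = (np.arange(out_h) * h / out_h).astype(int)
    cols = (np.arange(out_w) * w / out_w).astype(int)
    return img[rows][:, cols]

def zoom_edit(img, alpha_t):
    """Zoom edit: 0.25 < alpha_t < 1 zooms in (central crop, resized back);
    1 < alpha_t < 4 zooms out (downsample, pasted on an empty canvas)."""
    h, w, _ = img.shape
    if alpha_t < 1:
        ch, cw = int(h * alpha_t), int(w * alpha_t)
        top, left = (h - ch) // 2, (w - cw) // 2
        return resize_nn(img[top:top + ch, left:left + cw], h, w)
    sh, sw = int(h / alpha_t), int(w / alpha_t)
    small = resize_nn(img, sh, sw)
    canvas = np.zeros_like(img)
    top, left = (h - sh) // 2, (w - sw) // 2
    canvas[top:top + sh, left:left + sw] = small
    return canvas

img = np.random.rand(64, 64, 3)
alpha_t = 2.0
target = zoom_edit(img, alpha_t)
alpha_g = np.log(alpha_t)          # latent step magnitude for the zoom walk
```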
Color. We implement color as a continuous RGB slider, e.g., a 3-tuple αt = (αR, αG, αB), where each αR, αG, αB can take values between [−0.5, 0.5] in training. To edit the source image, we simply add the corresponding αt values to each of the image channels. Our latent space walk is parameterized as z + αgw = z + αRwR + αGwG + αBwB where we jointly learn the three walk directions wR, wG, and wB .
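A minimal sketch of the color edit and the composite latent step is shown below; the per-channel addition and the variable names are illustrative assumptions rather than our exact code.

```python
import numpy as np

def color_edit(images, alpha_rgb):
    """Add a per-channel offset alpha_rgb = (aR, aG, aB) to a batch of HxWx3 images."""
    return images + np.asarray(alpha_rgb).reshape(1, 1, 1, 3)

def color_walk_step(z, w_r, w_g, w_b, alpha_rgb):
    """Composite latent step z + aR*wR + aG*wG + aB*wB for jointly learned directions."""
    a_r, a_g, a_b = alpha_rgb
    return z + a_r * w_r + a_g * w_g + a_b * w_b

rng = np.random.default_rng(0)
images = rng.random((2, 64, 64, 3))
z, w_r, w_g, w_b = (rng.normal(size=128) for _ in range(4))
alpha = (0.2, -0.1, 0.3)
target = color_edit(images, alpha)
z_new = color_walk_step(z, w_r, w_g, w_b, alpha)
```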
Rotate in 2D. Rotation in 2D is trained in a similar manner as the shift operations, where we train with −45 ≤ αt ≤ 45 degree rotations. Using R = 45, we scale αg = αt/R. We use a mask to enforce the loss only on visible regions of the target.
Rotate in 3D. We simulate a 3D rotation using a perspective transformation along the Z-axis, essentially treating the image as a rotating billboard. Similar to the 2D rotation, we train with −45 ≤ αt ≤ 45 degree rotations, scale αg = αt/R where R = 45, and apply a mask during training.
A.3 LINEAR NN(z) WALK
Rather than defining w as a vector in z space (Eq. 1), one could define it as a function that takes a z as input and maps it to the desired z′ after taking a variable-sized step α in latent space. In this case, we may parametrize the walk with a neural network w = NN(z), and transform the image using G(z + αNN(z)). However, as we show in the following proof, this parametrization will not actually learn a step direction that depends on z.
Proof. For simplicity, let w = F(z). We optimize for J(w, α) = Ez[L(G(z + αw), edit(G(z), α))], where α is an arbitrary scalar value. Note that for the target image, two equal edit operations are equivalent to performing a single edit of twice the size (e.g., shifting by 5px twice is the same as shifting by 10px once; zooming by 2x twice is the same as zooming by 4x once). That is,
edit(G(z), 2α) = edit(edit(G(z), α), α).
To achieve this target, starting from an initial z, we can take two steps of size α in latent space as follows:
z1 = z + αF(z)
z2 = z1 + αF(z1)
However, because we let α take on any scalar value during optimization, our objective function enforces that starting from z and taking a step of size 2α equals taking two steps of size α:
z + 2αF(z) = z1 + αF(z1) (7)
Therefore, z + 2αF(z) = z + αF(z) + αF(z1), which implies αF(z) = αF(z1) and hence F(z) = F(z1).
Thus F(·) simply becomes a linear trajectory that is independent of the input z.
A.4 OPTIMIZATION FOR THE NON-LINEAR WALK
Given the limitations of the previous walk, we define our nonlinear walk F(z) using discrete step sizes ε. We define F(z) as z + εNN(z), where the neural network NN learns a fixed ε-step transformation, rather than a variable α step. We then renormalize the magnitude of z. This approach mimics the Euler method for solving ODEs with a discrete step size, where we assume that the gradient of the transformation in latent space is of the form dz/dt = NN(z) and we approximate zi+1 = zi + ε (dz/dt)|zi. The key difference from A.3 is the fixed step size, which avoids optimizing for the equality in (7).
We use a two-layer neural network to parametrize the walk, and optimize over 20000 samples using the Adam optimizer as before. Positive and negative transformation directions are handled with two neural networks having identical architecture but independent weights. We set ε to achieve the same transformation ranges as the linear trajectory within 4-5 steps.
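Below is a minimal PyTorch sketch of this ε-step walk: a small network predicts a fixed step, the step is added to z, and the result is renormalized to the original latent norm. The architecture, ε value, and the particular renormalization are illustrative assumptions, not the released model.

```python
import torch
import torch.nn as nn

class StepWalk(nn.Module):
    """Fixed-size non-linear step in latent space: F(z) ~ renormalize(z + eps * NN(z))."""

    def __init__(self, z_dim, hidden=64, eps=0.1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))
        self.eps = eps

    def forward(self, z):
        z_next = z + self.eps * self.net(z)
        # Keep the latent magnitude unchanged after the step.
        return z_next * (z.norm(dim=-1, keepdim=True) / z_next.norm(dim=-1, keepdim=True))

def walk(z, f, n_steps):
    """Apply the learned step n_steps times (Euler-style integration)."""
    for _ in range(n_steps):
        z = f(z)
    return z

z = torch.randn(4, 128)
f_pos = StepWalk(z_dim=128)     # separate networks handle the + and - directions
z_out = walk(z, f_pos, n_steps=4)
```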
B ADDITIONAL EXPERIMENTS
B.1 MODEL AND DATA DISTRIBUTIONS
How well does the model distribution of each property match the dataset distribution? If the generated images do not form a good approximation of the dataset variability, we expect that this would also impact our ability to transform generated images. In Fig. 10 we show the attribute distributions of the BigGAN model G(z) compared to samples from the ImageNet dataset. We show corresponding results for StyleGAN and its respective datasets in Appendix B.5. While there is some bias in how well model-generated images approximate the dataset distribution, we hypothesize that additional biases in our transformations come from variability in the training data.
B.2 QUANTIFYING TRANSFORMATION LIMITS
We observe that when we increase the transformation magnitude α in latent space, the generated images become unrealistic and the transformation ceases to have further effect. We show this qualitatively in Fig. 3. To quantitatively verify these trends, we can compute the LPIPS perceptual distance of images generated using consecutive pairs of αi and αi+1. For shift and zoom transformations, perceptual distance is larger when α (or log(α) for zoom) is near zero, and decreases as the magnitude of α increases, which indicates that large α magnitudes have a smaller transformation effect, and the transformed images appear more similar. On the other hand, color and rotate in 2D/3D exhibit a steady transformation rate as the magnitude of α increases.
Note that this analysis does not tell us how well we achieve the specific transformation, nor whether the latent trajectory deviates from natural-looking images. Rather, it tells us how much we manage to change the image, regardless of the transformation target. To quantify how well each transformation is achieved, we rely on attribute detectors such as object bounding boxes (see B.3).
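A sketch of this measurement is given below, assuming the publicly available lpips package for the perceptual metric and a placeholder generator; it is meant only to indicate the computation, not our exact evaluation code.

```python
import torch
import lpips  # pip install lpips; perceptual metric of Zhang et al. (2018)

def consecutive_alpha_distances(G, z, w, alphas):
    """Mean LPIPS distance between images generated at consecutive alpha values.

    G maps latents to (N, 3, H, W) images in [-1, 1]; z is (N, z_dim); w is the
    learned walk vector; alphas is a 1D tensor of increasing step magnitudes.
    """
    metric = lpips.LPIPS(net='alex')
    dists = []
    with torch.no_grad():
        prev = G(z + alphas[0] * w)
        for a in alphas[1:]:
            cur = G(z + a * w)
            dists.append(metric(prev, cur).mean().item())
            prev = cur
    return dists

# Toy usage: a stand-in "generator" that reshapes the latent into an image.
torch.manual_seed(0)
z_dim = 3 * 64 * 64
G = lambda v: torch.tanh(v.view(-1, 3, 64, 64))
z, w = torch.randn(2, z_dim), torch.randn(z_dim)
print(consecutive_alpha_distances(G, z, w, torch.linspace(-1.0, 1.0, steps=5)))
```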
B.3 DETECTED BOUNDING BOXES
To quantify the degree to which we are able to achieve the zoom and shift transformations, we rely on a pre-trained MobileNet-SSD v1³ object detection model. In Fig. 12 and 13 we show the results of applying the object detection model to images from the dataset, and images generated by the model under the zoom, horizontal shift, and vertical shift transformations for randomly selected values of α, to qualitatively verify that the object detection boundaries are reasonable. Not all ImageNet images contain recognizable objects, so we only use ImageNet classes containing objects recognizable by the detector for this analysis.
B.4 ALTERNATIVE WALKS IN BIGGAN
B.4.1 LPIPS OBJECTIVE
In the main text, we learn the latent space walk w by minimizing the objective function:
J(w, α) = Ez[L(G(z + αw), edit(G(z), α))], (8)
using a Euclidean loss for L. In Fig. 14 we show qualitative results using the LPIPS perceptual similarity metric (Zhang et al., 2018) instead of the Euclidean loss. Walks were trained using the same parameters as those in the linear-L2 walk shown in the main text: we use 20k samples for training, with the Adam optimizer and learning rate 0.001 for zoom and color, and 0.0001 for the remaining edit operations (due to the scaling of α).
B.4.2 NON-LINEAR WALKS
Following the non-linear walk setup in A.4, we modify our objective to use discrete ε step sizes rather than continuous steps. We learn a function F(z) to perform this ε-step transformation on a given latent code z, where F(z) is parametrized with a neural network. We show qualitative results in Fig. 15. We perform the same set of experiments shown in the main text using this nonlinear walk in Fig. 16. These experiments
3https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API
exhibit similar trends as we observed in the main text – we are able to modify the generated distribution of images using latent space walks, and the amount to which we can transform is related to the variability in the dataset. However, there are greater increases in FID when we apply the non-linear transformation, suggesting that these generated images deviate more from natural images and look less realistic.
B.4.3 ADDITIONAL QUALITATIVE EXAMPLES
We show qualitative examples for randomly generated categories for BigGAN linear-L2, linear LPIPS, and nonlinear trajectories in Figs. 17, 18, 19 respectively.
B.5 WALKS IN STYLEGAN
We perform similar experiments for linear latent space walks using StyleGAN models trained on the LSUN cat, LSUN car, and FFHQ face datasets. As suggested by Karras et al. (2018), we learn the walk vector in the intermediate W latent space due to improved attribute disentanglement in W. We show qualitative results for color, shift, and zoom transformations in Figs. 20, 22, 24 and corresponding quantitative analyses in Figs. 21, 23, 25. We show qualitative examples comparing optimization in the W and z latent spaces of StyleGAN in Fig. 28.
B.6 WALKS IN PROGRESSIVE GAN
We also experiment with the linear walk objective in the latent space of Progressive GAN (Karras et al., 2017). One interesting property of the Progressive GAN interpolations is that they take much longer to train before showing a visual effect – for example, for color, we could obtain drastic color changes in the StyleGAN W latent space using as few as 2k samples, but with Progressive GAN we used 60k samples and still did not obtain as strong an effect. This points to the StyleGAN W latent space being more “flexible” and generalizable for transformations, compared to the latent space of Progressive GAN. Moreover, we qualitatively observe some entanglement in the Progressive GAN transformations – for example, changing the level of zoom also changes the lighting. We did not observe large effects for the horizontal and vertical shift transformations. Qualitative examples and quantitative results are shown in Figs. 26, 27.
B.7 QUALITATIVE EXAMPLES FOR ADDITIONAL TRANSFORMATIONS
Since the color transformation operates on individual pixels, we can optimize the walk using a segmented target – for example, when learning a walk for cars, we only modify pixels in the segmented car region when generating edit(G(z), α). StyleGAN is able to roughly localize the color transformation to this region, suggesting disentanglement of different objects within the W latent space (Fig. 29 left), as also noted in Karras et al. (2018); Shen et al. (2019). We also show qualitative results for adjusting image contrast (Fig. 29 right), and for combining zoom, shift X, and shift Y transformations (Fig. 30).
B.8 ADDITIONAL RESULTS FOR IMPROVING MODEL STEERABILITY
We further test the hypothesis that dataset variability impacts the amount we are able to transform by comparing DCGAN models trained with and without data augmentation. Namely, with data augmentation, the discriminator is able to see edited versions of the real images. We also jointly train the model and the walk trajectory which encourages the model to learn linear walks. For zoom, horizontal shift, and 2D rotate transformations, additional samples for three training approaches – without data augmentation, with data augmentation, and joint optimization – appear in Fig. 31-33. Qualitatively, transformations using the model trained without data augmentation degrade the digit structure as α magnitude increases, and may even change one digit to another. Training with data augmentation and joint optimization better preserves digit structure and identity.
[Figure panels: perceptual distance between images generated at consecutive α values, plotted against α, for the Luminance, Shift X, Shift Y, Zoom, Rotate 2D, and Rotate 3D transformations.] | 1. What are the main contributions and findings of the paper regarding GANs' generalization properties?
2. What are the strengths of the proposed approach, particularly in comparing different GAN models and datasets?
3. Do you have any concerns or suggestions regarding the experimental design and analysis?
4. How do the interpolation methods impact the quality of generated images, and how does joint training of the generator and interpolation affect the overall performance?
5. Are there any minor issues or typos in the paper that should be addressed? | Review | Review
This paper proposes to study the generalization properties of GANs through interpolation. The authors first propose to learn a linear (and non-linear) interpolation in the latent space for a specific type of image transformation, for example zoom, translation, rotation, or luminance. They show that linear interpolation in GANs can produce very realistic images along the path and enables controlling and transforming generated images to some extent. They then propose to measure to what extent the generated images can be transformed without "breaking". Finally, they show that the quality of the interpolation can be improved by learning the interpolation and generator jointly.
I'm in favour of accepting this paper. The paper is well written and organized. The experiments and observations are very interesting and really illustrate the generalization capacity of GANs.
Main argument:
- I think those observations are very valuable to the community and are a good way to get insight into the capabilities of GANs. This also gives interesting information about the different biases present and learnt in the dataset. This could also lead to very nice applications.
- The interpolations with StyleGAN and BigGAN seem to give qualitatively very different results. It would have been very interesting to study the quality of interpolations on more models and datasets, and compare their generalization capabilities as well as the biases present in the different datasets.
- Does training the generator and interpolation jointly improve the quality of the generator in general? It would have been nice to run this method on a more complicated dataset like CIFAR10 and see if this method improves the overall FID score.
Minor comments:
- In appendix A.2 the authors explain how the range of $\alpha$ is set for the different experiments. However, it's not clear how this range is used in practice. Do you sample $\alpha$ uniformly in this range to train the linear interpolation? Also, how many steps are required to learn the linear interpolation? How much does this influence the quality of the interpolation?
- There is a typo in equation 6
- In Figure 6: what does the right figure represent? In particular, what are the different colours?
ICLR | Title
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
Abstract
Consider a prediction setting with few in-distribution labeled examples and many unlabeled examples both in- and out-of-distribution (OOD). The goal is to learn a model which performs well both in-distribution and OOD. In these settings, auxiliary information is often cheaply available for every input. How should we best leverage this auxiliary information for the prediction task? Empirically across three image and time-series datasets, and theoretically in a multi-task linear regression setting, we show that (i) using auxiliary information as input features improves in-distribution error but can hurt OOD error; but (ii) using auxiliary information as outputs of auxiliary pre-training tasks improves OOD error. To get the best of both worlds, we introduce In-N-Out, which first trains a model with auxiliary inputs and uses it to pseudolabel all the in-distribution inputs, then pre-trains a model on OOD auxiliary outputs and fine-tunes this model with the pseudolabels (self-training). We show both theoretically and empirically that In-N-Out outperforms auxiliary inputs or outputs alone on both in-distribution and OOD error.
1 INTRODUCTION
When models are tested on distributions that are different from the training distribution, they typically suffer large drops in performance (Blitzer and Pereira, 2007; Szegedy et al., 2014; Jia and Liang, 2017; AlBadawy et al., 2018; Hendrycks et al., 2019a). For example, in remote sensing, central tasks include predicting poverty, crop type, and land cover from satellite imagery for downstream humanitarian, policy, and environmental applications (Xie et al., 2016; Jean et al., 2016; Wang et al., 2020; Rußwurm et al., 2020). In some developing African countries, labels are scarce due to the lack of economic resources to deploy human workers to conduct expensive surveys (Jean et al., 2016). To make accurate predictions in these countries, we must extrapolate to out-of-distribution (OOD) examples across different geographic terrains and political borders.
We consider a semi-supervised setting with few in-distribution labeled examples and many unlabeled examples from both in- and out-of-distribution (e.g., global satellite imagery). While labels are scarce, auxiliary information is often cheaply available for every input and may provide some signal for the missing labels. Auxiliary information can come from additional data sources (e.g., climate data from other satellites) or be derived from the original input (e.g., background or non-visible spectrum image channels). This auxiliary information is often discarded or not leveraged, and how best to use it is unclear. One way is to use it directly as input features (aux-inputs); another is to treat it as prediction outputs for an auxiliary task (aux-outputs) in pre-training. Which approach leads to better in-distribution or OOD performance?
Aux-inputs provide more features to potentially improve in-distribution performance, and one may hope that this also improves OOD performance. Indeed, previous results on standard datasets show that improvements in in-distribution accuracy correlate with improvements in OOD accuracy (Recht et al., 2019; Taori et al., 2020; Xie et al., 2020; Santurkar et al., 2020). However, in this paper we find that aux-inputs can introduce more spurious correlations with the labels: as a result, while aux-inputs often improve in-distribution accuracy, they can worsen OOD accuracy. We give examples of this trend on CelebA (Liu et al., 2015) and real-world satellite datasets in Sections 5.2 and 5.3.
Conversely, aux-output methods such as pre-training may improve OOD performance through auxiliary supervision (Caruana, 1997; Weiss et al., 2016; Hendrycks et al., 2019a). Hendrycks et al.
∗Equal contribution.
[Figure 2 diagram: the input x generates the latent feature w via B⋆; w and the latent noise u generate z via A⋆ and C⋆, and generate y via θw and θu.]
Figure 2: Graphical model for our theoretical setting: prediction task with input x, target y, and auxiliary information z, which is related to y through the latent variable w and latent noise u.
(2019a) show that pre-training on ImageNet can improve adversarial robustness, and Hendrycks et al. (2019b) show that auxiliary self-supervision tasks can improve robustness to synthetic corruptions. In this paper, we find that while aux-outputs improve OOD accuracy, the in-distribution accuracy is worse than with aux-inputs. Thus, we elucidate a tradeoff between in- and out-of-distribution accuracy that occurs when using auxiliary information as inputs or outputs.
To theoretically study how to best use auxiliary information, we extend the multi-task linear regression setting (Du et al., 2020; Tripuraneni et al., 2020) to allow for distribution shifts. We show that auxiliary information helps in-distribution error by providing useful features for predicting the target, but the relationship between the aux-inputs and the target can shift significantly OOD, worsening the OOD error. In contrast, the aux-outputs model first pre-trains on unlabeled data to learn a lower-dimensional representation and then solves the target task in the lower-dimensional space. We prove that the aux-outputs model improves robustness to arbitrary covariate shift compared to not using auxiliary information.
Can we do better than using auxiliary information as inputs or outputs alone? We answer affirmatively by proposing the In-N-Out algorithm to combine the benefits of auxiliary inputs and outputs (Figure 1). In-N-Out first uses an aux-inputs model, which has good in-distribution accuracy, to pseudolabel in-distribution unlabeled data. It then pre-trains a model using aux-outputs and finally fine-tunes this model on the larger training set consisting of labeled and pseudolabeled data. We prove that In-N-Out, which combines self-training and pre-training, further improves both in-distribution and OOD error over the aux-outputs model.
We show empirical results on CelebA and two remote sensing tasks (land cover and cropland prediction) that parallel the theory. On all datasets, In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over aux-inputs or aux-outputs alone and improves 1–2% in-distribution, 2–3% OOD over not using auxiliary information on remote sensing tasks. Ablations of In-N-Out show that In-N-Out achieves similar improvements over pre-training or self-training alone (up to 5% in-distribution, 1–2% OOD on remote sensing tasks). We also find that using OOD (rather than in-distribution) unlabeled examples for pre-training is crucial for OOD improvements.
2 SETUP
Let $x \in \mathbb{R}^d$ be the input (e.g., a satellite image), $y \in \mathbb{R}$ be the target (e.g., crop type), and $z \in \mathbb{R}^T$ be the cheaply obtained auxiliary information, either from additional sources (e.g., climate information) or derived from the original data (e.g., background).
Training data. Let $P_\text{id}$ and $P_\text{ood}$ denote the underlying distribution of (x, y, z) triples in-distribution and out-of-distribution, respectively. The training data consists of (i) in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_\text{id}$, (ii) in-distribution unlabeled data $\{(x^\text{id}_i, z^\text{id}_i)\}_{i=1}^{m_\text{id}} \sim P_\text{id}$, and (iii) out-of-distribution unlabeled data $\{(x^\text{ood}_i, z^\text{ood}_i)\}_{i=1}^{m_\text{ood}} \sim P_\text{ood}$.
Goal and risk metrics. Our goal is to learn a model from input and auxiliary information to the target, $f : \mathbb{R}^d \times \mathbb{R}^T \to \mathbb{R}$. For a loss function $\ell$, the in-distribution population risk of the model $f$ is $R_\text{id}(f) = \mathbb{E}_{x,y,z \sim P_\text{id}}[\ell(f(x,z), y)]$, and its OOD population risk is $R_\text{ood}(f) = \mathbb{E}_{x,y,z \sim P_\text{ood}}[\ell(f(x,z), y)]$.
2.1 MODELS
We consider three common ways to use the auxiliary information (z) to learn a model.
Baseline. The baseline minimizes the empirical risk on labeled data while ignoring the auxiliary information (accomplished by setting z to 0):
$$\hat{f}_\text{bs} = \arg\min_f \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i, 0), y_i). \quad (1)$$
Aux-inputs. The aux-inputs model minimizes the empirical risk on labeled data while using the auxiliary information as features:
$$\hat{f}_\text{in} = \arg\min_f \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i, z_i), y_i). \quad (2)$$
Aux-outputs. The aux-outputs model leverages the auxiliary information z by using it as the prediction target of an auxiliary task, in hopes that there is a low-dimensional feature representation that is common to predicting both z and y. Training the aux-outputs model consists of two steps:
In the pre-training step, we use all the unlabeled data to learn a shared feature representation. Let $h : \mathbb{R}^d \to \mathbb{R}^k$ denote a feature map and $g_\text{z-out} : \mathbb{R}^k \to \mathbb{R}^T$ denote a mapping from feature representation to the auxiliary outputs. Let $\ell_\text{aux}$ denote the loss function for the auxiliary information. We define the empirical risk of $h$ and $g_\text{z-out}$ as:
$$\hat{R}_\text{pre}(h, g_\text{z-out}) = \frac{1}{m_\text{id} + m_\text{ood}} \left( \sum_{i=1}^{m_\text{id}} \ell_\text{aux}(g_\text{z-out}(h(x^\text{id}_i)), z^\text{id}_i) + \sum_{i=1}^{m_\text{ood}} \ell_\text{aux}(g_\text{z-out}(h(x^\text{ood}_i)), z^\text{ood}_i) \right). \quad (3)$$
The estimate of the feature map is $\hat{h}_\text{out} = \arg\min_h \min_{g_\text{z-out}} \hat{R}_\text{pre}(h, g_\text{z-out})$.
In the transfer step, the model uses the pre-trained feature map $\hat{h}_\text{out}$ and the labeled data to learn the mapping $g_\text{y-out} : \mathbb{R}^k \to \mathbb{R}$ from feature representation to target y. We define the transfer empirical risk as:
$$\hat{R}_\text{trans}(\hat{h}_\text{out}, g_\text{y-out}) = \frac{1}{n} \sum_{i=1}^{n} \ell(g_\text{y-out}(\hat{h}_\text{out}(x_i)), y_i) \quad (4)$$
The estimate of the target mapping is $\hat{g}_\text{y-out} = \arg\min_{g_\text{y-out}} \hat{R}_\text{trans}(\hat{h}_\text{out}, g_\text{y-out})$. The final aux-outputs model is
$$\hat{f}_\text{out}(x, z) = \hat{g}_\text{y-out}(\hat{h}_\text{out}(x)). \quad (5)$$
Like the baseline model, the aux-outputs model ignores the auxiliary information for prediction.
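In the linear setting analyzed next, these three estimators reduce to least squares on different feature sets. The following NumPy sketch is our own illustration of the baseline, aux-inputs, and aux-outputs models; in particular, the rank-k SVD truncation used for pre-training is one simple estimate of the shared feature subspace, not necessarily the estimator analyzed in the proofs.

```python
import numpy as np

def fit_ls(features, targets):
    """Ordinary least squares; returns the coefficient matrix/vector."""
    return np.linalg.lstsq(features, targets, rcond=None)[0]

def baseline(x, y):
    return fit_ls(x, y)                           # Eq. (1): ignore z

def aux_inputs(x, z, y):
    return fit_ls(np.hstack([x, z]), y)           # Eq. (2): z as extra input features

def aux_outputs(x_unlab, z_unlab, x_lab, y_lab, k):
    """Eqs. (3)-(5): pre-train x -> z, keep a rank-k feature map, regress y on it."""
    W = fit_ls(x_unlab, z_unlab)                  # (d, T) least-squares map from x to z
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    H = U[:, :k]                                  # (d, k): feature map h(x) = x @ H
    g = fit_ls(x_lab @ H, y_lab)                  # transfer step on labeled data
    return H, g

# Toy usage with random data just to exercise the three estimators.
rng = np.random.default_rng(0)
x, z, y = rng.normal(size=(200, 10)), rng.normal(size=(200, 4)), rng.normal(size=200)
theta_bs, theta_in = baseline(x, y), aux_inputs(x, z, y)
H, g = aux_outputs(x, z, x, y, k=3)
```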
3 THEORETICAL ANALYSIS OF AUX-INPUTS AND AUX-OUTPUTS MODELS
We now analyze the baseline, aux-inputs, and aux-outputs models introduced in Section 2. Our setup extends a linear regression setting commonly used for analyzing multi-task problems (Du et al., 2020; Tripuraneni et al., 2020).
Setup. See Figure 2 for the graphical model. Let $w = B^\star x \in \mathbb{R}^k$ be a low-dimensional latent feature ($k \le d$) shared between the auxiliary information z and the target y. Let $u \in \mathbb{R}^m$ denote unobserved latent variables not captured in x. We assume z and y are linear functions of u and w:
$$y = \theta_w^\top w + \theta_u^\top u + \epsilon, \quad (6)$$
$$z = A^\star w + C^\star u, \quad (7)$$
where $\epsilon \sim P_\epsilon$ denotes noise with mean 0 and variance $\sigma^2$. As in Du et al. (2020), we assume the dimension of the auxiliary information T is greater than the feature dimension k, that is $T \ge k$, and that $A^\star$, $B^\star$, and $C^\star$ have full rank (rank k). We also assume $T \ge m$, where m is the dimension of u.
Data. Let $P_x$ and $P_u$ denote the distributions of x and u in-distribution (ID), and let $P'_x$, $P'_u$ denote the distributions of x and u OOD. We assume x and u are independent, have distributions with bounded density everywhere, and have invertible covariance matrices. We assume the mean of u is zero in- and out-of-distribution. We assume we have $n \ge m + d$ in-distribution labeled training examples and unlimited access to unlabeled data both ID and OOD, a common assumption in unsupervised domain adaptation theory (Sugiyama et al., 2007; Kumar et al., 2020; Raghunathan et al., 2020).
Loss metrics. We use the squared loss for the target and auxiliary losses: $\ell(\hat{y}, y) = (y - \hat{y})^2$ and $\ell_\text{aux}(z, z') = \|z - z'\|_2^2$.
Models. We assume all model families ($f$, $h$, $g_\text{z-out}$, $g_\text{y-out}$) in Section 2 are linear.
Let $S = (A^\star, B^\star, C^\star, \theta_w, \theta_u, P_x, P_u)$ denote a problem setting which satisfies all the above assumptions.
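To make the setting concrete, a small NumPy simulation of this data-generating process (Eqs. 6-7 and Figure 2) is sketched below; the dimensions and the particular covariate shift used for the OOD draw are arbitrary illustrative choices, not the settings used in our proofs or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, T = 10, 3, 3, 6          # input, latent-feature, latent-noise, auxiliary dims
B = rng.normal(size=(k, d))       # w = B x
A = rng.normal(size=(T, k))       # z = A w + C u
C = rng.normal(size=(T, m))
theta_w, theta_u = rng.normal(size=k), rng.normal(size=m)
sigma = 0.1                       # std of the label noise eps

def sample(n, x_scale=1.0, u_scale=1.0):
    """Draw (x, z, y); larger scales model out-of-distribution covariate shift."""
    x = x_scale * rng.normal(size=(n, d))
    u = u_scale * rng.normal(size=(n, m))
    w = x @ B.T
    z = w @ A.T + u @ C.T
    y = w @ theta_w + u @ theta_u + sigma * rng.normal(size=n)
    return x, z, y

x_id, z_id, y_id = sample(1000)                                # in-distribution
x_ood, z_ood, y_ood = sample(1000, x_scale=3.0, u_scale=5.0)   # shifted OOD draw
```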
3.1 AUXILIARY INPUTS HELP IN-DISTRIBUTION, BUT CAN HURT OOD
We first show that the aux-inputs model (2) performs better than the baseline model (1) in-distribution. Intuitively, the target y depends on both the inputs x (throughw) and latent variable u (Figure 2). The baseline model only uses x to predict y; thus it cannot capture the variation in y due to u. On the other hand, the aux-inputs model uses x and z to predict y. Since z is a function of x (through w) and u, u can be recovered from x and z by inverting this relation. Note that u is unobserved but implicitly recovered. The aux-inputs model can then combine u and x to predict y better.
Let $\sigma_u^2 = \mathbb{E}_{u \sim P_u}[(\theta_u^\top u)^2]$ denote the (in-distribution) variance of y due to the latent variables u. The following proposition shows that if $\sigma_u^2 > 0$ then with enough training examples the aux-inputs model has lower in-distribution population risk than the baseline model.2
Proposition 1. For all problem settings $S$ and noise distributions $P_\epsilon$, assuming regularity conditions (bounded x, u, sub-Gaussian noise $\epsilon$, and $T = m$), and $\sigma_u^2 > 0$: for all $\delta > 0$, there exists $N$ such that for $n \ge N$ training points, with probability at least $1 - \delta$ over the training examples, the aux-inputs model improves over the baseline:
$$R_\text{id}(\hat{f}_\text{in}) < R_\text{id}(\hat{f}_\text{bs}). \quad (8)$$
Although using z as input leads to better in-distribution performance, we show that the aux-inputs model can perform worse than the baseline model OOD for any number of training examples. Intuitively, the aux-inputs model uses z, which can be unreliable OOD because z depends on u and u can shift OOD. In more detail, the aux-inputs model learns to predict $\hat{y} = \hat{\theta}_{x,\text{in}}^\top x + \hat{\theta}_{z,\text{in}}^\top z$, where the true output is $y = \theta_x^\top x + \theta_z^\top z$, and $\hat{\theta}_{z,\text{in}}$ is an approximation to the true parameter $\theta_z$ that has some error. Out-of-distribution, u and hence z can have very high variance, which would magnify $(\hat{\theta}_{z,\text{in}} - \theta_z)^\top z$ and lead to bad predictions.
Example 1. There exists a problem setting $S$ and noise distribution $P_\epsilon$ such that for every n, there is some test distribution $P'_x, P'_u$ with:
$$\mathbb{E}[R_\text{ood}(\hat{f}_\text{in})] > \mathbb{E}[R_\text{ood}(\hat{f}_\text{bs})]. \quad (9)$$
3.2 PRE-TRAINING IMPROVES RISK UNDER ARBITRARY COVARIATE SHIFT
While using z as inputs (aux-inputs) can worsen performance relative to the baseline, our first main result is that the aux-outputs model (which pre-trains to predict z from x, and then transfers the learned representation to predict y from x) outperforms the baseline model for all test distributions.
Intuition. Referring to Figure 2, we see that the mapping from inputs x to auxiliary z passes through the lower dimensional features w. In the pre-training step, the aux-outputs model predicts z from x using a low rank linear model, and we show that this recovers the ‘bottleneck’ features w (up to symmetries; more formally we recover the rowspace of B?). In the transfer step, the aux-outputs model learns a linear map from the lower-dimensionalw to y, while the baseline predicts y directly from x. To warm up, without distribution shift, the expected excess risk only depends on the dimension of the input, and not the conditioning. That is, the expected excess risk in linear regression is exactly dσ2/n, where d is the input dimension, so the aux-outputs trivially improves over the baseline since dim(w)<dim(x). In contrast, the worst case risk under distribution shift depends on the conditioning of the data, which could be worse for w than x. Our proof shows that the worst case risk (over all x and u) is still better for the aux-outputs model because projecting to the low-dimensional feature representation “zeroes-out” some error directions.
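For completeness, one standard way to see the $d\sigma^2/n$ figure, under a fixed-design least-squares view (a textbook sketch rather than our exact argument), is:
$$\hat{\theta} = (X^\top X)^{-1} X^\top y = \theta + (X^\top X)^{-1} X^\top \epsilon,$$
$$\frac{1}{n}\,\mathbb{E}\|X\hat{\theta} - X\theta\|_2^2 = \frac{1}{n}\,\mathbb{E}\|P_X \epsilon\|_2^2 = \frac{\sigma^2}{n}\,\mathrm{tr}(P_X) = \frac{d\,\sigma^2}{n},$$
where $P_X = X(X^\top X)^{-1}X^\top$ is the rank-$d$ projection onto the column space of $X$; only the input dimension, not the conditioning of the inputs, enters the in-distribution excess risk.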
1This is not limiting because bias in z can be folded into x. 2Since z is typically low-dimensional and x is high-dimensional (e.g., images), the aux-inputs model needs
only a slightly larger number of examples before it outperforms the baseline.
Algorithm 1 In-N-Out
Require: in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_\text{id}$, in-distribution unlabeled data $\{(x^\text{id}_i, z^\text{id}_i)\}_{i=1}^{m_\text{id}} \sim P_\text{id}$, OOD unlabeled data $\{(x^\text{ood}_i, z^\text{ood}_i)\}_{i=1}^{m_\text{ood}} \sim P_\text{ood}$
1: Learn $\hat{f}_\text{in} : (x, z) \mapsto y$ from in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_\text{id}$
2: Pre-train $g_\text{z-out} \circ \hat{h}_\text{out} : x \mapsto z$ on aux-outputs from all unlabeled data $\{(x^\text{id}_i, z^\text{id}_i)\}_{i=1}^{m_\text{id}} \cup \{(x^\text{ood}_i, z^\text{ood}_i)\}_{i=1}^{m_\text{ood}}$
3: Return $\hat{f} = \hat{g} \circ \hat{h}_\text{out} : x \mapsto y$ trained on labeled and pseudolabeled data $\{(x_i, y_i)\}_{i=1}^{n} \cup \{(x^\text{id}_i, \hat{f}_\text{in}(x^\text{id}_i, z^\text{id}_i))\}_{i=1}^{m_\text{id}}$
Theorem 1. For all problem settings $S$, noise distributions $P_\epsilon$, test distributions $P'_x, P'_u$, and $n \ge m + d$ training points:
$$\mathbb{E}[R_\text{ood}(\hat{f}_\text{out})] \le \mathbb{E}[R_\text{ood}(\hat{f}_\text{bs})]. \quad (10)$$
See Appendix A for the proof.
4 IN-N-OUT: COMBINING AUXILIARY INPUTS AND OUTPUTS
We propose the In-N-Out algorithm, which combines both the aux-inputs and aux-outputs models for further complementary gains (Figure 1). As a reminder: (i) The aux-inputs model (x,z→y) is good in-distribution, but bad OOD because z can be misleading OOD. (ii) The aux-outputs model (x→y) is better than the baseline OOD, but worse than aux-inputs in-distribution because it doesn’t use z. (iii) We propose the In-N-Out model (x→y), which uses pseudolabels from aux-inputs (stronger model) in-distribution to transfer in-distribution accuracy to the aux-outputs model. The In-N-Out model does not use z to make predictions since z can be misleading / spurious OOD.
In more detail, we use the aux-inputs model (which is good in-distribution) to pseudolabel in-distribution unlabeled data. The pseudolabeled data provides more effective training samples (self-training) to fine-tune an aux-outputs model pre-trained on predicting auxiliary information from all unlabeled data. We present the general In-N-Out algorithm in Algorithm 1 and analyze it in the linear multi-task regression setting of Section 2. The In-N-Out model f̂ = ĝ ◦ ĥout optimizes the empirical risk on labeled and pseudolabeled data:
$$\hat{g} = \arg\min_g \; (1 - \lambda)\hat{R}_\text{trans}(\hat{h}_\text{out}, g) + \lambda \hat{R}_\text{st}(\hat{h}_\text{out}, \hat{f}_\text{in}, g) \quad (11)$$
where $\hat{R}_\text{st}(\hat{h}_\text{out}, \hat{f}_\text{in}, g) = \frac{1}{m_1} \sum_{i=1}^{m_1} \ell(g(\hat{h}_\text{out}(x^\text{id}_i)), \hat{f}_\text{in}(x^\text{id}_i, z^\text{id}_i))$ is the loss of self-training on pseudolabels from the aux-inputs model, and $\lambda \in [0, 1]$ is a hyperparameter that trades off between labeled and pseudolabeled losses. In our experiments, we fine-tune $\hat{g}$ and $\hat{h}_\text{out}$ together.
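A minimal linear sketch of the full procedure (Algorithm 1 together with the weighted objective in Eq. 11) is given below; the rank-k SVD pre-training, the default λ, and all names are illustrative stand-ins for the deep-network fine-tuning used in our experiments.

```python
import numpy as np

def lstsq(features, targets):
    return np.linalg.lstsq(features, targets, rcond=None)[0]

def in_n_out(x_lab, z_lab, y_lab, x_id, z_id, x_ood, z_ood, k, lam=0.5):
    """Linear sketch of Algorithm 1 with the Eq. (11) weighting."""
    # Step 1: aux-inputs model (x, z) -> y on labeled ID data, used only to pseudolabel.
    theta_in = lstsq(np.hstack([x_lab, z_lab]), y_lab)
    pseudo = np.hstack([x_id, z_id]) @ theta_in

    # Step 2: pre-train x -> z on all unlabeled data; keep a rank-k feature map.
    W = lstsq(np.vstack([x_id, x_ood]), np.vstack([z_id, z_ood]))
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    H = U[:, :k]

    # Step 3: fit y on the pre-trained features with labeled + pseudolabeled data,
    # weighting the two averaged losses by (1 - lam) and lam.
    w_lab = np.sqrt((1.0 - lam) / len(y_lab))
    w_ps = np.sqrt(lam / len(pseudo))
    feats = np.vstack([w_lab * (x_lab @ H), w_ps * (x_id @ H)])
    targets = np.concatenate([w_lab * y_lab, w_ps * pseudo])
    g = lstsq(feats, targets)
    return H, g                                   # final model predicts (x @ H) @ g

# Toy usage with random data just to show the call signature.
rng = np.random.default_rng(0)
x_lab, z_lab, y_lab = rng.normal(size=(50, 10)), rng.normal(size=(50, 4)), rng.normal(size=50)
x_id, z_id = rng.normal(size=(200, 10)), rng.normal(size=(200, 4))
x_ood, z_ood = rng.normal(size=(200, 10)), rng.normal(size=(200, 4))
H, g = in_n_out(x_lab, z_lab, y_lab, x_id, z_id, x_ood, z_ood, k=3)
```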
Theoretical setup. Because fine-tuning is difficult to analyze theoretically, we analyze a slightly modified version of In-N-Out where we train an aux-inputs model to predict y given the features $\hat{h}_\text{out}(x)$ and auxiliary information z, so the aux-inputs model $\hat{g}_\text{in} : \mathbb{R}^k \times \mathbb{R}^T \to \mathbb{R}$ is given by $\hat{g}_\text{in} = \arg\min_g \frac{1}{n} \sum_{i=1}^{n} \ell(g(\hat{h}_\text{out}(x_i), z_i), y_i)$. The population self-training loss on pseudolabels from the aux-inputs model $\hat{g}_\text{in} \circ \hat{h}_\text{out}$ is $R_\text{st}(\hat{h}_\text{out}, \hat{g}_\text{in}, g) = \mathbb{E}_{x,z \sim P_\text{id}}[\ell(g(\hat{h}_\text{out}(x)), \hat{g}_\text{in}(\hat{h}_\text{out}(x), z))]$, and we minimize the self-training loss: $\hat{g} = \arg\min_g R_\text{st}(\hat{h}_\text{out}, \hat{g}_\text{in}, g)$. At test time, given input x, z, the In-N-Out model predicts $\hat{g}(\hat{h}_\text{out}(x))$. For the theory, we assume all models ($\hat{g}_\text{in}$, $\hat{g}$, and $\hat{h}_\text{out}$) are linear.
4.1 IN-N-OUT IMPROVES OVER PRE-TRAINING UNDER ARBITRARY COVARIATE SHIFT
We prove that In-N-Out helps on top of pre-training, as long as the auxiliary features give us information about y relative to the noise in-distribution, that is, if $\sigma_u^2$ is much larger than $\sigma^2$.
To build intuition, first consider the special case where the noise $\sigma^2 = 0$ (equivalently, $\epsilon = 0$). Since u can be recovered from w and z, we can write y as a linear function of w and z: $y = \gamma_w^\top w + \gamma_z^\top z$. We train an aux-inputs model $\hat{g}_\text{in}$ from w, z to y on finite labeled data. Since there is no noise, $\hat{g}_\text{in}$ predicts y perfectly from w, z (we learn $\gamma_w$ and $\gamma_z$). We use $\hat{g}_\text{in}$ to pseudolabel a large amount of unlabeled data, and since $\hat{g}_\text{in}$ predicts y perfectly from w, z, the pseudolabels are perfect. So here pseudolabeling gives us a much larger and correctly labeled dataset to train the In-N-Out model on.
The technical challenge is proving that self-training helps under arbitrary covariate shift even when the noise is non-zero (σ2 > 0), so the aux-inputs model ĝin that we learn is accurate but not perfect.
In this case, the pseudolabels have an error which propagates to the In-N-Out model self-trained on these pseudolabels, but we want to show that the error is lower than for the aux-outputs model. The error in linear regression is proportional to the noise of the target y, which for the aux-outputs model is σ2 +σ2u. We show that the In-N-Out model uses the aux-inputs model to reduce the dependence on the noise σ2u, because the aux-inputs model uses both w and z to predict y. The proof reduces to showing that the max singular value for the In-N-Out error matrix is less than the min-singular value of the aux-outputs error matrix with high probability. A core part of the argument is to lower bound the min-singular value of a random matrix (Lemma 3). This uses techniques from random matrix theory (see e.g., Chapter 2.7 in Tao (2012)); the high level idea is to show that with probability 1−δ each column of the random matrix has a (not too small) component orthogonal to all other columns.
Theorem 2. In the linear setting, for all problem settings $S$ with $\sigma_u^2 > 0$, test distributions $P'_x, P'_u$, $n \ge m + d$ training points, and $\delta > 0$, there exist $a, b > 0$ such that for all noise distributions $P_\epsilon$, with probability at least $1 - \delta$ over the training examples and test example $x' \sim P'_x$, the ratio of the excess risks (for all $\sigma^2$ small enough that $a - b\sigma^2 > 0$) is:
$$\frac{R^\text{ood}_\text{in-out} - R^*}{R^\text{ood}_\text{out} - R^*} \le \frac{\sigma^2}{a - b\sigma^2} \quad (12)$$
Here $R^* = \min_{g^*, h^*} \mathbb{E}_{x',y',z' \sim P'}[\ell(g^*(h^*(x')), y')]$ is the minimum possible (Bayes-optimal) OOD risk, $R^\text{ood}_\text{in-out} = \mathbb{E}_{y' \sim P'_{y'|x'}}[\ell(\hat{g}(\hat{h}_\text{out}(x')), y')]$ is the risk of the In-N-Out model on test example $x'$, and $R^\text{ood}_\text{out} = \mathbb{E}_{y' \sim P'_{y'|x'}}[\ell(\hat{g}_\text{y-out}(\hat{h}_\text{out}(x')), y')]$ is the risk of the aux-outputs model on test example $x'$. Note that $R^\text{ood}_\text{in-out}$ and $R^\text{ood}_\text{out}$ are random variables that depend on the test input $x'$ and the training set $X$.
Remark 1. As σ→ 0, the excess risk ratio of In-N-Out to Aux-outputs goes to 0, so the In-N-Out estimator is much better than the aux-outputs estimator.
The proof of the result is in Appendix A.
5 EXPERIMENTS
We show on real-world datasets for land cover and cropland prediction that aux-inputs can hurt OOD performance, while aux-outputs improve OOD performance. In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over other models on all datasets (Section 5.2). Secondly, we show that the tradeoff between in-distribution and OOD performance depends on the choice of auxiliary information on CelebA and cropland prediction (Section 5.3). Finally, we show that OOD unlabeled examples are important for improving OOD robustness (Section 5.4).
5.1 EXPERIMENTAL SETUP
We give a summary of considered datasets and setup here — see Figure 3 and Appendix B for details. Our datasets use auxiliary information both derived from the input (CelebA, Cropland) and from other sources (Landcover).
CelebA. In CelebA (Liu et al., 2015), the input x is a RGB image (resized to 64×64), the target y is a binary label for gender, and the auxiliary information z are 7 (of 40) binary-valued attributes derived from the input (e.g., presence of makeup, beard). We designate the set of images where the celebrity is wearing a hat as OOD. We use a ResNet18 as the backbone model architecture for all models (see Appendix B.1 for details).
Cropland. Crop type or cropland prediction is an important intermediate problem for crop yield prediction (Cai et al., 2018; Johnson et al., 2016; Kussul et al., 2017). The input x is a 50×50 RGB image taken by a satellite, the target y is a binary label that is 1 when the image contains majority cropland, and the auxiliary information z is the center location coordinate plus 50×50 vegetation-related bands. The vegetation bands in the auxiliary information z are derived from the original satellite image, which contains both RGB and other frequency bands. We use the Cropland dataset from Wang et al. (2020), with data from the US Midwest. We designate Iowa, Missouri, and Illinois as in-distribution and Indiana and Kentucky as OOD. Following Wang et al. (2020), we use a U-Net-based model (Ronneberger et al., 2015). See Appendix B.2 for details.
Landcover. Land cover prediction involves classifying the land cover type (e.g., “grasslands”) from satellite data at a location (Gislason et al., 2006; Rußwurm et al., 2020). The input x is a time series measured by NASA’s MODIS satellite (Vermote, 2015), the target y is one of 6 land cover classes, and the auxiliary information z is climate data (e.g., temperature) from ERA5, a dataset computed from various satellites and weather station data (C3S, 2017). We designate non-African locations as in-distribution and Africa as OOD. We use a 1D-CNN to handle the temporal structure in the MODIS data. See Appendix B.3 for details.
Data splits. We first split off the OOD data, then split the rest into training, validation, and in-distribution test (see Appendix B for details). We use a portion of the training set and OOD set as in-distribution and OOD unlabeled data respectively. The rest of the OOD set is held out as test data. We run 5 trials, where we randomly re-generate the training/unlabeled split for each trial (keeping held-out splits fixed). We use a reduced number of labeled examples from each dataset (1%, 5%, 10% of labeled examples for CelebA, Cropland, and Landcover respectively), with the rest as unlabeled.
Repeated self-training. In our experiments, we also consider augmenting In-N-Out models with repeated self-training, which has fueled recent improvements in both domain adaptation and ImageNet classification (Shu et al., 2018; Xie et al., 2020). For one additional round of repeated self-training, we use the In-N-Out model to pseudolabel all unlabeled data (both ID and OOD) and also initialize the weights with the In-N-Out model. Each method is trained with early-stopping and hyperparameters are chosen using the validation set.
5.2 MAIN RESULTS
Table 1 compares the in-distribution (ID) and OOD accuracy of different methods. In all datasets, pretraining with aux-outputs improves OOD performance over the baseline, and In-N-Out (with or without repeated ST) generally improves both in- and out-of-distribution performance over all other models.
CelebA. In CelebA, using auxiliary information either as aux-inputs or outputs improves both ID (2–4%) and OOD accuracy (5%). We hypothesize this is because the auxiliary information is quite robust. Figure 4 shows that there is a significant correlation (r=0.72) between ID and OOD accuracy for 100 different sets of aux-inputs, supporting results on standard datasets (Recht et al., 2019; Xie et al., 2020; Santurkar et al., 2020). In-N-Out achieves the best OOD performance and comparable ID performance even though there is no tradeoff between ID and OOD accuracy.
Remote sensing. In the remote sensing datasets, aux-inputs can induce a tradeoff where increasing ID accuracy hurts OOD performance. In cropland prediction, even with a small geographic shift (US Midwest), the baseline model has a significant drop from ID to OOD accuracy (4%). The aux-inputs model improves ID accuracy almost 1% above the baseline but OOD accuracy drops 6%. In land cover prediction, using climate information as aux-inputs decreases OOD accuracy by over 4% compared to the baseline. The aux-outputs model improves OOD, but decreases ID accuracy by 3% over the baseline.
[Figure 5 plot: OOD accuracy (74–78%) plotted against in-distribution accuracy (90–92.5%) on CelebA.]
Figure 5: In-distribution vs. OOD accuracy on CelebA when sequentially adding a random set of 15 auxiliary inputs one-by-one. Even if adding all 15 auxiliary inputs improves both in-distribution and OOD accuracy, some intermediate in-distribution gains can hurt OOD.
Unlabeled data           ID Test Acc     OOD Test Acc
Only in-distribution     69.73 ± 0.51    57.73 ± 1.58
Only OOD                 69.92 ± 0.41    59.28 ± 1.01
Both                     70.07 ± 0.46    59.84 ± 0.98
Table 2: Ablation study on the use of indistribution vs. OOD unlabeled data in pre-training models on Landcover, where unlabeled sample size is standardized (much smaller than Table 1). Using OOD unlabeled examples are important for gains in OOD accuracy (%). Results are shown with 90% error intervals over 5 trials.
Improving in-distribution accuracy over aux-outputs. One of the main goals of the self-training step in In-N-Out is to improve the in-distribution performance of the aux-outputs model. We compare to oracle models that use a large amount of in-distribution labeled data to compare the gains from In-N-Out. In Landcover, the oracle model which uses 160k labeled ID examples gets 80.5% accuracy. In-N-Out uses 16k labeled examples and 150k unlabeled ID examples (with 50k unlabeled OOD examples) and improves the ID accuracy of aux-output from 72.5% to 77.4%, closing most (62%) of the gap. In Cropland, the oracle model achieves 95.6% accuracy. Here, In-N-Out closes 80% of the gap between aux-outputs and the oracle, improving ID accuracy from 95.1% to 95.5%.
Ablations with only pre-training or self-training. We analyze the individual contributions of selftraining and pre-training in In-N-Out. On both cropland and land cover prediction, In-N-Out outperforms standard self-training on pseudolabels from the aux-inputs model (In-N-Out without pre-training), especially on OOD performance, where In-N-Out improves by about 1% and 2% respectively. Similarly, In-N-Out improves upon pre-training (aux-outputs model) both ID and OOD for both datasets.
5.3 CHOICE OF AUXILIARY INPUTS MATTERS
We find that the choice of auxiliary inputs affects the tradeoff between ID and OOD performance significantly, and thus is important to consider for problems with distribution shift. While Figure 4 shows that auxiliary inputs tend to simultaneously improve ID and OOD accuracy in CelebA, our theory suggests that in the worst case, there should be auxiliary inputs that worsen OOD accuracy. Indeed, Figure 5 shows that when taking a random set of 15 auxiliary inputs and adding them sequentially as auxiliary inputs, there are instances where an extra auxiliary input improves in-distribution but hurts OOD accuracy even if adding all 15 auxiliary inputs improves both ID and OOD accuracy. In cropland prediction, we compare using location coordinates and vegetation data as auxiliary inputs with only using vegetation data. The model with locations achieves the best ID performance, improving almost 1% in-distribution over the baseline with only RGB. Without locations (only vegetation data), the ID accuracy is similar to the baseline but the OOD accuracy improves by 1.5%. In this problem, location coordinates help with in-distribution interpolation, but the model fails to extrapolate to new locations.
5.4 OOD UNLABELED DATA IS IMPORTANT FOR PRE-TRAINING
We compare the role of in-distribution vs. OOD unlabeled data in pre-training. Table 2 shows the results of using only in-distribution vs. only OOD vs. a balanced mix of unlabeled examples for pre-training on the Landcover dataset, where unlabeled sample size is standardized across the models (by reducing to the size of the smallest set, resulting in 4x less unlabeled data). Using only in-distribution unlabeled examples does not improve OOD accuracy, while having only OOD unlabeled examples does well both in-distribution and OOD since it also has access to the labeled in-distribution data. For the same experiment in cropland prediction, the differences were not statistically significant, perhaps due to the smaller geographic shift (across states in cropland vs. continents in landcover).
6 RELATED WORK
Multi-task learning and weak supervision. Caruana and de Sa (2003) proposed using noisy features (aux-outputs) as a multi-task output, but do not theoretically analyze this approach. Wu et al. (2020) also study multi-task linear regression. However, their auxiliary tasks must have true parameters that are closely aligned (small cosine distance) to the target task. Similarly, weak supervision works assume access to weak labels correlated with the true label (Ratner et al., 2016; 2017). In our paper,
we make no assumptions about the alignment of the auxiliary and target tasks beyond a shared latent variable while also considering distribution shifts.
Transfer learning, pre-training, and self-supervision. We support empirical works that show the success of transfer learning and pre-training in vision and NLP (Krizhevsky et al., 2012; Simonyan and Zisserman, 2015; Devlin et al., 2019). Theoretically, Du et al. (2020); Tripuraneni et al. (2020) study pre-training in a similar linear regression setup. They show in-distribution generalization bound improvements, but do not consider OOD robustness or combining with auxiliary inputs. Hendrycks et al. (2019b) shows empirically that self-supervision can improve robustness to synthetic corruptions. We support these results by showing theoretical and empirical robustness benefits for pre-training on auxiliary information, which can be derived from the original input as in self-supervision.
Self-training for robustness. Raghunathan et al. (2020) analyze robust self-training (RST) (Carmon et al., 2019; Najafi et al., 2019; Uesato et al., 2019), which improves the tradeoff between standard and adversarially robust accuracy, in min-norm linear regression. Khani and Liang (2021) show how to use RST to make a model robust against a predefined spurious feature without losing accuracy. While related, we work in multi-task linear regression, study pre-training, and prove robustness to arbitrary covariate shifts. Kumar et al. (2020) show that repeated self-training on gradually shifting unlabeled data can enable adaptation over time. In-N-Out is complementary and may provide better pseudolabels in each step of this method. Chen et al. (2020) show that self-training can remove spurious features for Gaussian input features in linear models, whereas our results hold for general input distributions (with density). Zoph et al. (2020) show that self-training and pre-training combine for in-distribution gains. We provide theory to support this and also show benefits for OOD robustness.
Domain adaptation. Domain adaptation works account for covariate shift by using unlabeled data from a target domain to adapt the model (Blitzer and Pereira, 2007; Daumé III, 2007; Shu et al., 2018; Hoffman et al., 2018; Ganin et al., 2016). Often, modern domain adaptation methods (Shu et al., 2018; Hoffman et al., 2018) have a self-training or entropy minimization component that benefits from having a better model in the target domain to begin with. Similarly, domain adversarial methods (Ganin et al., 2016) rely on the inductive bias of the source-only model to correctly align the source and target distributions. In-N-Out may provide a better starting point for these domain adaptation methods.
7 DISCUSSION
Using spurious features for robustness. Counterintuitively, In-N-Out uses potentially spurious features (the auxiliary information, which helps in-distribution but hurts OOD accuracy) to improve OOD robustness. This is in contrast to works on removing spurious features from the model (Arjovsky et al., 2019; Ilyas et al., 2019; Chen et al., 2020). In-N-Out promotes utilizing all available information by leveraging spurious features as useful in-distribution prediction signals rather than throwing them away.
General robustness with unlabeled data. In-N-Out is an instantiation of a widely applicable paradigm for robustness: collect unlabeled data in all parts of the input space and learn better representations from the unlabeled data before training on labeled data. This paradigm has driven large progress in few-shot generalization in vision (Hendrycks et al., 2019a;b) and NLP (Devlin et al., 2019; Brown et al., 2020). In-N-Out enriches this paradigm by proposing that some features of the collected data can be used as input and output simultaneously, which results in robustness to arbitrary distribution shifts.
Leveraging metadata and unused features in applications. Many applications have inputs indexed by metadata such as location coordinates or timestamps (Christie et al., 2018; Yeh et al., 2020; Ni et al., 2019). We can use such metadata to join (in a database sense) other auxiliary data sources on this metadata for use in In-N-Out. This auxiliary information may often be overlooked or discarded, but In-N-Out provides a way to incorporate it to improve both in- and out-of-distribution accuracy.
Division between input features and auxiliary information. While a standard division between inputs and auxiliary information may exist in some domains, In-N-Out applies for any division of the input. An important further question is how to automatically choose this division under distribution shifts.
8 CONCLUSION
We show that while auxiliary information as inputs improve in-distribution and OOD on standard curated datasets, they can hurt OOD in real-world datasets. In contrast, we show that using auxiliary information as outputs by pretraining improves OOD performance. In-N-Out combines the strengths of auxiliary inputs and outputs for further improvements both in- and out-of-distribution.
9 ACKNOWLEDGEMENTS
We thank Sherrie Wang and Andreas Schlueter for their help in procuring remote sensing data, Daniel Levy for his insight in simplifying the proof of Theorem 1, Albert Gu for a key insight in proving Lemma 3 using tools from random matrix theory, as well as Shyamal Buch, Pang Wei Koh, Shiori Sagawa, and anonymous reviewers for their valuable help and comments. This work was supported by an Open Philanthropy Project Award, an NSF Frontier Award as part of the Center for Trustworthy Machine Learning (CTML). SMX was supported by an NDSEG Fellowship. AK was supported by a Stanford Graduate Fellowship. TM was partially supported by the Google Faculty Award, JD.com, Stanford Data Science Initiative, and the Stanford Artificial Intelligence Laboratory.
10 REPRODUCIBILITY
All code, data, and experiments are on CodaLab at this link. | 1. What is the focus of the paper regarding remote sensing applications?
2. What are the three baselines/ablations presented by the authors?
3. What is the concern regarding Remark 1 in Section 4.1?
4. How does the reviewer assess the contribution, clarity, and empirical improvements of the paper?
5. Are there any limitations or areas for improvement in the paper? | Review | Review
This paper investigates how to use auxiliary information to improve classification performance when few labeled examples are available. As the introduction makes clear, this is an especially important problem area for remote sensing applications, where labels are scarce for many inputs (e.g. satellite photos from countries/regions without much annotation).
The authors present three intuitively plausible baselines/ablations, two of which use auxiliary information, and explain the benefits and downsides of each. For instance, regarding the aux-inputs baseline: "the relationship between the aux-inputs and the target can shift significantly OOD, worsening the OOD error". These claims are later supported with theory using linear models. The theory is presented nicely, improves understanding, and is believable.
Their proposed method is very similar to the aux-out baseline/ablation. The only difference is that they fine-tune on pseudo-labeled in-distribution examples in their method. Seeing as this baseline could also be thought of as an ablation, and taking into account the improved performance of the full In-N-Out method, it is not too worrying. However, I am concerned about Remark 1 in Section 4.1, which says "We train an aux-inputs model ĝ_in from w,z to y on finite labeled data—since the noise σ^2 = E[ε^2] is small this model is very accurate." If In-N-Out only improves performance when aux-in is a nearly perfect generator of pseudo-labels for in-distribution data, then doesn't this imply that aux-out would learn just as much from the GT labeled in-distribution examples? How are the pseudo-labels actually helping?
Pros:
Underexplored, important problem area
Good clarity of writing and paper structure, including theoretical sections
Instructive choice of baselines/ablations
Empirical improvements from the proposed method
Not just vision datasets; they use a time series dataset as well
Theory in the case of linear models to improve understanding
The related work seems appropriate
Cons:
The proposed method is very similar to one of the baselines/ablations, and I am not certain that there is a meaningful difference between them
There is no comparison to previously published work that uses auxiliary information for classification. Perhaps there are no suitable baselines from prior work, but it is not clear from the paper that this is the case.
======================================================
Update after rebuttal:
The authors have addressed my concerns. Contrary to my initial understanding, the paper builds off of prior work in a methodical way, and the pseudolabeling stage of In-N-Out makes more sense now. I have raised my score from 6 to 7. |
ICLR | Title
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
Abstract
Consider a prediction setting with few in-distribution labeled examples and many unlabeled examples both in- and out-of-distribution (OOD). The goal is to learn a model which performs well both in-distribution and OOD. In these settings, auxiliary information is often cheaply available for every input. How should we best leverage this auxiliary information for the prediction task? Empirically across three image and time-series datasets, and theoretically in a multi-task linear regression setting, we show that (i) using auxiliary information as input features improves in-distribution error but can hurt OOD error; but (ii) using auxiliary information as outputs of auxiliary pre-training tasks improves OOD error. To get the best of both worlds, we introduce In-N-Out, which first trains a model with auxiliary inputs and uses it to pseudolabel all the in-distribution inputs, then pre-trains a model on OOD auxiliary outputs and fine-tunes this model with the pseudolabels (self-training). We show both theoretically and empirically that In-N-Out outperforms auxiliary inputs or outputs alone on both in-distribution and OOD error.
1 INTRODUCTION
When models are tested on distributions that are different from the training distribution, they typically suffer large drops in performance (Blitzer and Pereira, 2007; Szegedy et al., 2014; Jia and Liang, 2017; AlBadawy et al., 2018; Hendrycks et al., 2019a). For example, in remote sensing, central tasks include predicting poverty, crop type, and land cover from satellite imagery for downstream humanitarian, policy, and environmental applications (Xie et al., 2016; Jean et al., 2016; Wang et al., 2020; Rußwurm et al., 2020). In some developing African countries, labels are scarce due to the lack of economic resources to deploy human workers to conduct expensive surveys (Jean et al., 2016). To make accurate predictions in these countries, we must extrapolate to out-of-distribution (OOD) examples across different geographic terrains and political borders.
We consider a semi-supervised setting with few in-distribution labeled examples and many unlabeled examples from both in- and out-of-distribution (e.g., global satellite imagery). While labels are scarce, auxiliary information is often cheaply available for every input and may provide some signal for the missing labels. Auxiliary information can come from additional data sources (e.g., climate data from other satellites) or derived from the original input (e.g., background or non-visible spectrum image channels). This auxiliary information is often discarded or not leveraged, and how to best use them is unclear. One way is to use them directly as input features (aux-inputs); another is to treat them as prediction outputs for an auxiliary task (aux-outputs) in pre-training. Which approach leads to better in-distribution or OOD performance?
Aux-inputs provide more features to potentially improve in-distribution performance, and one may hope that this also improves OOD performance. Indeed, previous results on standard datasets show that improvements in in-distribution accuracy correlate with improvements in OOD accuracy (Recht et al., 2019; Taori et al., 2020; Xie et al., 2020; Santurkar et al., 2020). However, in this paper we find that aux-inputs can introduce more spurious correlations with the labels: as a result, while aux-inputs often improve in-distribution accuracy, they can worsen OOD accuracy. We give examples of this trend on CelebA (Liu et al., 2015) and real-world satellite datasets in Sections 5.2 and 5.3.
Conversely, aux-output methods such as pre-training may improve OOD performance through auxiliary supervision (Caruana, 1997; Weiss et al., 2016; Hendrycks et al., 2019a). Hendrycks et al.
∗Equal contribution.
Figure 2: Graphical model for our theoretical setting: prediction task with input x, target y, and auxiliary information z, which is related to y through the latent variable w and latent noise u.
(2019a) show that pre-training on ImageNet can improve adversarial robustness, and Hendrycks et al. (2019b) show that auxiliary self-supervision tasks can improve robustness to synthetic corruptions. In this paper, we find that while aux-outputs improve OOD accuracy, the in-distribution accuracy is worse than with aux-inputs. Thus, we elucidate a tradeoff between in- and out-of-distribution accuracy that occurs when using auxiliary information as inputs or outputs.
To theoretically study how to best use auxiliary information, we extend the multi-task linear regression setting (Du et al., 2020; Tripuraneni et al., 2020) to allow for distribution shifts. We show that auxiliary information helps in-distribution error by providing useful features for predicting the target, but the relationship between the aux-inputs and the target can shift significantly OOD, worsening the OOD error. In contrast, the aux-outputs model first pre-trains on unlabeled data to learn a lower-dimensional representation and then solves the target task in the lower-dimensional space. We prove that the aux-outputs model improves robustness to arbitrary covariate shift compared to not using auxiliary information.
Can we do better than using auxiliary information as inputs or outputs alone? We answer affirmatively by proposing the In-N-Out algorithm to combine the benefits of auxiliary inputs and outputs (Figure 1). In-N-Out first uses an aux-inputs model, which has good in-distribution accuracy, to pseudolabel in-distribution unlabeled data. It then pre-trains a model using aux-outputs and finally fine-tunes this model on the larger training set consisting of labeled and pseudolabeled data. We prove that In-N-Out, which combines self-training and pre-training, further improves both in-distribution and OOD error over the aux-outputs model.
We show empirical results on CelebA and two remote sensing tasks (land cover and cropland prediction) that parallel the theory. On all datasets, In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over aux-inputs or aux-outputs alone and improves 1–2% in-distribution, 2–3% OOD over not using auxiliary information on remote sensing tasks. Ablations of In-N-Out show that In-N-Out achieves similar improvements over pre-training or self-training alone (up to 5% in-distribution, 1–2% OOD on remote sensing tasks). We also find that using OOD (rather than in-distribution) unlabeled examples for pre-training is crucial for OOD improvements.
2 SETUP
Let x∈Rd be the input (e.g., a satellite image), y ∈R be the target (e.g., crop type), and z ∈RT be the cheaply obtained auxiliary information either from additional sources (e.g., climate information) or derived from the original data (e.g., background).
Training data. Let $P_{\text{id}}$ and $P_{\text{ood}}$ denote the underlying distribution of $(x, y, z)$ triples in-distribution and out-of-distribution, respectively. The training data consists of (i) in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_{\text{id}}$, (ii) in-distribution unlabeled data $\{(x^{\text{id}}_i, z^{\text{id}}_i)\}_{i=1}^{m_{\text{id}}} \sim P_{\text{id}}$, and (iii) out-of-distribution unlabeled data $\{(x^{\text{ood}}_i, z^{\text{ood}}_i)\}_{i=1}^{m_{\text{ood}}} \sim P_{\text{ood}}$.
Goal and risk metrics. Our goal is to learn a model from input and auxiliary information to the target, $f : \mathbb{R}^d \times \mathbb{R}^T \to \mathbb{R}$. For a loss function $\ell$, the in-distribution population risk of the model $f$ is $R_{\text{id}}(f) = \mathbb{E}_{x,y,z \sim P_{\text{id}}}[\ell(f(x,z), y)]$, and its OOD population risk is $R_{\text{ood}}(f) = \mathbb{E}_{x,y,z \sim P_{\text{ood}}}[\ell(f(x,z), y)]$.
2.1 MODELS
We consider three common ways to use the auxiliary information (z) to learn a model.
Baseline. The baseline minimizes the empirical risk on labeled data while ignoring the auxiliary information (accomplished by setting z to 0):
$$\hat{f}_{\text{bs}} = \arg\min_{f} \; \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i, 0), y_i). \quad (1)$$
Aux-inputs. The aux-inputs model minimizes the empirical risk on labeled data while using the auxiliary information as features:
$$\hat{f}_{\text{in}} = \arg\min_{f} \; \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i, z_i), y_i). \quad (2)$$
Aux-outputs. The aux-outputs model leverages the auxiliary information z by using it as the prediction target of an auxiliary task, in hopes that there is a low-dimensional feature representation that is common to predicting both z and y. Training the aux-outputs model consists of two steps:
In the pre-training step, we use all the unlabeled data to learn a shared feature representation. Let $h : \mathbb{R}^d \to \mathbb{R}^k$ denote a feature map and $g_{\text{z-out}} : \mathbb{R}^k \to \mathbb{R}^T$ a mapping from the feature representation to the auxiliary outputs. Let $\ell_{\text{aux}}$ denote the loss function for the auxiliary information. We define the empirical risk of $h$ and $g_{\text{z-out}}$ as:
$$\hat{R}_{\text{pre}}(h, g_{\text{z-out}}) = \frac{1}{m_{\text{id}} + m_{\text{ood}}} \left( \sum_{i=1}^{m_{\text{id}}} \ell_{\text{aux}}\big(g_{\text{z-out}}(h(x^{\text{id}}_i)), z^{\text{id}}_i\big) + \sum_{i=1}^{m_{\text{ood}}} \ell_{\text{aux}}\big(g_{\text{z-out}}(h(x^{\text{ood}}_i)), z^{\text{ood}}_i\big) \right). \quad (3)$$
The estimate of the feature map is $\hat{h}_{\text{out}} = \arg\min_{h} \min_{g_{\text{z-out}}} \hat{R}_{\text{pre}}(h, g_{\text{z-out}})$.
In the transfer step, the model uses the pre-trained feature map $\hat{h}_{\text{out}}$ and the labeled data to learn the mapping $g_{\text{y-out}} : \mathbb{R}^k \to \mathbb{R}$ from the feature representation to the target $y$. We define the transfer empirical risk as:
$$\hat{R}_{\text{trans}}(\hat{h}_{\text{out}}, g_{\text{y-out}}) = \frac{1}{n} \sum_{i=1}^{n} \ell\big(g_{\text{y-out}}(\hat{h}_{\text{out}}(x_i)), y_i\big). \quad (4)$$
The estimate of the target mapping is $\hat{g}_{\text{y-out}} = \arg\min_{g_{\text{y-out}}} \hat{R}_{\text{trans}}(\hat{h}_{\text{out}}, g_{\text{y-out}})$. The final aux-outputs model is
$$\hat{f}_{\text{out}}(x, z) = \hat{g}_{\text{y-out}}(\hat{h}_{\text{out}}(x)). \quad (5)$$
Like the baseline model, the aux-outputs model ignores the auxiliary information for prediction.
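To make the three estimators concrete, here is a minimal sketch that instantiates them with linear models and ordinary least squares (anticipating the linear setting of Section 3). The array names, shapes, and the truncated-SVD shortcut used for the rank-constrained pre-training step are illustrative assumptions, not the implementation behind the experiments.

```python
import numpy as np

def lstsq(features, targets):
    """Ordinary least squares: coefficient matrix mapping features -> targets."""
    return np.linalg.lstsq(features, targets, rcond=None)[0]

# Assumed shapes: X (n, d) inputs, Z (n, T) auxiliary info, Y (n,) targets on labeled ID data;
# X_unl (m, d), Z_unl (m, T) pooled unlabeled data (ID + OOD).

def baseline(X, Y):
    # Eq. (1): ignore the auxiliary information entirely.
    return lstsq(X, Y)

def aux_inputs(X, Z, Y):
    # Eq. (2): concatenate z with x as extra input features.
    return lstsq(np.hstack([X, Z]), Y)

def aux_outputs(X, Y, X_unl, Z_unl, k):
    # Pre-training step (Eq. 3): fit x -> z on unlabeled data, then keep a rank-k
    # feature map. Truncating the SVD of the full least-squares solution is a simple
    # surrogate for the rank-constrained minimizer (exact only for whitened inputs).
    W = lstsq(X_unl, Z_unl)                                 # (d, T)
    h = np.linalg.svd(W, full_matrices=False)[0][:, :k]     # (d, k): feature map x -> x @ h
    # Transfer step (Eq. 4): fit y from the k-dimensional features on labeled data.
    g = lstsq(X @ h, Y)
    return h, g                                             # prediction (Eq. 5): (x @ h) @ g
```

Note that, as in Eq. (5), the aux-outputs predictor never consumes z at test time.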
3 THEORETICAL ANALYSIS OF AUX-INPUTS AND AUX-OUTPUTS MODELS
We now analyze the baseline, aux-inputs, and aux-outputs models introduced in Section 2. Our setup extends a linear regression setting commonly used for analyzing multi-task problems (Du et al., 2020; Tripuraneni et al., 2020).
Setup. See Figure 2 for the graphical model. Let $w = B^\star x \in \mathbb{R}^k$ be a low-dimensional latent feature ($k \leq d$) shared between the auxiliary information $z$ and the target $y$. Let $u \in \mathbb{R}^m$ denote unobserved latent variables not captured in $x$. We assume $z$ and $y$ are linear functions of $u$ and $w$:
$$y = \theta_w^\top w + \theta_u^\top u + \epsilon, \quad (6)$$
$$z = A^\star w + C^\star u, \quad (7)$$
where $\epsilon \sim P_\epsilon$ denotes noise with mean 0 and variance $\sigma^2$. As in Du et al. (2020), we assume the dimension of the auxiliary information $T$ is at least the feature dimension $k$, that is $T \geq k$, and that $A^\star$, $B^\star$, and $C^\star$ have full rank (rank $k$). We also assume $T \geq m$, where $m$ is the dimension of $u$.
Data. Let $P_x$ and $P_u$ denote the distributions of $x$ and $u$ in-distribution (ID), and let $P'_x$, $P'_u$ denote the distributions of $x$ and $u$ OOD. We assume $x$ and $u$ are independent, have distributions with bounded density everywhere, and have invertible covariance matrices. We assume the mean of $u$ is zero in- and out-of-distribution.¹ We assume we have $n \geq m + d$ in-distribution labeled training examples and unlimited access to unlabeled data both ID and OOD, a common assumption in unsupervised domain adaptation theory (Sugiyama et al., 2007; Kumar et al., 2020; Raghunathan et al., 2020).
Loss metrics. We use the squared loss for the target and auxiliary losses: $\ell(\hat{y}, y) = (y - \hat{y})^2$ and $\ell_{\text{aux}}(z, z') = \|z - z'\|_2^2$.
Models. We assume all model families (f , h, gz-out, gy-out) in Section 2 are linear.
Let $S = (A^\star, B^\star, C^\star, \theta_w, \theta_u, P_x, P_u)$ denote a problem setting which satisfies all the above assumptions.
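To make the assumptions concrete, the following is a small simulation sketch of one problem setting $S$ drawn from this graphical model; every dimension, parameter draw, and the particular way the OOD shift is induced (inflating the scale of the latent noise $u$) is an arbitrary illustrative choice, not something specified by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, T = 20, 3, 3, 6           # input, latent feature, latent noise, aux dims (k <= d, T >= k, T >= m)
B_star = rng.normal(size=(k, d))    # w = B* x
A_star = rng.normal(size=(T, k))
C_star = rng.normal(size=(T, m))
theta_w, theta_u = rng.normal(size=k), rng.normal(size=m)
sigma = 0.1                         # standard deviation of the noise eps

def sample(n, x_scale=1.0, u_scale=1.0):
    """Draw (x, y, z) triples; rescaling x or u emulates covariate shift OOD."""
    x = x_scale * rng.normal(size=(n, d))
    u = u_scale * rng.normal(size=(n, m))                        # zero mean ID and OOD
    w = x @ B_star.T                                             # latent features
    y = w @ theta_w + u @ theta_u + sigma * rng.normal(size=n)   # Eq. (6)
    z = w @ A_star.T + u @ C_star.T                              # Eq. (7)
    return x, y, z

x_id, y_id, z_id = sample(500)                    # in-distribution draw
x_ood, y_ood, z_ood = sample(500, u_scale=5.0)    # OOD draw where u (and hence z) shifts
```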
3.1 AUXILIARY INPUTS HELP IN-DISTRIBUTION, BUT CAN HURT OOD
We first show that the aux-inputs model (2) performs better than the baseline model (1) in-distribution. Intuitively, the target y depends on both the inputs x (throughw) and latent variable u (Figure 2). The baseline model only uses x to predict y; thus it cannot capture the variation in y due to u. On the other hand, the aux-inputs model uses x and z to predict y. Since z is a function of x (through w) and u, u can be recovered from x and z by inverting this relation. Note that u is unobserved but implicitly recovered. The aux-inputs model can then combine u and x to predict y better.
Let $\sigma_u^2 = \mathbb{E}_{u \sim P_u}[(\theta_u^\top u)^2]$ denote the (in-distribution) variance of $y$ due to the latent variables $u$. The following proposition shows that if $\sigma_u^2 > 0$, then with enough training examples the aux-inputs model has lower in-distribution population risk than the baseline model.²
Proposition 1. For all problem settings $S$, $P_\epsilon$, assuming regularity conditions (bounded $x$, $u$, sub-Gaussian noise $\epsilon$, and $T = m$), and $\sigma_u^2 > 0$: for all $\delta > 0$, there exists $N$ such that for $n \geq N$ training points, with probability at least $1 - \delta$ over the training examples, the aux-inputs model improves over the baseline:
$$R_{\text{id}}(\hat{f}_{\text{in}}) < R_{\text{id}}(\hat{f}_{\text{bs}}). \quad (8)$$
Although using $z$ as input leads to better in-distribution performance, we show that the aux-inputs model can perform worse than the baseline model OOD for any number of training examples. Intuitively, the aux-inputs model uses $z$, which can be unreliable OOD because $z$ depends on $u$ and $u$ can shift OOD. In more detail, the aux-inputs model learns to predict $\hat{y} = \hat{\theta}_{x,\text{in}}^\top x + \hat{\theta}_{z,\text{in}}^\top z$, where the true output is $y = \theta_x^\top x + \theta_z^\top z$, and $\hat{\theta}_{z,\text{in}}$ is an approximation to the true parameter $\theta_z$ that has some error. Out-of-distribution, $u$ and hence $z$ can have very high variance, which would magnify $(\hat{\theta}_{z,\text{in}} - \theta_z)^\top z$ and lead to bad predictions.
Example 1. There exists a problem setting $S$, $P_\epsilon$, such that for every $n$, there is some test distribution $P'_x$, $P'_u$ with:
$$\mathbb{E}[R_{\text{ood}}(\hat{f}_{\text{in}})] > \mathbb{E}[R_{\text{ood}}(\hat{f}_{\text{bs}})]. \quad (9)$$
3.2 PRE-TRAINING IMPROVES RISK UNDER ARBITRARY COVARIATE SHIFT
While using z as inputs (aux-inputs) can worsen performance relative to the baseline, our first main result is that the aux-outputs model (which pre-trains to predict z from x, and then transfers the learned representation to predict y from x) outperforms the baseline model for all test distributions.
Intuition. Referring to Figure 2, we see that the mapping from inputs $x$ to auxiliary information $z$ passes through the lower-dimensional features $w$. In the pre-training step, the aux-outputs model predicts $z$ from $x$ using a low-rank linear model, and we show that this recovers the 'bottleneck' features $w$ (up to symmetries; more formally, we recover the rowspace of $B^\star$). In the transfer step, the aux-outputs model learns a linear map from the lower-dimensional $w$ to $y$, while the baseline predicts $y$ directly from $x$. To warm up, without distribution shift, the expected excess risk depends only on the dimension of the input, and not the conditioning. That is, the expected excess risk in linear regression is exactly $d\sigma^2/n$, where $d$ is the input dimension, so the aux-outputs model trivially improves over the baseline since $\dim(w) < \dim(x)$. In contrast, the worst-case risk under distribution shift depends on the conditioning of the data, which could be worse for $w$ than for $x$. Our proof shows that the worst-case risk (over all $x$ and $u$) is still better for the aux-outputs model because projecting to the low-dimensional feature representation "zeroes out" some error directions.
¹ This is not limiting because bias in z can be folded into x.
² Since z is typically low-dimensional and x is high-dimensional (e.g., images), the aux-inputs model needs only a slightly larger number of examples before it outperforms the baseline.
Algorithm 1 In-N-Out
Require: in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_{\text{id}}$, in-distribution unlabeled data $\{(x^{\text{id}}_i, z^{\text{id}}_i)\}_{i=1}^{m_{\text{id}}} \sim P_{\text{id}}$, OOD unlabeled data $\{(x^{\text{ood}}_i, z^{\text{ood}}_i)\}_{i=1}^{m_{\text{ood}}} \sim P_{\text{ood}}$
1: Learn $\hat{f}_{\text{in}} : (x, z) \mapsto y$ from in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_{\text{id}}$
2: Pre-train $g_{\text{z-out}} \circ \hat{h}_{\text{out}} : x \mapsto z$ on aux-outputs from all unlabeled data $\{(x^{\text{id}}_i, z^{\text{id}}_i)\}_{i=1}^{m_{\text{id}}} \cup \{(x^{\text{ood}}_i, z^{\text{ood}}_i)\}_{i=1}^{m_{\text{ood}}}$
3: Return $\hat{f} = \hat{g} \circ \hat{h}_{\text{out}} : x \mapsto y$ trained on labeled and pseudolabeled data $\{(x_i, y_i)\}_{i=1}^{n} \cup \{(x^{\text{id}}_i, \hat{f}_{\text{in}}(x^{\text{id}}_i, z^{\text{id}}_i))\}_{i=1}^{m_{\text{id}}}$
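Putting the three steps together, a minimal end-to-end sketch of Algorithm 1 in the linear setting might look as follows. Least squares stands in for network training, the truncated-SVD step is again only a surrogate for rank-constrained pre-training, and fine-tuning is replaced by fitting $g$ from scratch, so this illustrates the control flow rather than the experimental pipeline.

```python
import numpy as np

def lstsq(F, t):
    return np.linalg.lstsq(F, t, rcond=None)[0]

def in_n_out(X, Z, Y, X_id, Z_id, X_ood, Z_ood, k):
    """Linear sketch of Algorithm 1. (X, Z, Y): labeled ID data;
    (X_id, Z_id): unlabeled ID data; (X_ood, Z_ood): unlabeled OOD data."""
    # Step 1: aux-inputs model f_in : (x, z) -> y on labeled in-distribution data.
    theta_in = lstsq(np.hstack([X, Z]), Y)

    # Step 2: pre-train a rank-k feature map h_out on x -> z using all unlabeled data.
    X_all, Z_all = np.vstack([X_id, X_ood]), np.vstack([Z_id, Z_ood])
    W = lstsq(X_all, Z_all)
    h_out = np.linalg.svd(W, full_matrices=False)[0][:, :k]

    # Step 3: pseudolabel the unlabeled ID data with f_in, then fit g on labeled + pseudolabeled data.
    y_pseudo = np.hstack([X_id, Z_id]) @ theta_in
    feats = np.vstack([X, X_id]) @ h_out
    g = lstsq(feats, np.concatenate([Y, y_pseudo]))
    return h_out, g    # final predictor x -> (x @ h_out) @ g, which never uses z at test time
```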
Theorem 1. For all problem settings $S$, noise distributions $P_\epsilon$, test distributions $P'_x$, $P'_u$, and $n \geq m + d$ training points:
$$\mathbb{E}[R_{\text{ood}}(\hat{f}_{\text{out}})] \leq \mathbb{E}[R_{\text{ood}}(\hat{f}_{\text{bs}})]. \quad (10)$$
See Appendix A for the proof.
4 IN-N-OUT: COMBINING AUXILIARY INPUTS AND OUTPUTS
We propose the In-N-Out algorithm, which combines both the aux-inputs and aux-outputs models for further complementary gains (Figure 1). As a reminder: (i) The aux-inputs model (x,z→y) is good in-distribution, but bad OOD because z can be misleading OOD. (ii) The aux-outputs model (x→y) is better than the baseline OOD, but worse than aux-inputs in-distribution because it doesn’t use z. (iii) We propose the In-N-Out model (x→y), which uses pseudolabels from aux-inputs (stronger model) in-distribution to transfer in-distribution accuracy to the aux-outputs model. The In-N-Out model does not use z to make predictions since z can be misleading / spurious OOD.
In more detail, we use the aux-inputs model (which is good in-distribution) to pseudolabel in-distribution unlabeled data. The pseudolabeled data provides more effective training samples (self-training) to fine-tune an aux-outputs model pre-trained on predicting auxiliary information from all unlabeled data. We present the general In-N-Out algorithm in Algorithm 1 and analyze it in the linear multi-task regression setting of Section 2. The In-N-Out model f̂ = ĝ ◦ ĥout optimizes the empirical risk on labeled and pseudolabeled data:
$$\hat{g} = \arg\min_{g} \; (1 - \lambda)\hat{R}_{\text{trans}}(\hat{h}_{\text{out}}, g) + \lambda \hat{R}_{\text{st}}(\hat{h}_{\text{out}}, \hat{f}_{\text{in}}, g), \quad (11)$$
where $\hat{R}_{\text{st}}(\hat{h}_{\text{out}}, \hat{f}_{\text{in}}, g) = \frac{1}{m_{\text{id}}} \sum_{i=1}^{m_{\text{id}}} \ell\big(g(\hat{h}_{\text{out}}(x^{\text{id}}_i)), \hat{f}_{\text{in}}(x^{\text{id}}_i, z^{\text{id}}_i)\big)$ is the loss of self-training on pseudolabels from the aux-inputs model, and $\lambda \in [0,1]$ is a hyperparameter that trades off between labeled and pseudolabeled losses. In our experiments, we fine-tune $\hat{g}$ and $\hat{h}_{\text{out}}$ together.
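For the fine-tuned deep models used in the experiments, one gradient step on the objective in Eq. (11) could look like the following PyTorch-style sketch; the module and batch names are placeholders, and the squared loss mirrors the regression theory (the classification experiments would swap in a cross-entropy analog).

```python
import torch.nn.functional as F

def finetune_step(h_out, g, batch_labeled, batch_pseudo, lam, optimizer):
    """One gradient step on (1 - lam) * R_trans + lam * R_st from Eq. (11);
    h_out and g are torch modules updated jointly."""
    x_lab, y_lab = batch_labeled      # labeled in-distribution examples
    x_unl, y_pl = batch_pseudo        # unlabeled ID inputs with pseudolabels from the aux-inputs model
    loss_trans = F.mse_loss(g(h_out(x_lab)).squeeze(-1), y_lab)   # labeled term, as in Eq. (4)
    loss_st = F.mse_loss(g(h_out(x_unl)).squeeze(-1), y_pl)       # self-training term on pseudolabels
    loss = (1.0 - lam) * loss_trans + lam * loss_st
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```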
Theoretical setup. Because fine-tuning is difficult to analyze theoretically, we analyze a slightly modified version of In-N-Out where we train an aux-inputs model to predict $y$ given the features $\hat{h}_{\text{out}}(x)$ and the auxiliary information $z$, so the aux-inputs model $\hat{g}_{\text{in}} : \mathbb{R}^k \times \mathbb{R}^T \to \mathbb{R}$ is given by $\hat{g}_{\text{in}} = \arg\min_g \frac{1}{n}\sum_{i=1}^{n} \ell(g(\hat{h}_{\text{out}}(x_i), z_i), y_i)$. The population self-training loss on pseudolabels from the aux-inputs model $\hat{g}_{\text{in}} \circ \hat{h}_{\text{out}}$ is $R_{\text{st}}(\hat{h}_{\text{out}}, \hat{g}_{\text{in}}, g) = \mathbb{E}_{x,z \sim P_{\text{id}}}[\ell(g(\hat{h}_{\text{out}}(x)), \hat{g}_{\text{in}}(\hat{h}_{\text{out}}(x), z))]$, and we minimize the self-training loss: $\hat{g} = \arg\min_g R_{\text{st}}(\hat{h}_{\text{out}}, \hat{g}_{\text{in}}, g)$. At test time, given input $x, z$, the In-N-Out model predicts $\hat{g}(\hat{h}_{\text{out}}(x))$. For the theory, we assume all models ($\hat{g}_{\text{in}}$, $\hat{g}$, and $\hat{h}_{\text{out}}$) are linear.
4.1 IN-N-OUT IMPROVES OVER PRE-TRAINING UNDER ARBITRARY COVARIATE SHIFT
We prove that In-N-Out helps on top of pre-training, as long as the auxiliary features give us information about $y$ relative to the noise in-distribution—that is, if $\sigma_u^2$ is much larger than $\sigma^2$.
To build intuition, first consider the special case where the noise $\sigma^2 = 0$ (equivalently, $\epsilon = 0$). Since $u$ can be recovered from $w$ and $z$, we can write $y$ as a linear function of $w$ and $z$: $y = \gamma_w^\top w + \gamma_z^\top z$. We train an aux-inputs model $\hat{g}_{\text{in}}$ from $w, z$ to $y$ on finite labeled data. Since there is no noise, $\hat{g}_{\text{in}}$ predicts $y$ perfectly from $w, z$ (we learn $\gamma_w$ and $\gamma_z$). We use $\hat{g}_{\text{in}}$ to pseudolabel a large amount of unlabeled data, and since $\hat{g}_{\text{in}}$ predicts $y$ perfectly from $w, z$, the pseudolabels are perfect. So here pseudolabeling gives us a much larger and correctly labeled dataset to train the In-N-Out model on.
The technical challenge is proving that self-training helps under arbitrary covariate shift even when the noise is non-zero (σ2 > 0), so the aux-inputs model ĝin that we learn is accurate but not perfect.
In this case, the pseudolabels have an error which propagates to the In-N-Out model self-trained on these pseudolabels, but we want to show that the error is lower than for the aux-outputs model. The error in linear regression is proportional to the noise of the target y, which for the aux-outputs model is σ2 +σ2u. We show that the In-N-Out model uses the aux-inputs model to reduce the dependence on the noise σ2u, because the aux-inputs model uses both w and z to predict y. The proof reduces to showing that the max singular value for the In-N-Out error matrix is less than the min-singular value of the aux-outputs error matrix with high probability. A core part of the argument is to lower bound the min-singular value of a random matrix (Lemma 3). This uses techniques from random matrix theory (see e.g., Chapter 2.7 in Tao (2012)); the high level idea is to show that with probability 1−δ each column of the random matrix has a (not too small) component orthogonal to all other columns.
Theorem 2. In the linear setting, for all problem settings $S$ with $\sigma_u^2 > 0$, test distributions $P'_x$, $P'_u$, $n \geq m + d$ training points, and $\delta > 0$, there exist $a, b > 0$ such that for all noise distributions $P_\epsilon$, with probability at least $1 - \delta$ over the training examples and test example $x' \sim P'_x$, the ratio of the excess risks (for all $\sigma^2$ small enough that $a - b\sigma^2 > 0$) is:
$$\frac{R^{\text{ood}}_{\text{in-out}} - R^*}{R^{\text{ood}}_{\text{out}} - R^*} \leq \frac{\sigma^2}{a - b\sigma^2}. \quad (12)$$
Here $R^* = \min_{g^*, h^*} \mathbb{E}_{x',y',z' \sim P'}[\ell(g^*(h^*(x')), y')]$ is the minimum possible (Bayes-optimal) OOD risk, $R^{\text{ood}}_{\text{in-out}} = \mathbb{E}_{y' \sim P'_{y'|x'}}[\ell(\hat{g}(\hat{h}_{\text{out}}(x')), y')]$ is the risk of the In-N-Out model on test example $x'$, and $R^{\text{ood}}_{\text{out}} = \mathbb{E}_{y' \sim P'_{y'|x'}}[\ell(\hat{g}_{\text{y-out}}(\hat{h}_{\text{out}}(x')), y')]$ is the risk of the aux-outputs model on test example $x'$. Note that $R^{\text{ood}}_{\text{in-out}}$ and $R^{\text{ood}}_{\text{out}}$ are random variables that depend on the test input $x'$ and the training set $X$.
Remark 1. As σ→ 0, the excess risk ratio of In-N-Out to Aux-outputs goes to 0, so the In-N-Out estimator is much better than the aux-outputs estimator.
The proof of the result is in Appendix A.
5 EXPERIMENTS
We show on real-world datasets for land cover and cropland prediction that aux-inputs can hurt OOD performance, while aux-outputs improve OOD performance. In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over other models on all datasets (Section 5.2). Secondly, we show that the tradeoff between in-distribution and OOD performance depends on the choice of auxiliary information on CelebA and cropland prediction (Section 5.3). Finally, we show that OOD unlabeled examples are important for improving OOD robustness (Section 5.4).
5.1 EXPERIMENTAL SETUP
We give a summary of considered datasets and setup here — see Figure 3 and Appendix B for details. Our datasets use auxiliary information both derived from the input (CelebA, Cropland) and from other sources (Landcover).
CelebA. In CelebA (Liu et al., 2015), the input x is a RGB image (resized to 64×64), the target y is a binary label for gender, and the auxiliary information z are 7 (of 40) binary-valued attributes derived from the input (e.g., presence of makeup, beard). We designate the set of images where the celebrity is wearing a hat as OOD. We use a ResNet18 as the backbone model architecture for all models (see Appendix B.1 for details).
Cropland. Crop type or cropland prediction is an important intermediate problem for crop yield prediction (Cai et al., 2018; Johnson et al., 2016; Kussul et al., 2017). The input x is a 50× 50 RGB image taken by a satellite, the target y is a binary label that is 1 when the image contains majority cropland, and the auxiliary information z is the center location coordinate plus 50× 50 vegetation-related bands. The vegetation bands in the auxiliary information z is derived from the original satellite image, which contains both RGB and other frequency bands. We use the Cropland dataset from Wang et al. (2020), with data from the US Midwest. We designate Iowa, Missouri, and Illinois as in-distribution and Indiana and Kentucky as OOD. Following Wang et al. (2020), we use a U-Net-based model (Ronneberger et al., 2015). See Appendix B.2 for details.
Landcover. Land cover prediction involves classifying the land cover type (e.g., “grasslands”) from satellite data at a location (Gislason et al., 2006; Rußwurm et al., 2020)). The input x is a time series measured by NASA’s MODIS satellite (Vermote, 2015), the target y is one of 6 land cover classes, and the auxiliary information z is climate data (e.g., temperature) from ERA5, a dataset computed from various satellites and weather station data (C3S, 2017). We designate non-African locations as in-distribution and Africa as OOD. We use a 1D-CNN to handle the temporal structure in the MODIS data. See Appendix B.3 for details.
Data splits. We first split off the OOD data, then split the rest into training, validation, and in-distribution test (see Appendix B for details). We use a portion of the training set and OOD set as in-distribution and OOD unlabeled data respectively. The rest of the OOD set is held out as test data. We run 5 trials, where we randomly re-generate the training/unlabeled split for each trial (keeping held-out splits fixed). We use a reduced number of labeled examples from each dataset (1%, 5%, 10% of labeled examples for CelebA, Cropland, and Landcover respectively), with the rest as unlabeled.
Repeated self-training. In our experiments, we also consider augmenting In-N-Out models with repeated self-training, which has fueled recent improvements in both domain adaptation and ImageNet classification (Shu et al., 2018; Xie et al., 2020). For one additional round of repeated self-training, we use the In-N-Out model to pseudolabel all unlabeled data (both ID and OOD) and also initialize the weights with the In-N-Out model. Each method is trained with early-stopping and hyperparameters are chosen using the validation set.
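A rough sketch of one such extra round is below; `fit` and `predict` are hypothetical placeholders for the training routine (with early stopping and warm-started weights) and the prediction function, neither of which is specified at this level of detail in the paper.

```python
import numpy as np

def repeated_self_training_round(model, X_lab, Y_lab, X_unl_all, fit, predict):
    """One extra round: pseudolabel all unlabeled data (ID and OOD) with the current
    In-N-Out model, then continue training warm-started from its weights."""
    y_pseudo = predict(model, X_unl_all)
    X_train = np.vstack([X_lab, X_unl_all])
    y_train = np.concatenate([Y_lab, y_pseudo])
    return fit(model, X_train, y_train)   # weights initialized from `model`
```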
5.2 MAIN RESULTS
Table 1 compares the in-distribution (ID) and OOD accuracy of different methods. In all datasets, pretraining with aux-outputs improves OOD performance over the baseline, and In-N-Out (with or without repeated ST) generally improves both in- and out-of-distribution performance over all other models.
CelebA. In CelebA, using auxiliary information either as aux-inputs or outputs improves both ID (2–4%) and OOD accuracy (5%). We hypothesize this is because the auxiliary information is quite robust. Figure 4 shows that there is a significant correlation (r=0.72) between ID and OOD accuracy for 100 different sets of aux-inputs, supporting results on standard datasets (Recht et al., 2019; Xie et al., 2020; Santurkar et al., 2020). In-N-Out achieves the best OOD performance and comparable ID performance even though there is no tradeoff between ID and OOD accuracy.
Remote sensing. In the remote sensing datasets, aux-inputs can induce a tradeoff where increasing ID accuracy hurts OOD performance. In cropland prediction, even with a small geographic shift (US Midwest), the baseline model has a significant drop from ID to OOD accuracy (4%). The aux-inputs model improves ID accuracy almost 1% above the baseline but OOD accuracy drops 6%. In land cover prediction, using climate information as aux-inputs decreases OOD accuracy by over 4% compared to the baseline. The aux-outputs model improves OOD, but decreases ID accuracy by 3% over the baseline.
Figure 5: In-distribution vs. OOD accuracy on CelebA when sequentially adding a random set of 15 auxiliary inputs one-by-one. Even if adding all 15 auxiliary inputs improves both in-distribution and OOD accuracy, some intermediate in-distribution gains can hurt OOD.
                       ID Test Acc     OOD Test Acc
Only in-distribution   69.73 ± 0.51    57.73 ± 1.58
Only OOD               69.92 ± 0.41    59.28 ± 1.01
Both                   70.07 ± 0.46    59.84 ± 0.98
Table 2: Ablation study on the use of in-distribution vs. OOD unlabeled data in pre-training models on Landcover, where the unlabeled sample size is standardized (much smaller than in Table 1). Using OOD unlabeled examples is important for gains in OOD accuracy (%). Results are shown with 90% error intervals over 5 trials.
Improving in-distribution accuracy over aux-outputs. One of the main goals of the self-training step in In-N-Out is to improve the in-distribution performance of the aux-outputs model. We compare to oracle models that use a large amount of in-distribution labeled data to compare the gains from In-N-Out. In Landcover, the oracle model which uses 160k labeled ID examples gets 80.5% accuracy. In-N-Out uses 16k labeled examples and 150k unlabeled ID examples (with 50k unlabeled OOD examples) and improves the ID accuracy of aux-output from 72.5% to 77.4%, closing most (62%) of the gap. In Cropland, the oracle model achieves 95.6% accuracy. Here, In-N-Out closes 80% of the gap between aux-outputs and the oracle, improving ID accuracy from 95.1% to 95.5%.
Ablations with only pre-training or self-training. We analyze the individual contributions of selftraining and pre-training in In-N-Out. On both cropland and land cover prediction, In-N-Out outperforms standard self-training on pseudolabels from the aux-inputs model (In-N-Out without pre-training), especially on OOD performance, where In-N-Out improves by about 1% and 2% respectively. Similarly, In-N-Out improves upon pre-training (aux-outputs model) both ID and OOD for both datasets.
5.3 CHOICE OF AUXILIARY INPUTS MATTERS
We find that the choice of auxiliary inputs affects the tradeoff between ID and OOD performance significantly, and thus is important to consider for problems with distribution shift. While Figure 4 shows that auxiliary inputs tend to simultaneously improve ID and OOD accuracy in CelebA, our theory suggests that in the worst case, there should be auxiliary inputs that worsen OOD accuracy. Indeed, Figure 5 shows that when taking a random set of 15 auxiliary inputs and adding them sequentially as auxiliary inputs, there are instances where an extra auxiliary input improves in-distribution but hurts OOD accuracy even if adding all 15 auxiliary inputs improves both ID and OOD accuracy. In cropland prediction, we compare using location coordinates and vegetation data as auxiliary inputs with only using vegetation data. The model with locations achieves the best ID performance, improving almost 1% in-distribution over the baseline with only RGB. Without locations (only vegetation data), the ID accuracy is similar to the baseline but the OOD accuracy improves by 1.5%. In this problem, location coordinates help with in-distribution interpolation, but the model fails to extrapolate to new locations.
5.4 OOD UNLABELED DATA IS IMPORTANT FOR PRE-TRAINING
We compare the role of in-distribution vs. OOD unlabeled data in pre-training. Table 2 shows the results of using only in-distribution vs. only OOD vs. a balanced mix of unlabeled examples for pre-training on the Landcover dataset, where unlabeled sample size is standardized across the models (by reducing to the size of the smallest set, resulting in 4x less unlabeled data). Using only in-distribution unlabeled examples does not improve OOD accuracy, while having only OOD unlabeled examples does well both in-distribution and OOD since it also has access to the labeled in-distribution data. For the same experiment in cropland prediction, the differences were not statistically significant, perhaps due to the smaller geographic shift (across states in cropland vs. continents in landcover).
6 RELATED WORK
Multi-task learning and weak supervision. Caruana and de Sa (2003) proposed using noisy features (aux-outputs) as a multi-task output, but do not theoretically analyze this approach. Wu et al. (2020) also study multi-task linear regression. However, their auxiliary tasks must have true parameters that are closely aligned (small cosine distance) to the target task. Similarly, weak supervision works assume access to weak labels correlated with the true label (Ratner et al., 2016; 2017). In our paper,
we make no assumptions about the alignment of the auxiliary and target tasks beyond a shared latent variable while also considering distribution shifts.
Transfer learning, pre-training, and self-supervision. We support empirical works that show the success of transfer learning and pre-training in vision and NLP (Krizhevsky et al., 2012; Simonyan and Zisserman, 2015; Devlin et al., 2019). Theoretically, Du et al. (2020); Tripuraneni et al. (2020) study pre-training in a similar linear regression setup. They show in-distribution generalization bound improvements, but do not consider OOD robustness or combining with auxiliary inputs. Hendrycks et al. (2019b) shows empirically that self-supervision can improve robustness to synthetic corruptions. We support these results by showing theoretical and empirical robustness benefits for pre-training on auxiliary information, which can be derived from the original input as in self-supervision.
Self-training for robustness. Raghunathan et al. (2020) analyze robust self-training (RST) (Carmon et al., 2019; Najafi et al., 2019; Uesato et al., 2019), which improves the tradeoff between standard and adversarially robust accuracy, in min-norm linear regression. Khani and Liang (2021) show how to use RST to make a model robust against a predefined spurious feature without losing accuracy. While related, we work in multi-task linear regression, study pre-training, and prove robustness to arbitrary covariate shifts. Kumar et al. (2020) show that repeated self-training on gradually shifting unlabeled data can enable adaptation over time. In-N-Out is complementary and may provide better pseudolabels in each step of this method. Chen et al. (2020) show that self-training can remove spurious features for Gaussian input features in linear models, whereas our results hold for general input distributions (with density). Zoph et al. (2020) show that self-training and pre-training combine for in-distribution gains. We provide theory to support this and also show benefits for OOD robustness.
Domain adaptation. Domain adaptation works account for covariate shift by using unlabeled data from a target domain to adapt the model (Blitzer and Pereira, 2007; Daumé III, 2007; Shu et al., 2018; Hoffman et al., 2018; Ganin et al., 2016). Often, modern domain adaptation methods (Shu et al., 2018; Hoffman et al., 2018) have a self-training or entropy minimization component that benefits from having a better model in the target domain to begin with. Similarly, domain adversarial methods (Ganin et al., 2016) rely on the inductive bias of the source-only model to correctly align the source and target distributions. In-N-Out may provide a better starting point for these domain adaptation methods.
7 DISCUSSION
Using spurious features for robustness. Counterintuitively, In-N-Out uses potentially spurious features (the auxiliary information, which helps in-distribution but hurts OOD accuracy) to improve OOD robustness. This is in contrast to works on removing spurious features from the model (Arjovsky et al., 2019; Ilyas et al., 2019; Chen et al., 2020). In-N-Out promotes utilizing all available information by leveraging spurious features as useful in-distribution prediction signals rather than throwing them away.
General robustness with unlabeled data. In-N-Out is an instantiation of a widely applicable paradigm for robustness: collect unlabeled data in all parts of the input space and learn better representations from the unlabeled data before training on labeled data. This paradigm has driven large progress in few-shot generalization in vision (Hendrycks et al., 2019a;b) and NLP (Devlin et al., 2019; Brown et al., 2020). In-N-Out enriches this paradigm by proposing that some features of the collected data can be used as input and output simultaneously, which results in robustness to arbitrary distribution shifts.
Leveraging metadata and unused features in applications. Many applications have inputs indexed by metadata such as location coordinates or timestamps (Christie et al., 2018; Yeh et al., 2020; Ni et al., 2019). We can use such metadata to join (in a database sense) other auxiliary data sources on this metadata for use in In-N-Out. This auxiliary information may often be overlooked or discarded, but In-N-Out provides a way to incorporate it to improve both in- and out-of-distribution accuracy.
Division between input features and auxiliary information. While a standard division between inputs and auxiliary information may exist in some domains, In-N-Out applies for any division of the input. An important further question is how to automatically choose this division under distribution shifts.
8 CONCLUSION
We show that while auxiliary information as inputs improve in-distribution and OOD on standard curated datasets, they can hurt OOD in real-world datasets. In contrast, we show that using auxiliary information as outputs by pretraining improves OOD performance. In-N-Out combines the strengths of auxiliary inputs and outputs for further improvements both in- and out-of-distribution.
9 ACKNOWLEDGEMENTS
We thank Sherrie Wang and Andreas Schlueter for their help in procuring remote sensing data, Daniel Levy for his insight in simplifying the proof of Theorem 1, Albert Gu for a key insight in proving Lemma 3 using tools from random matrix theory, as well as Shyamal Buch, Pang Wei Koh, Shiori Sagawa, and anonymous reviewers for their valuable help and comments. This work was supported by an Open Philanthropy Project Award, an NSF Frontier Award as part of the Center for Trustworthy Machine Learning (CTML). SMX was supported by an NDSEG Fellowship. AK was supported by a Stanford Graduate Fellowship. TM was partially supported by the Google Faculty Award, JD.com, Stanford Data Science Initiative, and the Stanford Artificial Intelligence Laboratory.
10 REPRODUCIBILITY
All code, data, and experiments are on CodaLab at this link. | 1. What is the main contribution of the paper regarding improving out-of-distribution model performance?
2. What are the strengths of the proposed method, particularly in its ability to generate pseudolabels and fine-tune pretrained models?
3. What are the weaknesses of the paper, especially in terms of experimental results and the difference between theoretical and experimental models?
4. Do you have any questions about the generality of the theoretical model and its applicability beyond the specific settings analyzed?
5. How does the reviewer assess the clarity and notation usage in the paper, particularly in sections 2.1 and 4? | Review | Review
This paper introduces a new method for leveraging auxiliary information and unlabelled data to improve out-of-distribution model performance. Theoretically, in a linear model with latent variables, they demonstrate using auxiliary data as inputs helps in-distribution test-error, but can hurt out-of-distribution error, while using auxiliary data to pretrain a "good" representation always improve out-of-distribution error. The proposed method uses the auxiliary data to learn an initial model, which generates psuedolabels to fine-tune the pretrained model.
Pros:
At a high level, this paper address a question of great interest to the ML community: out-of-distribution generalization.
The theoretical model shows, albeit in a potentially simple linear setting, that pretraining a low-dimensional shared representation generically improves out-of-distribution accuracy. I'm not intimately familiar with all of the papers in this area, but I think this emphasis (as opposed to transfer learning) is new. This may be of interest more broadly.
Through experiments and a concrete example, the paper demonstrates the potential danger of using auxiliary features as input to models evaluated out of distribution.
The paper reports experimental numbers on two real remote sensing datasets rather than solely evaluating on synthetic data.
Cons:
Experimental results: My primary complaint with the paper is that, for the tasks considered, In-N-Out does not appear to work much better than the pretraining aux-outputs baseline. For out-of-distribution accuracy, across all of the datasets, the effect sizes are very small and the confidence intervals overlap. For in-distribution accuracy, there's only a large difference for the Landcover dataset. This makes me uncertain about the generality of the method and the potential size of the effects, though it's possible there's a more nuanced story that I'm missing.
Clarity: I found the description of the method confusing after several reads (section 2.1 and section 4), and the model sections are very notation heavy without necessarily providing much clarity. The graphical model, however, was very enlightening.
Question:
Difference between the theoretical and experimental In-N-Out models: How come the experimental procedure differs from the one that is analyzed, e.g. fine-tuning on h_out(x)? Is the performance in practice worse? More difficult to implement? I don't mind the gap, but some explanation and, if available, associated experiments explaining this would be enlightening.
Generality of the theoretical model: How universal are the phenomenon captured by the linear model presented in Theorems 1 and 2? It's not obvious if the conclusions are "representative" or generalize beyond the ones explicitly analyzed.
============== Update after rebuttal:
Thank you for clarifying that aux-outputs is itself a contribution and not simply a baseline for comparison. I also appreciated the additional experiment showing examples where In-N-Out can outperform aux-outputs. I'm raising my score from a 6 to a 7 accordingly. |
ICLR | Title
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
Abstract
Consider a prediction setting with few in-distribution labeled examples and many unlabeled examples both in- and out-of-distribution (OOD). The goal is to learn a model which performs well both in-distribution and OOD. In these settings, auxiliary information is often cheaply available for every input. How should we best leverage this auxiliary information for the prediction task? Empirically across three image and time-series datasets, and theoretically in a multi-task linear regression setting, we show that (i) using auxiliary information as input features improves in-distribution error but can hurt OOD error; but (ii) using auxiliary information as outputs of auxiliary pre-training tasks improves OOD error. To get the best of both worlds, we introduce In-N-Out, which first trains a model with auxiliary inputs and uses it to pseudolabel all the in-distribution inputs, then pre-trains a model on OOD auxiliary outputs and fine-tunes this model with the pseudolabels (self-training). We show both theoretically and empirically that In-N-Out outperforms auxiliary inputs or outputs alone on both in-distribution and OOD error.
1 INTRODUCTION
When models are tested on distributions that are different from the training distribution, they typically suffer large drops in performance (Blitzer and Pereira, 2007; Szegedy et al., 2014; Jia and Liang, 2017; AlBadawy et al., 2018; Hendrycks et al., 2019a). For example, in remote sensing, central tasks include predicting poverty, crop type, and land cover from satellite imagery for downstream humanitarian, policy, and environmental applications (Xie et al., 2016; Jean et al., 2016; Wang et al., 2020; Rußwurm et al., 2020). In some developing African countries, labels are scarce due to the lack of economic resources to deploy human workers to conduct expensive surveys (Jean et al., 2016). To make accurate predictions in these countries, we must extrapolate to out-of-distribution (OOD) examples across different geographic terrains and political borders.
We consider a semi-supervised setting with few in-distribution labeled examples and many unlabeled examples from both in- and out-of-distribution (e.g., global satellite imagery). While labels are scarce, auxiliary information is often cheaply available for every input and may provide some signal for the missing labels. Auxiliary information can come from additional data sources (e.g., climate data from other satellites) or derived from the original input (e.g., background or non-visible spectrum image channels). This auxiliary information is often discarded or not leveraged, and how to best use them is unclear. One way is to use them directly as input features (aux-inputs); another is to treat them as prediction outputs for an auxiliary task (aux-outputs) in pre-training. Which approach leads to better in-distribution or OOD performance?
Aux-inputs provide more features to potentially improve in-distribution performance, and one may hope that this also improves OOD performance. Indeed, previous results on standard datasets show that improvements in in-distribution accuracy correlate with improvements in OOD accuracy (Recht et al., 2019; Taori et al., 2020; Xie et al., 2020; Santurkar et al., 2020). However, in this paper we find that aux-inputs can introduce more spurious correlations with the labels: as a result, while aux-inputs often improve in-distribution accuracy, they can worsen OOD accuracy. We give examples of this trend on CelebA (Liu et al., 2015) and real-world satellite datasets in Sections 5.2 and 5.3.
Conversely, aux-output methods such as pre-training may improve OOD performance through auxiliary supervision (Caruana, 1997; Weiss et al., 2016; Hendrycks et al., 2019a). Hendrycks et al.
∗Equal contribution.
Figure 2: Graphical model for our theoretical setting: prediction task with input x, target y, and auxiliary information z, which is related to y through the latent variable w and latent noise u.
(2019a) show that pre-training on ImageNet can improve adversarial robustness, and Hendrycks et al. (2019b) show that auxiliary self-supervision tasks can improve robustness to synthetic corruptions. In this paper, we find that while aux-outputs improve OOD accuracy, the in-distribution accuracy is worse than with aux-inputs. Thus, we elucidate a tradeoff between in- and out-of-distribution accuracy that occurs when using auxiliary information as inputs or outputs.
To theoretically study how to best use auxiliary information, we extend the multi-task linear regression setting (Du et al., 2020; Tripuraneni et al., 2020) to allow for distribution shifts. We show that auxiliary information helps in-distribution error by providing useful features for predicting the target, but the relationship between the aux-inputs and the target can shift significantly OOD, worsening the OOD error. In contrast, the aux-outputs model first pre-trains on unlabeled data to learn a lower-dimensional representation and then solves the target task in the lower-dimensional space. We prove that the aux-outputs model improves robustness to arbitrary covariate shift compared to not using auxiliary information.
Can we do better than using auxiliary information as inputs or outputs alone? We answer affirmatively by proposing the In-N-Out algorithm to combine the benefits of auxiliary inputs and outputs (Figure 1). In-N-Out first uses an aux-inputs model, which has good in-distribution accuracy, to pseudolabel in-distribution unlabeled data. It then pre-trains a model using aux-outputs and finally fine-tunes this model on the larger training set consisting of labeled and pseudolabeled data. We prove that In-N-Out, which combines self-training and pre-training, further improves both in-distribution and OOD error over the aux-outputs model.
We show empirical results on CelebA and two remote sensing tasks (land cover and cropland prediction) that parallel the theory. On all datasets, In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over aux-inputs or aux-outputs alone and improves 1–2% in-distribution, 2–3% OOD over not using auxiliary information on remote sensing tasks. Ablations of In-N-Out show that In-N-Out achieves similar improvements over pre-training or self-training alone (up to 5% in-distribution, 1–2% OOD on remote sensing tasks). We also find that using OOD (rather than in-distribution) unlabeled examples for pre-training is crucial for OOD improvements.
2 SETUP
Let x∈Rd be the input (e.g., a satellite image), y ∈R be the target (e.g., crop type), and z ∈RT be the cheaply obtained auxiliary information either from additional sources (e.g., climate information) or derived from the original data (e.g., background).
Training data. Let $P_{\text{id}}$ and $P_{\text{ood}}$ denote the underlying distribution of $(x, y, z)$ triples in-distribution and out-of-distribution, respectively. The training data consists of (i) in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_{\text{id}}$, (ii) in-distribution unlabeled data $\{(x^{\text{id}}_i, z^{\text{id}}_i)\}_{i=1}^{m_{\text{id}}} \sim P_{\text{id}}$, and (iii) out-of-distribution unlabeled data $\{(x^{\text{ood}}_i, z^{\text{ood}}_i)\}_{i=1}^{m_{\text{ood}}} \sim P_{\text{ood}}$.
Goal and risk metrics. Our goal is to learn a model from input and auxiliary information to the target, $f : \mathbb{R}^d \times \mathbb{R}^T \to \mathbb{R}$. For a loss function $\ell$, the in-distribution population risk of the model $f$ is $R_{\text{id}}(f) = \mathbb{E}_{x,y,z \sim P_{\text{id}}}[\ell(f(x,z), y)]$, and its OOD population risk is $R_{\text{ood}}(f) = \mathbb{E}_{x,y,z \sim P_{\text{ood}}}[\ell(f(x,z), y)]$.
2.1 MODELS
We consider three common ways to use the auxiliary information (z) to learn a model.
Baseline. The baseline minimizes the empirical risk on labeled data while ignoring the auxiliary information (accomplished by setting z to 0):
$$\hat{f}_{\text{bs}} = \arg\min_{f} \; \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i, 0), y_i). \quad (1)$$
Aux-inputs. The aux-inputs model minimizes the empirical risk on labeled data while using the auxiliary information as features:
$$\hat{f}_{\text{in}} = \arg\min_{f} \; \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i, z_i), y_i). \quad (2)$$
Aux-outputs. The aux-outputs model leverages the auxiliary information z by using it as the prediction target of an auxiliary task, in hopes that there is a low-dimensional feature representation that is common to predicting both z and y. Training the aux-outputs model consists of two steps:
In the pre-training step, we use all the unlabeled data to learn a shared feature representation. Let $h : \mathbb{R}^d \to \mathbb{R}^k$ denote a feature map and $g_{\text{z-out}} : \mathbb{R}^k \to \mathbb{R}^T$ a mapping from the feature representation to the auxiliary outputs. Let $\ell_{\text{aux}}$ denote the loss function for the auxiliary information. We define the empirical risk of $h$ and $g_{\text{z-out}}$ as:
$$\hat{R}_{\text{pre}}(h, g_{\text{z-out}}) = \frac{1}{m_{\text{id}} + m_{\text{ood}}} \left( \sum_{i=1}^{m_{\text{id}}} \ell_{\text{aux}}\big(g_{\text{z-out}}(h(x^{\text{id}}_i)), z^{\text{id}}_i\big) + \sum_{i=1}^{m_{\text{ood}}} \ell_{\text{aux}}\big(g_{\text{z-out}}(h(x^{\text{ood}}_i)), z^{\text{ood}}_i\big) \right). \quad (3)$$
The estimate of the feature map is $\hat{h}_{\text{out}} = \arg\min_{h} \min_{g_{\text{z-out}}} \hat{R}_{\text{pre}}(h, g_{\text{z-out}})$.
In the transfer step, the model uses the pre-trained feature map $\hat{h}_{\text{out}}$ and the labeled data to learn the mapping $g_{\text{y-out}} : \mathbb{R}^k \to \mathbb{R}$ from the feature representation to the target $y$. We define the transfer empirical risk as:
$$\hat{R}_{\text{trans}}(\hat{h}_{\text{out}}, g_{\text{y-out}}) = \frac{1}{n} \sum_{i=1}^{n} \ell\big(g_{\text{y-out}}(\hat{h}_{\text{out}}(x_i)), y_i\big). \quad (4)$$
The estimate of the target mapping is $\hat{g}_{\text{y-out}} = \arg\min_{g_{\text{y-out}}} \hat{R}_{\text{trans}}(\hat{h}_{\text{out}}, g_{\text{y-out}})$. The final aux-outputs model is
$$\hat{f}_{\text{out}}(x, z) = \hat{g}_{\text{y-out}}(\hat{h}_{\text{out}}(x)). \quad (5)$$
Like the baseline model, the aux-outputs model ignores the auxiliary information for prediction.
3 THEORETICAL ANALYSIS OF AUX-INPUTS AND AUX-OUTPUTS MODELS
We now analyze the baseline, aux-inputs, and aux-outputs models introduced in Section 2. Our setup extends a linear regression setting commonly used for analyzing multi-task problems (Du et al., 2020; Tripuraneni et al., 2020).
Setup. See Figure 2 for the graphical model. Let $w = B^\star x \in \mathbb{R}^k$ be a low-dimensional latent feature ($k \leq d$) shared between the auxiliary information $z$ and the target $y$. Let $u \in \mathbb{R}^m$ denote unobserved latent variables not captured in $x$. We assume $z$ and $y$ are linear functions of $u$ and $w$:
$$y = \theta_w^\top w + \theta_u^\top u + \epsilon, \quad (6)$$
$$z = A^\star w + C^\star u, \quad (7)$$
where $\epsilon \sim P_\epsilon$ denotes noise with mean 0 and variance $\sigma^2$. As in Du et al. (2020), we assume the dimension of the auxiliary information $T$ is at least the feature dimension $k$, that is $T \geq k$, and that $A^\star$, $B^\star$, and $C^\star$ have full rank (rank $k$). We also assume $T \geq m$, where $m$ is the dimension of $u$.
Data. Let $P_x$ and $P_u$ denote the distributions of $x$ and $u$ in-distribution (ID), and let $P'_x$, $P'_u$ denote the distributions of $x$ and $u$ OOD. We assume $x$ and $u$ are independent, have distributions with bounded density everywhere, and have invertible covariance matrices. We assume the mean of $u$ is zero in- and out-of-distribution.¹ We assume we have $n \geq m + d$ in-distribution labeled training examples and unlimited access to unlabeled data both ID and OOD, a common assumption in unsupervised domain adaptation theory (Sugiyama et al., 2007; Kumar et al., 2020; Raghunathan et al., 2020).
Loss metrics. We use the squared loss for the target and auxiliary losses: $\ell(\hat{y}, y) = (y - \hat{y})^2$ and $\ell_{\text{aux}}(z, z') = \|z - z'\|_2^2$.
Models. We assume all model families (f , h, gz-out, gy-out) in Section 2 are linear.
Let $S = (A^\star, B^\star, C^\star, \theta_w, \theta_u, P_x, P_u)$ denote a problem setting which satisfies all the above assumptions.
3.1 AUXILIARY INPUTS HELP IN-DISTRIBUTION, BUT CAN HURT OOD
We first show that the aux-inputs model (2) performs better than the baseline model (1) in-distribution. Intuitively, the target y depends on both the inputs x (throughw) and latent variable u (Figure 2). The baseline model only uses x to predict y; thus it cannot capture the variation in y due to u. On the other hand, the aux-inputs model uses x and z to predict y. Since z is a function of x (through w) and u, u can be recovered from x and z by inverting this relation. Note that u is unobserved but implicitly recovered. The aux-inputs model can then combine u and x to predict y better.
Let $\sigma_u^2 = \mathbb{E}_{u \sim P_u}[(\theta_u^\top u)^2]$ denote the (in-distribution) variance of $y$ due to the latent variables $u$. The following proposition shows that if $\sigma_u^2 > 0$, then with enough training examples the aux-inputs model has lower in-distribution population risk than the baseline model.²
Proposition 1. For all problem settings $S$, $P_\epsilon$, assuming regularity conditions (bounded $x$, $u$, sub-Gaussian noise $\epsilon$, and $T = m$), and $\sigma_u^2 > 0$: for all $\delta > 0$, there exists $N$ such that for $n \geq N$ training points, with probability at least $1 - \delta$ over the training examples, the aux-inputs model improves over the baseline:
$$R_{\text{id}}(\hat{f}_{\text{in}}) < R_{\text{id}}(\hat{f}_{\text{bs}}). \quad (8)$$
Although using $z$ as input leads to better in-distribution performance, we show that the aux-inputs model can perform worse than the baseline model OOD for any number of training examples. Intuitively, the aux-inputs model uses $z$, which can be unreliable OOD because $z$ depends on $u$ and $u$ can shift OOD. In more detail, the aux-inputs model learns to predict $\hat{y} = \hat{\theta}_{x,\text{in}}^\top x + \hat{\theta}_{z,\text{in}}^\top z$, where the true output is $y = \theta_x^\top x + \theta_z^\top z$, and $\hat{\theta}_{z,\text{in}}$ is an approximation to the true parameter $\theta_z$ that has some error. Out-of-distribution, $u$ and hence $z$ can have very high variance, which would magnify $(\hat{\theta}_{z,\text{in}} - \theta_z)^\top z$ and lead to bad predictions.
Example 1. There exists a problem setting $S$, $P_\epsilon$, such that for every $n$, there is some test distribution $P'_x$, $P'_u$ with:
$$\mathbb{E}[R_{\text{ood}}(\hat{f}_{\text{in}})] > \mathbb{E}[R_{\text{ood}}(\hat{f}_{\text{bs}})]. \quad (9)$$
3.2 PRE-TRAINING IMPROVES RISK UNDER ARBITRARY COVARIATE SHIFT
While using z as inputs (aux-inputs) can worsen performance relative to the baseline, our first main result is that the aux-outputs model (which pre-trains to predict z from x, and then transfers the learned representation to predict y from x) outperforms the baseline model for all test distributions.
Intuition. Referring to Figure 2, we see that the mapping from inputs $x$ to auxiliary information $z$ passes through the lower-dimensional features $w$. In the pre-training step, the aux-outputs model predicts $z$ from $x$ using a low-rank linear model, and we show that this recovers the 'bottleneck' features $w$ (up to symmetries; more formally, we recover the rowspace of $B^\star$). In the transfer step, the aux-outputs model learns a linear map from the lower-dimensional $w$ to $y$, while the baseline predicts $y$ directly from $x$. To warm up, without distribution shift, the expected excess risk depends only on the dimension of the input, and not the conditioning. That is, the expected excess risk in linear regression is exactly $d\sigma^2/n$, where $d$ is the input dimension, so the aux-outputs model trivially improves over the baseline since $\dim(w) < \dim(x)$. In contrast, the worst-case risk under distribution shift depends on the conditioning of the data, which could be worse for $w$ than for $x$. Our proof shows that the worst-case risk (over all $x$ and $u$) is still better for the aux-outputs model because projecting to the low-dimensional feature representation "zeroes out" some error directions.
¹ This is not limiting because bias in z can be folded into x.
² Since z is typically low-dimensional and x is high-dimensional (e.g., images), the aux-inputs model needs only a slightly larger number of examples before it outperforms the baseline.
Algorithm 1 In-N-Out
Require: in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_{\text{id}}$, in-distribution unlabeled data $\{(x^{\text{id}}_i, z^{\text{id}}_i)\}_{i=1}^{m_{\text{id}}} \sim P_{\text{id}}$, OOD unlabeled data $\{(x^{\text{ood}}_i, z^{\text{ood}}_i)\}_{i=1}^{m_{\text{ood}}} \sim P_{\text{ood}}$
1: Learn $\hat{f}_{\text{in}} : (x, z) \mapsto y$ from in-distribution labeled data $\{(x_i, y_i, z_i)\}_{i=1}^{n} \sim P_{\text{id}}$
2: Pre-train $g_{\text{z-out}} \circ \hat{h}_{\text{out}} : x \mapsto z$ on aux-outputs from all unlabeled data $\{(x^{\text{id}}_i, z^{\text{id}}_i)\}_{i=1}^{m_{\text{id}}} \cup \{(x^{\text{ood}}_i, z^{\text{ood}}_i)\}_{i=1}^{m_{\text{ood}}}$
3: Return $\hat{f} = \hat{g} \circ \hat{h}_{\text{out}} : x \mapsto y$ trained on labeled and pseudolabeled data $\{(x_i, y_i)\}_{i=1}^{n} \cup \{(x^{\text{id}}_i, \hat{f}_{\text{in}}(x^{\text{id}}_i, z^{\text{id}}_i))\}_{i=1}^{m_{\text{id}}}$
Theorem 1. For all problem settings $S$, noise distributions $P_\epsilon$, test distributions $P'_x$, $P'_u$, and $n \geq m + d$ training points:
$$\mathbb{E}[R_{\text{ood}}(\hat{f}_{\text{out}})] \leq \mathbb{E}[R_{\text{ood}}(\hat{f}_{\text{bs}})]. \quad (10)$$
See Appendix A for the proof.
4 IN-N-OUT: COMBINING AUXILIARY INPUTS AND OUTPUTS
We propose the In-N-Out algorithm, which combines both the aux-inputs and aux-outputs models for further complementary gains (Figure 1). As a reminder: (i) The aux-inputs model (x,z→y) is good in-distribution, but bad OOD because z can be misleading OOD. (ii) The aux-outputs model (x→y) is better than the baseline OOD, but worse than aux-inputs in-distribution because it doesn’t use z. (iii) We propose the In-N-Out model (x→y), which uses pseudolabels from aux-inputs (stronger model) in-distribution to transfer in-distribution accuracy to the aux-outputs model. The In-N-Out model does not use z to make predictions since z can be misleading / spurious OOD.
In more detail, we use the aux-inputs model (which is good in-distribution) to pseudolabel in-distribution unlabeled data. The pseudolabeled data provides more effective training samples (self-training) to fine-tune an aux-outputs model pre-trained on predicting auxiliary information from all unlabeled data. We present the general In-N-Out algorithm in Algorithm 1 and analyze it in the linear multi-task regression setting of Section 2. The In-N-Out model f̂ = ĝ ◦ ĥout optimizes the empirical risk on labeled and pseudolabeled data:
$$\hat{g} = \arg\min_{g} \; (1 - \lambda)\hat{R}_{\text{trans}}(\hat{h}_{\text{out}}, g) + \lambda \hat{R}_{\text{st}}(\hat{h}_{\text{out}}, \hat{f}_{\text{in}}, g), \quad (11)$$
where $\hat{R}_{\text{st}}(\hat{h}_{\text{out}}, \hat{f}_{\text{in}}, g) = \frac{1}{m_{\text{id}}} \sum_{i=1}^{m_{\text{id}}} \ell\big(g(\hat{h}_{\text{out}}(x^{\text{id}}_i)), \hat{f}_{\text{in}}(x^{\text{id}}_i, z^{\text{id}}_i)\big)$ is the loss of self-training on pseudolabels from the aux-inputs model, and $\lambda \in [0,1]$ is a hyperparameter that trades off between labeled and pseudolabeled losses. In our experiments, we fine-tune $\hat{g}$ and $\hat{h}_{\text{out}}$ together.
Theoretical setup. Because fine-tuning is difficult to analyze theoretically, we analyze a slightly modified version of In-N-Out where we train an aux-inputs model to predict $y$ given the features $\hat{h}_{\text{out}}(x)$ and the auxiliary information $z$, so the aux-inputs model $\hat{g}_{\text{in}} : \mathbb{R}^k \times \mathbb{R}^T \to \mathbb{R}$ is given by $\hat{g}_{\text{in}} = \arg\min_g \frac{1}{n}\sum_{i=1}^{n} \ell(g(\hat{h}_{\text{out}}(x_i), z_i), y_i)$. The population self-training loss on pseudolabels from the aux-inputs model $\hat{g}_{\text{in}} \circ \hat{h}_{\text{out}}$ is $R_{\text{st}}(\hat{h}_{\text{out}}, \hat{g}_{\text{in}}, g) = \mathbb{E}_{x,z \sim P_{\text{id}}}[\ell(g(\hat{h}_{\text{out}}(x)), \hat{g}_{\text{in}}(\hat{h}_{\text{out}}(x), z))]$, and we minimize the self-training loss: $\hat{g} = \arg\min_g R_{\text{st}}(\hat{h}_{\text{out}}, \hat{g}_{\text{in}}, g)$. At test time, given input $x, z$, the In-N-Out model predicts $\hat{g}(\hat{h}_{\text{out}}(x))$. For the theory, we assume all models ($\hat{g}_{\text{in}}$, $\hat{g}$, and $\hat{h}_{\text{out}}$) are linear.
4.1 IN-N-OUT IMPROVES OVER PRE-TRAINING UNDER ARBITRARY COVARIATE SHIFT
We prove that In-N-Out helps on top of pre-training, as long as the auxiliary features give us information about y relative to the noise in-distribution, that is, if $\sigma_u^2$ is much larger than $\sigma^2$.
To build intuition, first consider the special case where the noise $\sigma^2 = 0$ (equivalently, the noise term is zero). Since u can be recovered from w and z, we can write y as a linear function of w and z: $y = \gamma_w^\top w + \gamma_z^\top z$. We train an aux-inputs model $\hat g_{\text{in}}$ from w, z to y on finite labeled data. Since there is no noise, $\hat g_{\text{in}}$ predicts y perfectly from w, z (we learn $\gamma_w$ and $\gamma_z$). We use $\hat g_{\text{in}}$ to pseudolabel a large amount of unlabeled data, and since $\hat g_{\text{in}}$ predicts y perfectly from w, z, the pseudolabels are perfect. So here pseudolabeling gives us a much larger and correctly labeled dataset to train the In-N-Out model on.
The technical challenge is proving that self-training helps under arbitrary covariate shift even when the noise is non-zero ($\sigma^2 > 0$), so the aux-inputs model $\hat g_{\text{in}}$ that we learn is accurate but not perfect.
In this case, the pseudolabels have an error which propagates to the In-N-Out model self-trained on these pseudolabels, but we want to show that the error is lower than for the aux-outputs model. The error in linear regression is proportional to the noise of the target y, which for the aux-outputs model is $\sigma^2 + \sigma_u^2$. We show that the In-N-Out model uses the aux-inputs model to reduce the dependence on the noise $\sigma_u^2$, because the aux-inputs model uses both w and z to predict y. The proof reduces to showing that the max singular value of the In-N-Out error matrix is less than the min singular value of the aux-outputs error matrix with high probability. A core part of the argument is to lower bound the min singular value of a random matrix (Lemma 3). This uses techniques from random matrix theory (see e.g., Chapter 2.7 in Tao (2012)); the high-level idea is to show that with probability $1-\delta$ each column of the random matrix has a (not too small) component orthogonal to all other columns.
Theorem 2. In the linear setting, for all problem settings $\mathcal{S}$ with $\sigma_u^2 > 0$, test distributions $P'_x$, $P'_u$, $n \ge m + d$ training points, and $\delta > 0$, there exist $a, b > 0$ such that for all noise distributions P, with probability at least $1-\delta$ over the training examples and test example $x' \sim P'_x$, the ratio of the excess risks (for all $\sigma^2$ small enough that $a - b\sigma^2 > 0$) is:
$$\frac{R^{\text{ood}}_{\text{in-out}} - R^*}{R^{\text{ood}}_{\text{out}} - R^*} \le \frac{\sigma^2}{a - b\sigma^2} \tag{12}$$
Here $R^* = \min_{g^*, h^*} \mathbb{E}_{x',y',z' \sim P'}[\ell(g^*(h^*(x')), y')]$ is the minimum possible (Bayes-optimal) OOD risk, $R^{\text{ood}}_{\text{in-out}} = \mathbb{E}_{y' \sim P'_{y'|x'}}[\ell(\hat g(\hat h_{\text{out}}(x')), y')]$ is the risk of the In-N-Out model on test example x', and $R^{\text{ood}}_{\text{out}} = \mathbb{E}_{y' \sim P'_{y'|x'}}[\ell(\hat g_{\text{y-out}}(\hat h_{\text{out}}(x')), y')]$ is the risk of the aux-outputs model on test example x'. Note that $R^{\text{ood}}_{\text{in-out}}$ and $R^{\text{ood}}_{\text{out}}$ are random variables that depend on the test input x' and the training set X.
Remark 1. As σ→ 0, the excess risk ratio of In-N-Out to Aux-outputs goes to 0, so the In-N-Out estimator is much better than the aux-outputs estimator.
The proof of the result is in Appendix A.
5 EXPERIMENTS
We show on real-world datasets for land cover and cropland prediction that aux-inputs can hurt OOD performance, while aux-outputs improve OOD performance. In-N-Out improves OOD accuracy and has competitive or better in-distribution accuracy over other models on all datasets (Section 5.2). Secondly, we show that the tradeoff between in-distribution and OOD performance depends on the choice of auxiliary information on CelebA and cropland prediction (Section 5.3). Finally, we show that OOD unlabeled examples are important for improving OOD robustness (Section 5.4).
5.1 EXPERIMENTAL SETUP
We give a summary of considered datasets and setup here — see Figure 3 and Appendix B for details. Our datasets use auxiliary information both derived from the input (CelebA, Cropland) and from other sources (Landcover).
CelebA. In CelebA (Liu et al., 2015), the input x is an RGB image (resized to 64×64), the target y is a binary label for gender, and the auxiliary information z consists of 7 (of 40) binary-valued attributes derived from the input (e.g., presence of makeup, beard). We designate the set of images where the celebrity is wearing a hat as OOD. We use a ResNet18 as the backbone model architecture for all models (see Appendix B.1 for details).
Cropland. Crop type or cropland prediction is an important intermediate problem for crop yield prediction (Cai et al., 2018; Johnson et al., 2016; Kussul et al., 2017). The input x is a 50×50 RGB image taken by a satellite, the target y is a binary label that is 1 when the image contains majority cropland, and the auxiliary information z is the center location coordinate plus 50×50 vegetation-related bands. The vegetation bands in the auxiliary information z are derived from the original satellite image, which contains both RGB and other frequency bands. We use the Cropland dataset from Wang et al. (2020), with data from the US Midwest. We designate Iowa, Missouri, and Illinois as in-distribution and Indiana and Kentucky as OOD. Following Wang et al. (2020), we use a U-Net-based model (Ronneberger et al., 2015). See Appendix B.2 for details.
Landcover. Land cover prediction involves classifying the land cover type (e.g., "grasslands") from satellite data at a location (Gislason et al., 2006; Rußwurm et al., 2020). The input x is a time series measured by NASA's MODIS satellite (Vermote, 2015), the target y is one of 6 land cover classes, and the auxiliary information z is climate data (e.g., temperature) from ERA5, a dataset computed from various satellites and weather station data (C3S, 2017). We designate non-African locations as in-distribution and Africa as OOD. We use a 1D-CNN to handle the temporal structure in the MODIS data. See Appendix B.3 for details.
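The exact Landcover architecture is described in Appendix B.3 and is not reproduced here; purely as an illustration of the kind of model meant by a 1D-CNN over the MODIS time series, a generic sketch is shown below, with channel counts and kernel sizes chosen arbitrarily.

```python
# Generic 1D-CNN over a multivariate time series, ending in a 6-way classifier.
# Layer sizes are illustrative guesses, not the configuration used in the paper.
import torch.nn as nn

class LandcoverCNN(nn.Module):
    def __init__(self, in_channels, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over the time dimension
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)
```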
Data splits. We first split off the OOD data, then split the rest into training, validation, and in-distribution test (see Appendix B for details). We use a portion of the training set and OOD set as in-distribution and OOD unlabeled data respectively. The rest of the OOD set is held out as test data. We run 5 trials, where we randomly re-generate the training/unlabeled split for each trial (keeping held-out splits fixed). We use a reduced number of labeled examples from each dataset (1%, 5%, 10% of labeled examples for CelebA, Cropland, and Landcover respectively), with the rest as unlabeled.
Repeated self-training. In our experiments, we also consider augmenting In-N-Out models with repeated self-training, which has fueled recent improvements in both domain adaptation and ImageNet classification (Shu et al., 2018; Xie et al., 2020). For one additional round of repeated self-training, we use the In-N-Out model to pseudolabel all unlabeled data (both ID and OOD) and also initialize the weights with the In-N-Out model. Each method is trained with early-stopping and hyperparameters are chosen using the validation set.
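One extra round of repeated self-training can be sketched as follows; the train function and the model objects are hypothetical placeholders for the dataset-specific training loops used in our experiments.

```python
# One round of repeated self-training on top of a trained In-N-Out model:
# pseudolabel ALL unlabeled data (ID and OOD), then continue training from
# the current In-N-Out weights on the labeled + pseudolabeled pool.
def repeated_self_training_round(in_n_out_model, labeled_data, unlabeled_inputs, train):
    pseudolabeled = [(x, in_n_out_model(x)) for x in unlabeled_inputs]
    augmented = list(labeled_data) + pseudolabeled
    return train(init_model=in_n_out_model, data=augmented)
```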
5.2 MAIN RESULTS
Table 1 compares the in-distribution (ID) and OOD accuracy of different methods. In all datasets, pretraining with aux-outputs improves OOD performance over the baseline, and In-N-Out (with or without repeated ST) generally improves both in- and out-of-distribution performance over all other models.
CelebA. In CelebA, using auxiliary information either as aux-inputs or outputs improves both ID (2–4%) and OOD accuracy (5%). We hypothesize this is because the auxiliary information is quite robust. Figure 4 shows that there is a significant correlation (r=0.72) between ID and OOD accuracy for 100 different sets of aux-inputs, supporting results on standard datasets (Recht et al., 2019; Xie et al., 2020; Santurkar et al., 2020). In-N-Out achieves the best OOD performance and comparable ID performance even though there is no tradeoff between ID and OOD accuracy.
Remote sensing. In the remote sensing datasets, aux-inputs can induce a tradeoff where increasing ID accuracy hurts OOD performance. In cropland prediction, even with a small geographic shift (US Midwest), the baseline model has a significant drop from ID to OOD accuracy (4%). The aux-inputs model improves ID accuracy almost 1% above the baseline but OOD accuracy drops 6%. In land cover prediction, using climate information as aux-inputs decreases OOD accuracy by over 4% compared to the baseline. The aux-outputs model improves OOD, but decreases ID accuracy by 3% over the baseline.
[Figure 5 plot omitted: in-distribution accuracy (x-axis, 90.0 to 92.5) vs. OOD accuracy (y-axis, 74 to 78) on CelebA.]
Figure 5: In-distribution vs. OOD accuracy on CelebA when sequentially adding a random set of 15 auxiliary inputs one-by-one. Even if adding all 15 auxiliary inputs improves both in-distribution and OOD accuracy, some intermediate in-distribution gains can hurt OOD.
                       ID Test Acc      OOD Test Acc
Only in-distribution   69.73 ± 0.51     57.73 ± 1.58
Only OOD               69.92 ± 0.41     59.28 ± 1.01
Both                   70.07 ± 0.46     59.84 ± 0.98

Table 2: Ablation study on the use of in-distribution vs. OOD unlabeled data in pre-training models on Landcover, where the unlabeled sample size is standardized (much smaller than in Table 1). Using OOD unlabeled examples is important for gains in OOD accuracy (%). Results are shown with 90% error intervals over 5 trials.
Improving in-distribution accuracy over aux-outputs. One of the main goals of the self-training step in In-N-Out is to improve the in-distribution performance of the aux-outputs model. We compare to oracle models that use a large amount of in-distribution labeled data to compare the gains from In-N-Out. In Landcover, the oracle model which uses 160k labeled ID examples gets 80.5% accuracy. In-N-Out uses 16k labeled examples and 150k unlabeled ID examples (with 50k unlabeled OOD examples) and improves the ID accuracy of aux-output from 72.5% to 77.4%, closing most (62%) of the gap. In Cropland, the oracle model achieves 95.6% accuracy. Here, In-N-Out closes 80% of the gap between aux-outputs and the oracle, improving ID accuracy from 95.1% to 95.5%.
Ablations with only pre-training or self-training. We analyze the individual contributions of self-training and pre-training in In-N-Out. On both cropland and land cover prediction, In-N-Out outperforms standard self-training on pseudolabels from the aux-inputs model (In-N-Out without pre-training), especially on OOD performance, where In-N-Out improves by about 1% and 2% respectively. Similarly, In-N-Out improves upon pre-training (aux-outputs model) both ID and OOD for both datasets.
5.3 CHOICE OF AUXILIARY INPUTS MATTERS
We find that the choice of auxiliary inputs affects the tradeoff between ID and OOD performance significantly, and thus is important to consider for problems with distribution shift. While Figure 4 shows that auxiliary inputs tend to simultaneously improve ID and OOD accuracy in CelebA, our theory suggests that in the worst case, there should be auxiliary inputs that worsen OOD accuracy. Indeed, Figure 5 shows that when taking a random set of 15 auxiliary inputs and adding them sequentially as auxiliary inputs, there are instances where an extra auxiliary input improves in-distribution but hurts OOD accuracy even if adding all 15 auxiliary inputs improves both ID and OOD accuracy. In cropland prediction, we compare using location coordinates and vegetation data as auxiliary inputs with only using vegetation data. The model with locations achieves the best ID performance, improving almost 1% in-distribution over the baseline with only RGB. Without locations (only vegetation data), the ID accuracy is similar to the baseline but the OOD accuracy improves by 1.5%. In this problem, location coordinates help with in-distribution interpolation, but the model fails to extrapolate to new locations.
5.4 OOD UNLABELED DATA IS IMPORTANT FOR PRE-TRAINING
We compare the role of in-distribution vs. OOD unlabeled data in pre-training. Table 2 shows the results of using only in-distribution vs. only OOD vs. a balanced mix of unlabeled examples for pre-training on the Landcover dataset, where unlabeled sample size is standardized across the models (by reducing to the size of the smallest set, resulting in 4x less unlabeled data). Using only in-distribution unlabeled examples does not improve OOD accuracy, while having only OOD unlabeled examples does well both in-distribution and OOD since it also has access to the labeled in-distribution data. For the same experiment in cropland prediction, the differences were not statistically significant, perhaps due to the smaller geographic shift (across states in cropland vs. continents in landcover).
6 RELATED WORK
Multi-task learning and weak supervision. Caruana and de Sa (2003) proposed using noisy features (aux-outputs) as a multi-task output, but do not theoretically analyze this approach. Wu et al. (2020) also study multi-task linear regression. However, their auxiliary tasks must have true parameters that are closely aligned (small cosine distance) to the target task. Similarly, weak supervision works assume access to weak labels correlated with the true label (Ratner et al., 2016; 2017). In our paper,
we make no assumptions about the alignment of the auxiliary and target tasks beyond a shared latent variable while also considering distribution shifts.
Transfer learning, pre-training, and self-supervision. We support empirical works that show the success of transfer learning and pre-training in vision and NLP (Krizhevsky et al., 2012; Simonyan and Zisserman, 2015; Devlin et al., 2019). Theoretically, Du et al. (2020); Tripuraneni et al. (2020) study pre-training in a similar linear regression setup. They show in-distribution generalization bound improvements, but do not consider OOD robustness or combining with auxiliary inputs. Hendrycks et al. (2019b) shows empirically that self-supervision can improve robustness to synthetic corruptions. We support these results by showing theoretical and empirical robustness benefits for pre-training on auxiliary information, which can be derived from the original input as in self-supervision.
Self-training for robustness. Raghunathan et al. (2020) analyze robust self-training (RST) (Carmon et al., 2019; Najafi et al., 2019; Uesato et al., 2019), which improves the tradeoff between standard and adversarially robust accuracy, in min-norm linear regression. Khani and Liang (2021) show how to use RST to make a model robust against a predefined spurious feature without losing accuracy. While related, we work in multi-task linear regression, study pre-training, and prove robustness to arbitrary covariate shifts. Kumar et al. (2020) show that repeated self-training on gradually shifting unlabeled data can enable adaptation over time. In-N-Out is complementary and may provide better pseudolabels in each step of this method. Chen et al. (2020) show that self-training can remove spurious features for Gaussian input features in linear models, whereas our results hold for general input distributions (with density). Zoph et al. (2020) show that self-training and pre-training combine for in-distribution gains. We provide theory to support this and also show benefits for OOD robustness.
Domain adaptation. Domain adaptation works account for covariate shift by using unlabeled data from a target domain to adapt the model (Blitzer and Pereira, 2007; Daumé III, 2007; Shu et al., 2018; Hoffman et al., 2018; Ganin et al., 2016). Often, modern domain adaptation methods (Shu et al., 2018; Hoffman et al., 2018) have a self-training or entropy minimization component that benefits from having a better model in the target domain to begin with. Similarly, domain adversarial methods (Ganin et al., 2016) rely on the inductive bias of the source-only model to correctly align the source and target distributions. In-N-Out may provide a better starting point for these domain adaptation methods.
7 DISCUSSION
Using spurious features for robustness. Counterintuitively, In-N-Out uses potentially spurious features (the auxiliary information, which helps in-distribution but hurts OOD accuracy) to improve OOD robustness. This is in contrast to works on removing spurious features from the model (Arjovsky et al., 2019; Ilyas et al., 2019; Chen et al., 2020). In-N-Out promotes utilizing all available information by leveraging spurious features as useful in-distribution prediction signals rather than throwing them away.
General robustness with unlabeled data. In-N-Out is an instantiation of a widely applicable paradigm for robustness: collect unlabeled data in all parts of the input space and learn better representations from the unlabeled data before training on labeled data. This paradigm has driven large progress in few-shot generalization in vision (Hendrycks et al., 2019a;b) and NLP (Devlin et al., 2019; Brown et al., 2020). In-N-Out enriches this paradigm by proposing that some features of the collected data can be used as input and output simultaneously, which results in robustness to arbitrary distribution shifts.
Leveraging metadata and unused features in applications. Many applications have inputs indexed by metadata such as location coordinates or timestamps (Christie et al., 2018; Yeh et al., 2020; Ni et al., 2019). We can use such metadata to join (in a database sense) other auxiliary data sources on this metadata for use in In-N-Out. This auxiliary information may often be overlooked or discarded, but In-N-Out provides a way to incorporate it to improve both in- and out-of-distribution accuracy.
Division between input features and auxiliary information. While a standard division between inputs and auxiliary information may exist in some domains, In-N-Out applies for any division of the input. An important further question is how to automatically choose this division under distribution shifts.
8 CONCLUSION
We show that while auxiliary information used as inputs improves in-distribution and OOD performance on standard curated datasets, it can hurt OOD performance on real-world datasets. In contrast, we show that using auxiliary information as outputs via pre-training improves OOD performance. In-N-Out combines the strengths of auxiliary inputs and outputs for further improvements both in- and out-of-distribution.
9 ACKNOWLEDGEMENTS
We thank Sherrie Wang and Andreas Schlueter for their help in procuring remote sensing data, Daniel Levy for his insight in simplifying the proof of Theorem 1, Albert Gu for a key insight in proving Lemma 3 using tools from random matrix theory, as well as Shyamal Buch, Pang Wei Koh, Shiori Sagawa, and anonymous reviewers for their valuable help and comments. This work was supported by an Open Philanthropy Project Award, an NSF Frontier Award as part of the Center for Trustworthy Machine Learning (CTML). SMX was supported by an NDSEG Fellowship. AK was supported by a Stanford Graduate Fellowship. TM was partially supported by the Google Faculty Award, JD.com, Stanford Data Science Initiative, and the Stanford Artificial Intelligence Laboratory.
10 REPRODUCIBILITY
All code, data, and experiments are on CodaLab at this link. | 1. What is the main contribution of the paper regarding combining two approaches using auxiliary information for out-of-distribution samples?
2. What are the strengths and weaknesses of the proposed method, particularly in its theoretical analysis and empirical validations?
3. Are there any concerns or ambiguities regarding the description of the problem setting, approach, and intuition?
4. How does the reviewer assess the originality and novelty of the proposed method compared to prior works on unsupervised domain adaptation?
5. What are some minor comments and corrections regarding the proof of Proposition 1, the explanation of the aux-outputs model, and the notation used in the equations? | Review | Review
Theoretical and empirical study on how to combine two approaches using auxiliary information for out-of-distribution samples
Quality:
Pros. The authors propose the method to combine two representative methods (Aux-Inputs and Aux-Outputs) to exploit auxiliary information which is usually available in real-world scenarios. Beginning with theoretical analysis on Aux-Inputs and Aux-Outputs models, they show that the proposed method, In-N-Out is effective in minimizing the risk under a distributional change in a linear regression setting. Empirical validations consistently show the preference of the proposed method highlighting the improvement in out-of-distribution (OOD) samples.
Clarity:
Pros. The description of the problem setting, approach, and intuition is well-written and persuasive based on their observations in the example in the Introduction. The intuitions for the theoretical findings are nicely addressed. Cons. Some missing definitions (e.g., z in Alg. 1 as the output of \hat{h}_{out}) and a lack of rigor in the text make it difficult to follow (e.g., the function \hat{f}_{in} takes two inputs, x and z, AND takes one input, x^{id}_i, in the 2nd and 4th lines in Alg. 1). Figure 1 has ambiguities regarding which portion of the model is transferred and how the change in the number of inputs is handled (presumably zero-filling, as in the baseline in Sec. 2).
Originality:
Cons. Aside from their theoretical analysis, the combination of Aux-Inputs and Aux-Outputs is a simple model exploiting pseudo-labels (Xie et al., 2018), used for unsupervised domain adaptation tasks. In terms of novelty, the proposed method has a weak contribution. Isn't it possible to borrow a sophisticated model from the works on unsupervised domain adaptation in your experiments?
What expected in rebuttal:
(1) Please explain the applicability of a sophisticated model from the literature in unsupervised domain adaptation in your experiments. (As an extension of the related work section.)
(2) In Sec 3., they described, "the aux-outputs model has better risk since w is lower dimensional than x. In particular, the in-domain risk only depends on the dimension but not on the conditioning of the data." However, Aux-Inputs or even a baseline (linear) model can have a lower-dimensional hidden representation in a low-rank linear model (e.g., two-layer perceptron, f(x) := W_2 W_1 x). So, the explanation is insufficient for the matter.
(3) The proof of Proposition 1 is incorrect. Where can we find Remark 10? There is a typo: a missing reference after "We use Theorem 1 in ?." In Eqns. 23 and 24, if 1+cd/n = 2, the left-most term in Eqn. 24 is <= 2 \sigma^2, but this does not guarantee Eqn. 24 if \sigma_u^2 < \sigma^2. To satisfy Eqn. 24, cd/n \sigma^2 < \sigma_u^2 should be true.
(4) In Lemma 8, the square is omitted inside of expectation in LHS of Eqn. 87. And, the dimension of R should be k x (k+m), not k x (k+T). I believe T is accidentally misplaced in this context.
Minor comments:
(5) Before Eqn. 26, a missing period right before "For the input model ..."
(6) y' and x' instead of y and x in Eqn. 32. |
ICLR | Title
SCELMo: Source Code Embeddings from Language Models
Abstract
Continuous embeddings of tokens in computer programs have been used to support a variety of software development tools, including readability, code search, and program repair. Contextual embeddings are common in natural language processing but have not been previously applied in software engineering. We introduce a new set of deep contextualized word representations for computer programs based on language models. We train a set of embeddings using the ELMo (embeddings from language models) framework of Peters et al (2018). We investigate whether these embeddings are effective when fine-tuned for the downstream task of bug detection. We show that even a low-dimensional embedding trained on a relatively small corpus of programs can improve a state-of-the-art machine learning system for bug detection.
1 INTRODUCTION
Learning rich representations for source code is an open problem that has the potential to enable software engineering and development tools. Some work on machine learning for source code has used hand engineered features (Long & Rinard, 2016, e.g.), but designing and implementing such features can be tedious and error-prone. For this reason, other work considers the task of learning a representation of source code from data (Allamanis et al., 2018a). Many models of source code are based on learned representations called embeddings, which transform words into a continuous vector space (Mikolov et al., 2013). Currently in software engineering (SE) researchers have used static embeddings (Harer et al., 2018; White et al., 2019; Pradel & Sen, 2018), which map a word to the same vector regardless of its context. However, recent work in natural language processing (NLP) has found that contextual embeddings can lead to better performance (Peters et al., 2018; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019). Contextualized embeddings assign a different vector to a word based on the context it is used. For NLP this has the advantage that it can model phenomena like polysemy. A natural question to ask is if these methods would also be beneficial for learning better SE representations.
In this paper, we introduce a new set of contextual embeddings for source code. Contextual embeddings have several potential modelling advantages that are specifically suited to modelling source code:
• Surrounding names contain important information about an identifier. For example, for a variable name, surrounding tokens might include functions that take that variable as an argument or assignments to the variable. These tokens provide indirect information about possible values the variable could take, and so should affect its representation. Even keywords can have very different meanings based on their context. For instance, a private function is not the same as a private variable or a private class (in the case of Java / C++).
• Contextual embeddings assign a different representation to a variable each time it is used in the program. By doing this, they can potentially capture how a variable’s value evolves through the program execution.
• Contextual embeddings enable the use of transfer learning. Pre-training a large neural language model and querying it for contextualized representations while simultaneously fine-tuning for the specific task is a very effective technique for supervised tasks for which there is a small amount of supervised data available. As a result only a small model needs to be fine-tuned atop the pre-trained model, without the need for task-specific architectures nor the need of training a large model for each task separately.
In this paper, we highlight the potential of contextual code embeddings for program repair. Automatically finding bugs in code is an important open problem in SE. Even simple bugs can be hard to spot and repair. A promising approach to this end is name-based bug detection, introduced by DeepBugs (Pradel & Sen, 2018). The current state-of-the-art in name-based bug detection relies on static representations from Word2Vec (Mikolov et al., 2013) to learn a classifier that distinguishes correct from incorrect code for a specific bug pattern. We introduce a new set of contextualized
embeddings for code and explore its usefulness on the task of name-based bug detection. Our method significantly outperforms DeepBugs as well as other static representations methods on both the DeepBugs dataset as well as a new previously unused test set of JavaScript projects. We release our implementation and representations as they could lead to improvements in a great variety of SE tasks.
2 RELATED WORK
Unsupervised static word embeddings have been extensively used to improve the accuracy of supervised tasks in NLP (Turian et al., 2010). Notable examples of such methods are Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). However, the above models learn only a single context-independent word representation. To overcome this problem some models (Wieting et al., 2016; Bojanowski et al., 2017) enhance the representations with subword information, which can also somewhat deal with out-of-vocabulary words. Another approach is to learn a different representation for every word sense (Neelakantan et al., 2014) but this requires knowing the set of word senses in advance. More recent methods overcome the above issues by learning contextualized embeddings. Melamud et al. (2016) encode the context surrounding a pivot word using a bidirectional LSTM. Peters et al. (2018) use a deep bidirectional LSTM, learning word embeddings as functions of its internal states, calling the method Embeddings using Language Models (ELMo). We discuss ELMo in detail in Section 3. Devlin et al. (2018) introduced bidirectional encoder representations from transformers (BERT). This method learns pre-trained contextual embeddings by jointly conditioning on left and right context via an attention mechanism.
Program repair is an important task in software engineering and programming languages. For a detailed review see Monperrus (2018); Gazzola et al. (2019). Many recent program repair methods are based on machine learning. Yin et al. (2018) learn to represent code edits using a gated graph neural network (GGNN) (Li et al., 2016). Allamanis et al. (2018b) learn to identify a particular class of bugs called variable misuse bugs, using a GGNN. Chen et al. (2019) introduce SequenceR which learns to transform buggy lines into fixed ones via machine translation. Our work is orthogonal to these approaches and can be used as input in other models.
Finally, our work is also related to code representation methods many of which have also been used in program repair. Harer et al. (2018) learn Word2Vec embeddings for C/C++ tokens to predict software vulnerabilities. White et al. (2019) learn Word2Vec embeddings for Java tokens and utilize them in program repair. Alon et al. (2019) learn code embeddings using abstract syntax tree paths. A more detailed overview can be found in (Allamanis et al., 2018a; Chen & Monperrus, 2019).
3 EMBEDDINGS FROM LANGUAGE MODELS (ELMO)
ELMo (Peters et al., 2018) computes word embeddings from the hidden states of a language model. Consequently, the embeddings of each token depend on its context in the input sequence, and even out-of-vocabulary (OOV) tokens have effective input representations. In this section, we briefly describe the ELMo embeddings.
The first step is that a neural language model is trained to maximize the likelihood of a training corpus. The architecture used by ELMo is a bidirectional LSTM with L layers and character convolutions in the input layer. Let the input be a sequence of tokens $(t_1, \ldots, t_N)$. For each token $t_k$, denote by $x_k^{LM}$ the input representation from the character convolution. This representation then passes through L layers of forward and backward LSTMs. Each layer $j \in \{1, \ldots, L\}$ of the forward LSTM computes a hidden state $\overrightarrow{h}_{k,j}^{LM}$, and likewise the hidden states of the backward LSTM are denoted by $\overleftarrow{h}_{k,j}^{LM}$. The parameters for the token representation and for the output softmax layer are tied for both directions, while different parameters are learned for each direction of the LSTMs.
After the language model has been trained, we can use it within another downstream task by combining the hidden states of the language model from each LSTM layer. This process is called ELMo. For each token $t_k$ of a sentence in the test set, the language model computes $2L + 1$ hidden states, one in each direction for each layer, and then the input layer. To make the following more compact, we can write these as $h_{k,0}^{LM} = x_k^{LM}$ for the input layer, and then $h_{k,j}^{LM} = [\overrightarrow{h}_{k,j}^{LM}, \overleftarrow{h}_{k,j}^{LM}]$ for all of the other layers. The set of these vectors is
$$R_k = \{h_{k,j}^{LM} \mid j = 0, \ldots, L\}. \tag{1}$$
To create the final representation that is fed to downstream tasks, ELMo collapses the set of representations into a single vector $E_k$ for token $t_k$. A simplistic approach is to only select the top layer, so that $E_k = h_{k,L}^{LM}$. A more general one, which we use in this work, is to combine the layers via fine-tuned task-specific weights $s = (s_1, \ldots, s_L)$ for every layer. Then we can compute the embedding for token k as
$$E_k = \gamma \sum_{j=0}^{L} s_j h_{k,j}^{LM}, \tag{2}$$
where $\gamma$ is an additional scalar parameter that scales the entire vector. In our experiments we did not perform fine-tuning and thus used equal weights $s_j = 1/(L + 1)$ for each layer and $\gamma = 1$. However, our implementation also supports all the aforementioned ways of collapsing the set of representations.
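As a small numerical illustration of Eq. (2), the per-token layer states can be collapsed with a weighted sum as follows; array names are ours, and in a fine-tuned setting the weights s would be learned rather than fixed to 1/(L+1).

```python
# Collapse the (L+1) per-layer states for token k into a single vector E_k (Eq. 2).
# `layer_states` holds the input-layer representation followed by the L biLSTM layers.
import numpy as np

def collapse_elmo_layers(layer_states, s=None, gamma=1.0):
    H = np.stack(layer_states)                    # shape: (L + 1, dim)
    if s is None:                                  # equal weights, as in our experiments
        s = np.full(len(H), 1.0 / len(H))
    s = np.asarray(s)
    return gamma * (s[:, None] * H).sum(axis=0)    # weighted sum over layers
```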
A potential drawback of the method is that it still utilizes a softmax output layer with a fixed vocabulary, which does not scale effectively, and it still predicts UNK for OOV tokens, which may have a negative effect on the representations.
4 SOURCE CODE ELMO
We describe Source Code ELMo (SCELMo), which trains ELMo on corpora of source code. However, we note that ELMo models in other domains are normally able to effectively utilize much larger representations. The code was tokenized using the esprima JavaScript tokenizer1. For training the ELMo model we used a corpus of 150,000 JavaScript files (Raychev et al., 2016) consisting of various open-source projects. This corpus has previously been used for several tasks (Raychev et al., 2016; Pradel & Sen, 2018; Bavishi et al., 2018). We applied the patch released by Allamanis et al. (2018a) to filter out code duplication, as this phenomenon was shown on this and other corpora to result in inflation of performance metrics. This resulted in 64,750 training files and 33,229 validation files. Since the validation set contains files from the same projects as the training set, the contained instances might be too similar, leading to unrealistic overestimates of performance. To address this we also created a test set of 500 random JavaScript projects sampled from the top 20,000 open-source JavaScript projects as of May 2019. The test corpus has not been utilized in previous work and is a better reflection of the performance of the learned bug detectors. Lastly, it is important to know what the performance of the method will be if we do not have access to training data from the projects on which we would like to find bugs. This is common in practice for many real-world scenarios. For training the ELMo model, we use an embedding size of 100 features for each of the forward and backward LSTMs, so that each layer sums up to 200 features.
5 CONTEXTUAL EMBEDDINGS FOR PROGRAM REPAIR
In this section, we describe how contextual embeddings can be incorporated within a recent machine learning-based bug detection system, the DeepBugs system of Pradel & Sen (2018). In the first part of this section, we give background about the DeepBugs system, and then we describe how we incorporate SCELMo within DeepBugs. DeepBugs treats the problem of finding a bug as a classification problem. The system considers a set of specific bug types, which are small mistakes that might be made in a program, such as swapping two arguments. For each bug type, DeepBugs trains a binary classifier that takes a program statement as input and predicts whether the statement contains that type of bug. At test time, this classifier can be run for every statement in the program to attempt to detect bugs.
In order to train the model, both examples of correct and incorrect (buggy) code are necessary. DeepBugs treats the existing code as correct and randomly mutates it to obtain buggy code. To obtain training examples, we extract from the source code all function calls with exactly two arguments and all binary expressions. To create instances of buggy code we mutate each of the correct instances. As such, arguments in function calls are swapped, the binary operator in binary expressions is replaced with another random one, and finally, randomly, either the left or the right operand is replaced by another random binary operand that appears in the same file. The classification task is then a binary task: predict whether the instance is correct, i.e., it comes from the original code, or whether it is buggy, i.e., it was one of the randomly mutated examples. The validation and test sets are mutated in the same way as the training set. The split between correct and buggy instances has a 50/50 class distribution, as for each original code instance exactly one mutated buggy counterpart is created.
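The three bug-inducing mutations can be sketched schematically as follows. Real instances are AST nodes; the tuples below are simplified stand-ins that only convey the mutation logic, and the operator list is an illustrative subset.

```python
# Bug-inducing mutations applied to correct instances to create negative examples.
import random

BINARY_OPERATORS = ['+', '-', '*', '/', '==', '!=', '<', '<=', '>', '>=', '&&', '||']

def swap_arguments(call):                        # call = (callee, arg1, arg2)
    callee, arg1, arg2 = call
    return (callee, arg2, arg1)

def wrong_binary_operator(expr):                 # expr = (left, op, right)
    left, op, right = expr
    new_op = random.choice([o for o in BINARY_OPERATORS if o != op])
    return (left, new_op, right)

def wrong_binary_operand(expr, operands_in_file):
    left, op, right = expr
    replacement = random.choice(operands_in_file)
    # Randomly replace either the left or the right operand.
    return (replacement, op, right) if random.random() < 0.5 else (left, op, replacement)
```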
The architecture for the classifier is a feedforward network with a single hidden layer of 200 dimensions with ReLU activations and a sigmoid output layer. A dropout of 0.2 is applied to both the input and hidden layers. The network was trained in all experiments for 10 epochs with a batch size of 50 and the RMSProp optimizer. We note that, for maintaining a consistent comparison with DeepBugs, we kept all the above parameters as well as the optimizer's parameters fixed to the values reported in Pradel & Sen (2018). Tuning these parameters would probably result in at least a small performance increase for our method.
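For reference, the classifier just described corresponds roughly to the following Keras model. This is our reconstruction from the reported hyperparameters rather than the released DeepBugs code, and the input dimension in the usage comment is only an example.

```python
# Bug-detection classifier: 200-unit ReLU hidden layer, dropout 0.2 on the input
# and hidden layers, sigmoid output, trained with RMSProp for 10 epochs, batch size 50.
from tensorflow.keras import layers, models, optimizers

def build_detector(input_dim):
    model = models.Sequential([
        layers.Dropout(0.2, input_shape=(input_dim,)),
        layers.Dense(200, activation='relu'),
        layers.Dropout(0.2),
        layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer=optimizers.RMSprop(), loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# Example: three concatenated 200-dimensional identifier embeddings as input.
# model = build_detector(input_dim=3 * 200)
# model.fit(X_train, y_train, epochs=10, batch_size=50)
```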
1https://esprima.org/
In our experiments, we consider three bug types that address a set of common programming mistakes: swapped arguments of function calls, using the wrong binary operator and using an incorrect binary operand in a binary expression. The methodology can easily be applied to other bug types. Figure 1 illustrates an example of each of the three bug types.
5.1 INPUT TO THE CLASSIFIER
A key question is how a statement from the source code is converted into a feature vector that can be used within the classifier. DeepBugs uses a set of heuristics that, given a statement and a bug type, return a sequence of identifiers from the statement that are most likely to be relevant. For instance, for the call to setTimeout in Listing 1 the following sequence of identifiers would be extracted: [setTimeout, delay, function]. A detailed description of the heuristics is available in Appendix A.
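The full rule set of the heuristic is listed in Appendix A; a condensed sketch over ESTree-style AST dictionaries (the format produced by JavaScript parsers such as acorn) is shown below. Only a subset of the rules is covered, and the dictionary-based node access is our own simplification.

```python
# Condensed sketch of the identifier-extraction heuristic name(n) for common node types.
# Field names (type, name, value, argument, object, property, computed, callee,
# left, right) follow the ESTree AST convention.
import random

def extract_name(node):
    if node is None:
        return None
    t = node.get('type')
    if t == 'Identifier':
        return node['name']
    if t == 'Literal':
        return str(node['value'])
    if t == 'ThisExpression':
        return 'this'
    if t in ('UpdateExpression', 'UnaryExpression'):
        return extract_name(node['argument'])
    if t == 'MemberExpression':
        # a.p -> name(p); computed access a[p] -> name(a)
        return extract_name(node['property'] if not node['computed'] else node['object'])
    if t == 'CallExpression':
        return extract_name(node['callee'])
    if t in ('BinaryExpression', 'LogicalExpression', 'AssignmentExpression'):
        left, right = extract_name(node['left']), extract_name(node['right'])
        if left is None:
            return right
        if right is None:
            return left
        return random.choice([left, right])
    if t == 'FunctionExpression':
        return 'function'
    if t == 'ObjectExpression':
        return '{'
    return None
```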
These heuristics result in a sequence of program identifiers. These are converted to continuous vectors using word embeddings, concatenated, and this is the input to the classifier. DeepBugs uses Word2Vec embeddings trained on a corpus of code. In our experiments, we train classifiers using three different types of word embeddings. First, we kept the 10,000 most frequent identifiers/literals and assigned to each of them a random embedding of 200 features. Second, to reproduce the results of Pradel & Sen (2018), we use the CBOW variant of Word2Vec to learn representations consisting of 200 features for the 10,000 most frequent identifiers/literals. Finally, we train FastText embeddings (Bojanowski et al., 2017) on the training set to learn identifier embeddings that contain subword information. The subwords used by FastText are all the character trigrams that appear in the training corpus. Identifiers are therefore composed of multiple subwords. To represent an identifier, we sum the embeddings of each of its subwords. This allows the identifier embeddings to contain information about the structure and morphology of identifiers. This also allows the FastText embeddings, unlike the Word2Vec ones, to represent OOV words as a combination of character trigrams.
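The identifier representation just described, a sum over character-trigram embeddings, can be sketched as follows; trigram_vectors stands for the trigram lookup table learned by FastText, and the handling of word-boundary markers is omitted.

```python
# Compose an identifier embedding by summing its character-trigram embeddings.
# OOV identifiers remain representable as long as some of their trigrams were seen.
import numpy as np

def identifier_embedding(identifier, trigram_vectors, dim=200):
    trigrams = [identifier[i:i + 3] for i in range(len(identifier) - 2)]
    vecs = [trigram_vectors[t] for t in trigrams if t in trigram_vectors]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)
```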
Note that DeepBugs can detect bugs only in statements that do not contain OOV (out-of-vocabulary) identifiers, because its Word2Vec embeddings cannot extract features for OOV names. Instead our implementation does not skip such instances. Since the original work discarded any instances that contain OOV identifiers we neither know how the method performs on such instances nor how often those appear in the utilized dataset of DeepBugs. Moreover, DeepBugs supported only a specific subset of AST nodes and skipped the rest. For example if a call’s argument is a complex expression consisting of other expressions then the call would be skipped. However, we expanded the implementation to support all kinds of AST nodes and to not skip instances with nested expressions as discussed in Appendix A. We note that we still skip an instance if one of its main parts (e.g., a function call’s argument) is a complex expression longer than 1,000 characters as such expressions might be overly long to reason about.
5.2 CONNECTING SCELMO TO THE BUG DETECTOR
We investigated two variants of the bug detection model, which query SCELMo in different ways to get features for the classifier. The first utilizes the heuristic of Appendix A to extract a small set of identifiers or literals that represent the code piece. For example, for an incorrect binary operand instance we extract one identifier or literal for the left and right operands respectively, and we also extract its binary operator. Then, those are concatenated to form a query to the network. In the case of function calls we extract the identifier corresponding to the name of the called function, one identifier or literal for the first and second argument respectively, and an identifier for the expression on which the function is called. We also add the appropriate syntax tokens (a '.' if necessary, ',' between the two arguments, and left and right parentheses) to create a query that resembles a function call. This baseline approach creates simplistic fixed-size queries for the network but does not utilize its full potential, since the queries do not necessarily resemble actual code, nor correct code similar to the sequences in the training set for the embeddings. We will refer to this baseline as No-Context ELMo.
In our proposed method, we feed to the language model all the tokens of the instances for which we need representations and compute SCELMo embeddings for them. Valid instances are function calls that contain exactly two arguments and binary expressions. To create a fixed-size representation we extract only the features corresponding to a fixed set of tokens. Specifically, for function calls we use the representations corresponding to the first token of the expression on which the function is called, the function name, the first token of the first argument, and the first token of the second argument. For binary expressions we use those of the first token of the left operand, the binary operator, and the first token of the right operand. Since the representations contain contextual information, the returned vectors can capture information about the rest of the tokens in the code sequence.
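Concretely, the fixed-size classifier input is assembled by concatenating the contextual vectors of the anchor tokens listed above. In the sketch below, token_vectors and the index arguments are our own names for the per-token SCELMo outputs and the anchor positions.

```python
# Build fixed-size feature vectors for an instance from per-token SCELMo embeddings.
import numpy as np

def call_features(token_vectors, base_idx, callee_idx, arg1_idx, arg2_idx):
    # First token of the base expression, the function name, and the first token
    # of each of the two arguments.
    return np.concatenate([token_vectors[i] for i in (base_idx, callee_idx, arg1_idx, arg2_idx)])

def binary_expr_features(token_vectors, left_idx, op_idx, right_idx):
    # First token of the left operand, the operator, first token of the right operand.
    return np.concatenate([token_vectors[i] for i in (left_idx, op_idx, right_idx)])
```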
6 RESULTS
We next discuss the experiments we performed and their corresponding results. We measured the performance of the three baselines as well as those of No-Context ELMo and SCELMo. Measuring the performance of No-Context ELMo allows us to evaluate how much improvement is due to specifics of the language model architecture, such as the character convolutional layer which can handle OOVs, and how much is due to the contextual information itself.
6.1 PERFORMANCE ON VALIDATION SET
In our first experiment we evaluate the performance of the methods in tasks where training data from the same projects is available. The evaluation performed in this experiment gives a good estimate of how our method performs compared to the previous state-of-the-art technique of DeepBugs. One main difference, however, is that the evaluation now also includes instances which contain OOV identifiers. As a consequence, the bug detection tasks are harder than those presented by Pradel & Sen (2018), as their evaluation does not include, in either the training or validation set, any instance for which an extracted identifier is OOV. Table 1 illustrates the performance of the baselines and our models. As one would expect, the FastText baseline improves over Word2Vec for all bug types due to the subword information. Moreover, our model SCELMo massively outperforms all other methods. Lastly, even No-Context ELMo, the heuristic variant of SCELMo that does not utilize contextual information at test time, outperforms the baseline methods, showcasing how powerful the pre-trained representations are.
6.2 INCLUDING COMPLEX EXPRESSIONS
In our next experiment we also included instances that contain elements that are complex or nested expressions. For instance, in the original work, if one of the arguments of a function call or one of the operands of a binary expression is an expression consisting of other expressions, then the instance would not be included in the dataset. Several AST node types, such as a NewExpression node or an ObjectExpression, were not supported. Figure 2 shows a few examples of instances that would previously have been skipped2. Such instances were skipped by Pradel & Sen (2018) and not included in their results. We do note, though, that we still skip very long expressions that contain more than 1000 tokens.
Similarly to the previous experiment SCELMo significantly outperforms all other models. This is evident in Table 2. Lastly, we clarify that the results of this section should not be directly compared to those of the previous one as for this experiment the training set is also larger.
6.3 EXTERNAL TEST EVALUATION
The last experiment's objective is to showcase how the various models would perform on unseen projects, as this better illustrates the generalizability of the techniques. The configuration utilized is identical to that of the previous section. By looking at Table 3 one can notice that the baselines have a major drop in performance. This is a common finding in machine learning models of code, namely, that applying a trained model to a new software project is much more difficult than to a new file in the same project. In contrast, SCELMo offers up to 15% improvement in accuracy compared to the Word2Vec baseline. In fact, impressively enough, SCELMo's accuracy on the external test set is higher than that of the baselines on the validation set.
6.4 OOV STATISTICS
In order to better understand the above results we measured the OOV rate of the basic elements of the code instances appearing in the dataset. Here the OOV rate is calculated based on the vocabulary of 10,000 entries utilized by the Word2Vec and random baseline models. These rates are illustrated in Tables 4 and 5. We measured the OOV rates for both the version of the dataset used in Section 6.1, which we call Train and Validation, and that used in Section 6.2, which we call Extended Train and Extended Validation.
Tables 4 and 5 describe the OOV rates for different parts of the expression types that are considered by the DeepBugs bug detector. A detailed description of the identifiers extraction heuristic can be found in Appendix A. We first focus
2The AST is extracted using the acorn parser https://github.com/acornjs/acorn
on the swapped arguments bug pattern and consider all of the method calls that have exactly two arguments. Each method call contains the function name, a name of the first argument, a name of the second argument, and a base object. The base object is the identifier that would be extracted from the expression (if such an expression exists) on which the function is called. For instance, from the following expression: window.navigator.userAgent.indexOf("Chrome"), userAgent would be extracted as the base object. Table 4 shows for each of the components how often they are OOV. In the expanded version of the dataset, if one of the arguments is a complex expression, then it is converted into a name based on the heuristic described in Appendix A. The resulting statistics contain valuable information as, for instance, it is almost impossible for the Word2Vec baseline to reason about a swap arguments bug if the identifiers extracted for both arguments are OOV.
In a similar manner for the incorrect operand and operator bug patterns we consider all the binary operations. Each binary expression consists of a left and right operand and a name is extracted for each of them. For each operand we also measured the frequency with which the operand corresponds to certain common types such as identifier, literal or a ThisExpression.
7 IS NEURAL BUG-FINDING USEFUL IN PRACTICE?
Although related work (Pradel & Sen, 2018; Allamanis et al., 2018b; Vasic et al., 2019) has shown that there is great potential for embedding-based neural bug finders, the evaluation has mostly focused on synthetic bugs introduced by mutating the original code. However, there is no strong indication that the synthetic bugs correlate to real ones, apart from a small study of the top 50 warnings for each bug type produced by DeepBugs. A good example is the mutation operation utilized for the incorrect binary operator bug. A lot of the introduced bug instances could result in syntactic errors. This can potentially create a classifier with a high bias towards correlating buggy code with syntactically incorrect code, thus hindering the model's ability to generalize to real bugs. Ideally, in an industrial environment we would like the resulting models to achieve a false positive rate of less than 10% (Sadowski et al., 2015). Sadly, high true positive rates are not to be expected either, since static bug detectors were shown to be able to detect less than 5% of the bugs
(Habib & Pradel, 2018) contained in the Defects4J corpus (Just et al., 2014) and less than 12% in a single-statement bugs corpus (Karampatsis & Sutton, 2019). We note that in the second case the static analysis tool is given credit for reporting any warning on the buggy line, so the actual percentage might be lower than the reported one.
We next make a first step towards investigating the practical usefulness of our methods by applying the classifiers of the previous section to a small corpus of real JavaScript bugs. However, we think that this is a very hard yet interesting problem that should be carefully examined in future work. In order to mine a corpus of real bug changes we used the methodology described in Karampatsis & Sutton (2019). We note that we adapted their implementation to utilize the Rhino JavaScript parser3. Their methodology extracts bug-fixing commits and filters them to only keep those that contain small single-statement changes. Finally, it classifies each pair of modified statements by whether they fit a set of mutation patterns. The resulting dataset is shown in Table 6. Upon acceptance of the paper we will release this dataset along with our implementation, the rest of the data used, and the learned representations.
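The mining pipeline can be summarized as follows; the helper functions are hypothetical placeholders for the commit filtering, diff analysis, and pattern matching steps of the methodology of Karampatsis & Sutton (2019).

```python
# Sketch of the real-bug mining pipeline: keep bug-fixing commits that change a
# single statement, then check whether the (before, after) pair matches one of
# the mutation patterns (swapped arguments, wrong operator, wrong operand).
def mine_real_bugs(commits, is_bug_fix, single_statement_change, matches_pattern):
    corpus = []
    for commit in commits:
        if not is_bug_fix(commit.message):
            continue
        change = single_statement_change(commit.diff)   # None if not a one-statement change
        if change is None:
            continue
        before, after = change
        pattern = matches_pattern(before, after)
        if pattern is not None:
            corpus.append((before, after, pattern))
    return corpus
```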
Finally, we queried the DeepBugs and SCELMo models with each buggy instance as well as its fixed variant and measured the percentage of correctly classified instances for each of the two categories. We also ignored any instances for which the JavaScript parser utilized by both failed to extract an AST. We classified as bugs any instances that were assigned a bug probability greater than 75%. In an actual system this threshold should ideally be tuned on a validation set.
Table 7 suggests that there might indeed be some potential for future practical applications of neural bug-finding techniques. Both models are able to uncover some of the bugs. However, the results also suggest that careful tuning of the prediction threshold might be necessary, especially if we take into account the industrial need to comply with a low false positive rate (FPR). For instance, raising SCELMo's prediction threshold to 80% for the swap arguments bug results in finding only 3.34% of the bugs but correctly classifying 100% of the repaired function calls, thus achieving a 0.0% false positive rate. Moreover, since SCELMo could not uncover any of the real binary operator bugs, future work could investigate the effect of utilizing different mutation strategies for the purpose of artificial bug induction. Future work could also investigate if fine-tuning on a small set of real bugs could result in more robust classifiers.
8 CONCLUSION
We have presented SCELMo, which is to our knowledge the first language-model based contextual embeddings for source code. Contextual embeddings have many potential advantages for source code, because surrounding tokens can indirectly provide information about tokens, e.g. about likely values of variables. We highlight the utility of SCELMo embeddings by using them within a recent state-of-the-art machine learning based bug detector. The SCELMo embeddings yield a dramatic improvement in the synthetic bug detection performance benchmark, especially on lines of code that contain out-of-vocabulary tokens and complex expressions that can cause difficulty for the method. We also showed and discussed the performance of the resulting bug detectors on a dataset of real bugs raising useful insights for future work.
3https://github.com/mozilla/rhino
A NAME EXTRACTION HEURISTIC
In order for DeepBugs to operate it is necessary to extract identifiers or literals for each expression part of the statement. The bug detector for swapped arguments utilizes the following elements of the function call:
Base Object: The expression on which the function is called. Callee: The called function. Argument 1: The expression constituting the first argument of the called function. Argument 2: The expression constituting the second argument of the called function.
Similarly the bug detectors for incorrect binary operators and operands utilize the following elements of the binary expression:
Binary Operator: The binary operator utilized in the expression. Left Operand: The left operand of the binary expression. Right Operand: The right operand of the binary expression.
We next describe the extraction heuristic, which is shared by all the bug detectors. The heuristic takes as input a node n representing an expression and returns name(n) based on the following rules:
• Identifier: return its name. • Literal: return its value. • this expression: return this. • Update expression with argument x: return name(x). • Member expression accessing a property p: return name(p). • Member expression accessing a property base[p]: return name(base). • Call expression base.callee(...): return name(callee). • Property node n: If n.key does not exist, return name(n.value). If name(n.key) does not exist, return name(n.value). Otherwise randomly return either name(n.value) or name(n.key).
• Binary expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist return name(r). If name(r) does not exist return name(l). Otherwise randomly return either name(l) or name(r).
• Logical expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist return name(r). If name(r) does not exist return name(l). Otherwise randomly return either name(l) or name(r).
• Assignment expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist return name(r). If name(r) does not exist return name(l). Otherwise randomly return either name(l) or name(r).
• Unary expression with argument u: Return name(u). • Array expression with elements li: Among all li for which name(li) exists, randomly choose one of them and return name(li).
• Conditional expression with operands c, l, and r: Randomly choose one out of c, l, r for which a name exists and return its name.
• Function expression: return function. • Object expression: return {. • New expression with a constructor function call c: return name(c).
All random decisions follow a uniform distribution. | 1. What is the main contribution of the paper regarding ELMO embeddings in bug detection?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the significance and impact of the improved precision in the first step of the DeepBugs tasks?
4. What are the limitations of the paper regarding its focus on the first classification problem and neglecting the second step of anomaly reporting?
5. How can the authors improve their work to make it more practical and useful for real-world applications? | Review | Review
The paper proposes to use ELMO embeddings to improve the precision on the first step of the DeepBugs tasks defined by Pradel and Sen (2018). This first step is an artificial problem created by taking real programs (with and without bugs, but assuming almost all of them are correct) and introducing bugs of certain type into the programs. Then, a classifier needs to distinguish between the real and the artificial set. This classifier is then to be used as a checker for anomalies in code and the anomalies are reported as bugs, however the paper skips this second step and only reports results on the first classification problem.
Technically, the paper improves this first step of DeepBugs by using a standard variant of ELMo. The evaluation is detailed, but the results are unsurprising. The paper simply tech-transfers the idea from NLP to Code. If this work is accepted at the conference, I cannot imagine an interesting presentation or a poster that simply cites the changed numbers. Did we expect ELMo to be worse than more naive or random embeddings?
The work and its results hinge heavily on DeepBugs; they increase the precision of its first step by a significant margin but do not show any more useful results. In fact, on one task (Wrong Binary Operator), SCELMo gets to 100% accuracy. This means it will never report any bugs, whereas DeepBugs seems to perform best on exactly this kind of report with its weaker model.
I would recommend the authors to either work on showing the practical usefulness of the technique, showing something for the full bug-finding task (not merely the first, artificial part), or to investigate whether (or how) the idea of adding bug-introducing changes to a code dataset is conceptually flawed for bug finding (as this idea is widely used by several other works like Allamanis et al. (2018b) or https://arxiv.org/abs/1904.01720, which also don't get to practical tools). There seems to be some indication of this by the reported 100% accuracy, but right now this remains completely uninvestigated.
Minor issues:
Listing 3: Opernad -> Operand
Page 5. There is no Table 6.1 |
ICLR | Title
SCELMo: Source Code Embeddings from Language Models
Abstract
Continuous embeddings of tokens in computer programs have been used to support a variety of software development tools, including readability, code search, and program repair. Contextual embeddings are common in natural language processing but have not been previously applied in software engineering. We introduce a new set of deep contextualized word representations for computer programs based on language models. We train a set of embeddings using the ELMo (embeddings from language models) framework of Peters et al (2018). We investigate whether these embeddings are effective when fine-tuned for the downstream task of bug detection. We show that even a low-dimensional embedding trained on a relatively small corpus of programs can improve a state-of-the-art machine learning system for bug detection.
1 INTRODUCTION
Learning rich representations for source code is an open problem that has the potential to enable software engineering and development tools. Some work on machine learning for source code has used hand engineered features (Long & Rinard, 2016, e.g.), but designing and implementing such features can be tedious and error-prone. For this reason, other work considers the task of learning a representation of source code from data (Allamanis et al., 2018a). Many models of source code are based on learned representations called embeddings, which transform words into a continuous vector space (Mikolov et al., 2013). Currently in software engineering (SE) researchers have used static embeddings (Harer et al., 2018; White et al., 2019; Pradel & Sen, 2018), which map a word to the same vector regardless of its context. However, recent work in natural language processing (NLP) has found that contextual embeddings can lead to better performance (Peters et al., 2018; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019). Contextualized embeddings assign a different vector to a word based on the context it is used. For NLP this has the advantage that it can model phenomena like polysemy. A natural question to ask is if these methods would also be beneficial for learning better SE representations.
In this paper, we introduce a new set of contextual embeddings for source code. Contextual embeddings have several potential modelling advantages that are specifically suited to modelling source code:
• Surrounding names contain important information about an identifier. For example, for a variable name, surrounding tokens might include functions that take that variable as an argument or assignments to the variable. These tokens provide indirect information about possible values the variable could take, and so should affect its representation. Even keywords can have very different meanings based on their context. For instance, a private function is not the same as a private variable or a private class (in the case of Java / C++).
• Contextual embeddings assign a different representation to a variable each time it is used in the program. By doing this, they can potentially capture how a variable’s value evolves through the program execution.
• Contextual embeddings enable the use of transfer learning. Pre-training a large neural language model and querying it for contextualized representations while simultaneously fine-tuning for the specific task is a very effective technique for supervised tasks for which there is a small amount of supervised data available. As a result only a small model needs to be fine-tuned atop the pre-trained model, without the need for task-specific architectures nor the need of training a large model for each task separately.
In this paper, we highlight the potential of contextual code embeddings for program repair. Automatically finding bugs in code is an important open problem in SE. Even simple bugs can be hard to spot and repair. A promising approach to this end is name-based bug detection, introduced by DeepBugs (Pradel & Sen, 2018). The current state-of-the-art in name-based bug detection relies on static representations from Word2Vec (Mikolov et al., 2013) to learn a classifier that distinguishes correct from incorrect code for a specific bug pattern. We introduce a new set of contextualized
embeddings for code and explore its usefulness on the task of name-based bug detection. Our method significantly outperforms DeepBugs as well as other static representations methods on both the DeepBugs dataset as well as a new previously unused test set of JavaScript projects. We release our implementation and representations as they could lead to improvements in a great variety of SE tasks.
2 RELATED WORK
Unsupervised static word embeddings have been extensively used to improve the accuracy of supervised tasks in NLP (Turian et al., 2010). Notable examples of such methods are Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). However, the above models learn only a single context-independent word representation. To overcome this problem some models (Wieting et al., 2016; Bojanowski et al., 2017) enhance the representations with subword information, which can also somewhat deal with out-of-vocabulary words. Another approach is to learn a different representation for every word sense (Neelakantan et al., 2014) but this requires knowing the set of word senses in advance. More recent methods overcome the above issues by learning contextualized embeddings. Melamud et al. (2016) encode the context surrounding a pivot word using a bidirectional LSTM. Peters et al. (2018) use a deep bidirectional LSTM, learning word embeddings as functions of its internal states, calling the method Embeddings using Language Models (ELMo). We discuss ELMo in detail in Section 3. Devlin et al. (2018) introduced bidirectional encoder representations from transformers (BERT). This method learns pre-trained contextual embeddings by jointly conditioning on left and right context via an attention mechanism.
Program repair is an important task in software engineering and programming languages. For a detailed review see Monperrus (2018); Gazzola et al. (2019). Many recent program repair methods are based on machine learning. Yin et al. (2018) learn to represent code edits using a gated graph neural network (GGNN) (Li et al., 2016). Allamanis et al. (2018b) learn to identify a particular class of bugs called variable misuse bugs, using a GGNN. Chen et al. (2019) introduce SequenceR which learns to transform buggy lines into fixed ones via machine translation. Our work is orthogonal to these approaches and can be used as input in other models.
Finally, our work is also related to code representation methods many of which have also been used in program repair. Harer et al. (2018) learn Word2Vec embeddings for C/C++ tokens to predict software vulnerabilities. White et al. (2019) learn Word2Vec embeddings for Java tokens and utilize them in program repair. Alon et al. (2019) learn code embeddings using abstract syntax tree paths. A more detailed overview can be found in (Allamanis et al., 2018a; Chen & Monperrus, 2019).
3 EMBEDDINGS FROM LANGUAGE MODELS (ELMO)
ELMo (Peters et al., 2018) computes word embeddings from the hidden states of a language model. Consequently, the embeddings of each token depend on its context in the input sequence, and even out-of-vocabulary (OOV) tokens have effective input representations. In this section, we briefly describe the ELMo embeddings.
The first step is that a neural language model is trained to maximize the likelihood of a training corpus. The architecture used by ELMo is a bidirectional LSTM with L layers and character convolutions in the input layer. Let the input be a sequence of tokens (t_1, ..., t_N). For each token t_k, denote by x^{LM}_k the input representation from the character convolution. Consequently, this representation passes through L layers of forward and backward LSTMs. Then each layer j ∈ {1, ..., L} of the forward LSTM computes a hidden state \overrightarrow{h}^{LM}_{k,j}, and likewise the hidden states of the backward LSTM are denoted by \overleftarrow{h}^{LM}_{k,j}. The parameters for the token representation and for the output softmax layer are tied for both directions, while different parameters are learned for each direction of the LSTMs.
After the language model has been trained, we can use it within another downstream task by combining the hidden states of the language model from each LSTM layer. This process is called ELMo. For each token t_k of a sentence in the test set, the language model computes 2L + 1 hidden states, one in each direction for each layer, and then the input layer. To make the following more compact, we can write these as h^{LM}_{k,0} = x^{LM}_k for the input layer, and then h^{LM}_{k,j} = [\overrightarrow{h}^{LM}_{k,j}, \overleftarrow{h}^{LM}_{k,j}] for all of the other layers. The set of these vectors is
R_k = \{ h^{LM}_{k,j} \mid j = 0, \ldots, L \}. (1)
To create the final representation that is fed to downstream tasks, ELMo collapses the set of representations into a single vector E_k for token t_k. A simplistic approach is to only select the top layer, so that E_k = h^{LM}_{k,L}. A more general one, which we use in this work, is to combine the layers via fine-tuned task-specific weights s = (s_1, ..., s_L) for every
layer. Then we can compute the embedding for token k as
E_k = \gamma \sum_{j=0}^{L} s_j h^{LM}_{k,j}, (2)
where γ is an additional scalar parameter that scales the entire vector. In our experiments we did not perform fine-tuning and thus used equal weights s_j = 1/(L + 1) for each layer and γ = 1. However, our implementation also supports all the aforementioned ways of collapsing the set of representations.
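As a concrete illustration of Equation (2), the sketch below collapses the per-layer hidden states of a single token with equal weights and γ = 1, matching the setting described above; the array shapes are illustrative assumptions.

```python
import numpy as np

def collapse_elmo_layers(hidden_states, weights=None, gamma=1.0):
    """hidden_states: array of shape (L + 1, dim) holding the input-layer
    representation and the concatenated forward/backward states of each layer
    for a single token.  Returns the single vector E_k of Equation (2)."""
    num_layers = hidden_states.shape[0]
    if weights is None:
        # Equal weights s_j = 1 / (L + 1), as in the non-fine-tuned setting.
        weights = np.full(num_layers, 1.0 / num_layers)
    return gamma * (weights[:, None] * hidden_states).sum(axis=0)

# Example: 2 LSTM layers plus the input layer, 200 features each.
states = np.random.randn(3, 200)
e_k = collapse_elmo_layers(states)   # shape (200,)
```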
A potential drawback of the method is that it still utilizes a softmax output layer with a fixed vocabulary that does not scale effectively and it still predicts UNK for OOV tokens which may have a negative effect on the representations.
4 SOURCE CODE ELMO
We describe Source Code ELMo (SCELMo), which trains ELMo on corpora of source code. We note, however, that ELMo models in other domains are normally able to effectively utilize much larger representations. The code was tokenized using the esprima JavaScript tokenizer1. For training the ELMo model we used a corpus of 150,000 JavaScript files (Raychev et al., 2016) consisting of various open-source projects. This corpus has previously been used on several tasks (Raychev et al., 2016; Pradel & Sen, 2018; Bavishi et al., 2018). We applied the patch released by Allamanis et al. (2018a) to filter out code duplication, as this phenomenon was shown on this and other corpora to result in inflation of performance metrics. This resulted in 64,750 training files and 33,229 validation files. Since the validation set contains files from the same projects as the training set, its instances might be too similar to the training data, yielding unrealistically optimistic estimates. To address this we also created a test set of 500 random JavaScript projects sampled from the top 20,000 open-source JavaScript projects as of May 2019. The test corpus has not been utilized in previous work and is a better reflection of the performance of the learned bug detectors. Lastly, it is important to know what the performance of the method will be if we do not have access to training data from the projects on which we would like to find bugs, which is common in many real-world scenarios. For training the ELMo model, we use an embedding size of 100 features for each of the forward and backward LSTMs so that each layer sums up to 200 features.
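The corpus preparation relies on the duplication filter released by Allamanis et al. (2018a); the snippet below is only a simplified, hypothetical stand-in that drops exact (whitespace-normalized) duplicates by content hash, to illustrate the kind of filtering involved.

```python
import hashlib
from pathlib import Path

def deduplicate_corpus(files):
    """Keep only files whose whitespace-normalized contents are unique.
    The actual corpus was filtered with the tool of Allamanis et al. (2018a);
    this content-hash filter is a simplified illustration, not that tool."""
    seen, kept = set(), []
    for path in files:
        text = Path(path).read_text(errors="ignore")
        digest = hashlib.sha256(" ".join(text.split()).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(path)
    return kept
```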
5 CONTEXTUAL EMBEDDINGS FOR PROGRAM REPAIR
In this section, we describe how contextual embeddings can be incorporated within a recent machine learning-based bug detection system, the DeepBugs system of Pradel & Sen (2018). In the first part of this section, we give background about the DeepBugs system, and then we describe how we incorporate SCELMo within DeepBugs. DeepBugs treats the problem of finding a bug as a classification problem. The system considers a set of specific bug types, which are small mistakes that might be made in a program, such as swapping two arguments. For each bug type, DeepBugs trains a binary classifier that takes a program statement as input and predicts whether the statement contains that type of bug. At test time, this classifier can be run for every statement in the program to attempt to detect bugs.
In order to train the model, both examples of correct and incorrect (buggy) code are necessary. DeepBugs treats the existing code as correct and randomly mutates it to obtain buggy code. To obtain training examples, we extract all expressions from the source code which are either function calls with exactly two arguments or binary expressions. To create instances of buggy code we mutate each of the correct instances. As such, arguments in function calls are swapped, the binary operator in binary expressions is replaced with another random one, and finally randomly either the left or the right operand is replaced by another random binary operand that appears in the same file. Then the classification task is a binary task to predict whether the instance is correct, i.e., it comes from the original code, or whether it is buggy, i.e., it was one of the randomly mutated examples. The validation and test sets are mutated in the same way as the training set. The split between correct and buggy instances has a 50/50 class distribution, as for each original code instance exactly one mutated buggy counterpart is created.
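The sketch below illustrates the three mutation operations on instances represented as plain dictionaries of extracted names; this representation is an assumption made for illustration, not the authors' code.

```python
import random

BINARY_OPERATORS = ["+", "-", "*", "/", "==", "===", "!=", "<", ">", "&&", "||"]

def mutate_swapped_args(call):
    """call: dict of extracted names, e.g. {'callee': ..., 'arg1': ..., 'arg2': ...}."""
    buggy = dict(call)
    buggy["arg1"], buggy["arg2"] = call["arg2"], call["arg1"]
    return buggy

def mutate_wrong_operator(binop):
    """Replace the operator with a different randomly chosen binary operator."""
    buggy = dict(binop)
    buggy["operator"] = random.choice(
        [op for op in BINARY_OPERATORS if op != binop["operator"]])
    return buggy

def mutate_wrong_operand(binop, operands_in_file):
    """Replace either the left or the right operand with a random operand
    that appears elsewhere in the same file."""
    buggy = dict(binop)
    side = random.choice(["left", "right"])
    buggy[side] = random.choice(operands_in_file)
    return buggy
```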
The architecture for the classifier is a feedforward network with a single hidden layer of 200 dimensions with ReLU activations and a sigmoid output layer. For both the input and hidden layers a dropout of 0.2 is applied. The network was trained in all experiments for 10 epochs with a batch size of 50 and the RMSProp optimizer. We note that for maintaining a consistent comparison with DeepBugs we kept all the above parameters as well as the optimizer’s parameters fixed to the values reported in Pradel & Sen (2018). Tuning these parameters would probably result in at least a small performance increase for our method.
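A minimal sketch of this classifier in Keras is shown below, assuming the concatenated embedding vector has already been computed; it mirrors the hyperparameters stated above (200-unit ReLU hidden layer, 0.2 dropout on input and hidden layers, sigmoid output, RMSProp) but is not the authors' exact implementation.

```python
import tensorflow as tf

def build_bug_detector():
    """Feedforward bug detector: dropout on the input, one 200-unit ReLU hidden
    layer with dropout, and a sigmoid output.  The input is the concatenated
    embedding vector of an instance; its dimensionality is inferred from the
    data on the first call to fit()."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(200, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="rmsprop", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Training as described above: model.fit(x, y, epochs=10, batch_size=50)
```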
1https://esprima.org/
In our experiments, we consider three bug types that address a set of common programming mistakes: swapped arguments of function calls, using the wrong binary operator and using an incorrect binary operand in a binary expression. The methodology can easily be applied to other bug types. Figure 1 illustrates an example of each of the three bug types.
5.1 INPUT TO THE CLASSIFIER
A key question is how a statement from the source code is converted into a feature vector that can be used within the classifier. DeepBugs uses a set of heuristics that, given a statement and a bug type, return a sequence of identifiers from the statement that are most likely to be relevant. For instance, for the call to setTimeout in Listing 1 the following sequence of identifiers would be extracted: [setTimeout, delay, function]. A detailed description of the heuristics is available in Appendix A.
These heuristics result in a sequence of program identifiers. These are converted to continuous vectors using word embeddings, concatenated, and this is the input to the classifier. DeepBugs uses Word2Vec embeddings trained on a corpus of code. In our experiments, we train classifiers using three different types of word embeddings. First, we kept the 10,000 most frequent identifiers/literals and assigned to each of them a random embedding of 200 features. Second, to reproduce the results of Pradel & Sen (2018), we use the CBOW variant of Word2Vec to learn representations consisting of 200 features for the 10,000 most frequent identifiers/literals. Finally, we train FastText embeddings (Bojanowski et al., 2017) on the training set to learn identifier embeddings that contain subword information. The subwords used by FastText are all the character trigrams that appear in the training corpus. Identifiers are therefore composed of multiple subwords. To represent an identifier, we sum the embeddings of each of its subwords. This allows the identifier embeddings to contain information about the structure and morphology of identifiers. This also allows the FastText embeddings, unlike the Word2Vec ones, to represent OOV words as a combination of character trigrams.
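The following sketch illustrates the subword idea: an identifier is represented as the sum of its character-trigram vectors, so unseen identifiers still receive a representation. The padding convention and the dictionary lookup are illustrative assumptions, not the exact FastText internals.

```python
import numpy as np

def char_trigrams(identifier):
    # Pad with boundary markers so word boundaries also form trigrams.
    padded = f"<{identifier}>"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def identifier_vector(identifier, trigram_vectors, dim=200):
    """Sum the vectors of an identifier's character trigrams; trigrams unseen
    during training are simply skipped, so OOV identifiers still get a vector."""
    vec = np.zeros(dim)
    for gram in char_trigrams(identifier):
        if gram in trigram_vectors:
            vec += trigram_vectors[gram]
    return vec
```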
Note that DeepBugs can detect bugs only in statements that do not contain OOV (out-of-vocabulary) identifiers, because its Word2Vec embeddings cannot extract features for OOV names. Instead our implementation does not skip such instances. Since the original work discarded any instances that contain OOV identifiers we neither know how the method performs on such instances nor how often those appear in the utilized dataset of DeepBugs. Moreover, DeepBugs supported only a specific subset of AST nodes and skipped the rest. For example if a call’s argument is a complex expression consisting of other expressions then the call would be skipped. However, we expanded the implementation to support all kinds of AST nodes and to not skip instances with nested expressions as discussed in Appendix A. We note that we still skip an instance if one of its main parts (e.g., a function call’s argument) is a complex expression longer than 1,000 characters as such expressions might be overly long to reason about.
5.2 CONNECTING SCELMO TO THE BUG DETECTOR
We investigated two variants of the bug detection model, which query SCELMo in different ways to get features for the classifier. The first utilizes the heuristic of Appendix A to extract a small set of identifiers or literals that represent the code piece. For example, for an incorrect binary operand instance we extract one identifier or literal for the left and right operands respectively, and we also extract its binary operator. Then, those are concatenated to form a query to the network. In the case of function calls we extract the identifier corresponding to the name of the called function, one identifier or literal for the first and second argument respectively, and an identifier for the expression on which the function is called. We also add the appropriate syntax tokens (a ’.’ if necessary, ’,’ between the two arguments, and left and right parentheses) to create a query that resembles a function call. This baseline approach creates simplistic fixed-size queries for the network but does not utilize its full potential, since the queries do not necessarily resemble actual code, nor correct code similar to the sequences in the training set for the embeddings. We will refer to this baseline as No-Context ELMo.
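The query construction for No-Context ELMo can be sketched as follows; the exact token layout is an assumption based on the description above.

```python
def call_query(base, callee, arg1, arg2):
    """Assemble a token sequence resembling a function call from the extracted
    names, e.g. ('window', 'setTimeout', 'delay', 'function') ->
    ['window', '.', 'setTimeout', '(', 'delay', ',', 'function', ')']."""
    tokens = []
    if base is not None:
        tokens += [base, "."]
    tokens += [callee, "(", arg1, ",", arg2, ")"]
    return tokens

def binary_query(left, operator, right):
    """Assemble a token sequence resembling a binary expression."""
    return [left, operator, right]
```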
In our proposed method, to compute SCELMo embeddings we feed to the language model all the tokens of the instances for which we need representations. Valid instances are function calls that contain exactly two arguments and binary expressions. To create a fixed-size representation we extract only the features corresponding to a fixed set of tokens. Specifically, for function calls we use the representations corresponding to the first token of the expression on which the function is called, the function name, the first token of the first argument, and the first token of the second argument. For binary expressions, we use those of the first token of the left operand, the binary operator, and the first token of the right operand. Since the representations contain contextual information, the returned vectors can capture information about the rest of the tokens in the code sequence.
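A minimal sketch of this fixed-position feature extraction is given below, assuming the language model returns one contextual vector per token of the instance; the chosen positions in the example are illustrative.

```python
import numpy as np

def instance_features(token_embeddings, positions):
    """token_embeddings: array of shape (num_tokens, dim) returned by the
    language model for one code sequence.  positions: indices of the tokens
    whose contextual vectors represent the instance (e.g. first token of the
    base object, the callee, and the first token of each argument for a call).
    Returns the concatenated feature vector fed to the classifier."""
    return np.concatenate([token_embeddings[p] for p in positions])

# Example: a call instance represented by four token positions, 200-d embeddings.
embeddings = np.random.randn(12, 200)
features = instance_features(embeddings, positions=[0, 2, 4, 7])  # shape (800,)
```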
6 RESULTS
We next discuss the experiments we performed and their corresponding results. We measured the performance of the three baselines as well as those of non-contextual ELMo and SCELMo. Measuring the performance of non-contextual ELMo allows us to evaluate how much improvement is due to specifics of the language model architecture, such as the character convolutional layer which can handle OOVs, and how much is due to the contextual information itself.
6.1 PERFORMANCE ON VALIDATION SET
In our first experiment we evaluate the performance of the methods in tasks where training data from the same projects are available. The evaluation performed in this experiment gives a good estimation of how our method performs compared to the previous state-of-the-art technique of DeepBugs. One main difference, however, is that the evaluation now also includes instances which contain OOV identifiers. As a consequence the bug detection tasks are harder than those presented by Pradel & Sen (2018), as their evaluation does not include, in either the training or the validation set, any instance for which an extracted identifier is OOV. Table 1 illustrates the performance of the baselines and our models. As one would expect, the FastText baseline improves over Word2Vec for all bug types due to the subword information. Moreover, our model SCELMo massively outperforms all other methods. Lastly, even No-Context ELMo, the heuristic version of SCELMo that does not utilize contextual information at test time, outperforms the baseline methods, showcasing how powerful the pretrained representations are.
6.2 INCLUDING COMPLEX EXPRESSIONS
In our next experiment we also included instances that contain elements that are complex or nested expressions. For instance, in the original work if one of the arguments of a function call or one of the operands of a binary expression is an expression consisting of other expressions then the instance would not be included in the dataset. Several AST node
types such as a NewExpression node or an ObjectExpression were not supported. Figure 2 shows a few examples of instances that would previously be skipped2. Such instances were skipped by Pradel & Sen (2018) and not included in their results. We do note though that we still skip very long expressions that contain more than 1000 tokens.
Similarly to the previous experiment SCELMo significantly outperforms all other models. This is evident in Table 2. Lastly, we clarify that the results of this section should not be directly compared to those of the previous one as for this experiment the training set is also larger.
6.3 EXTERNAL TEST EVALUATION
The last experiment’s objective is to showcase how the various models would perform on unseen projects as this better illustrates the generalizability of the techniques. The configuration utilized is identical to that of the previous section. By looking at Table 3 one can notice that the baselines have a major drop in performance. This is a common finding in machine learning models of code, namely, that applying a trained model to a new software project is much more difficult than to a new file in the same project. In contrast, SCELMo offers up to 15% improvement in accuracy compared to the Word2Vec baseline. In fact, impressively enough, SCELMo's accuracy on the external test set is higher than the validation-set accuracy of one of the baselines.
6.4 OOV STATISTICS
In order to better understand the above results we measured the OOV rate of the basic elements of the code instances appearing in the dataset. Here the OOV rate is calculated based on the vocabulary of 10,000 entries utilized by the Word2Vec and random baseline models. These are illustrated in Tables 4 and 5. We measured the OOV rates for both the version of the dataset used in Section 6.1, which we call Train and Validation, and that used in Section 6.2, which we call Extended Train and Extended Validation.
Tables 4 and 5 describe the OOV rates for different parts of the expression types that are considered by the DeepBugs bug detector. A detailed description of the identifiers extraction heuristic can be found in Appendix A. We first focus
2The AST is extracted using the acorn parser https://github.com/acornjs/acorn
on the swapped arguments bug pattern and consider all of the method calls that have exactly two arguments. Each method call contains the function name, a name of the first argument, a name of the second argument, and a base object. The base object is the identifier that would be extracted from the expression (if such an expression exists) on which the function is called. For instance, from the following expression: window.navigator.userAgent.indexOf("Chrome"), userAgent would be extracted as the base object. Table 4 shows for each of the components how often they are OOV. In the expanded version of the dataset, if one of the arguments is a complex expression then it is converted into a name based on the heuristic described in Appendix A. The resulting statistics contain valuable information: for instance, it is almost impossible for the Word2Vec baseline to reason about a swapped-arguments bug if the identifiers extracted for both arguments are OOV.
In a similar manner for the incorrect operand and operator bug patterns we consider all the binary operations. Each binary expression consists of a left and right operand and a name is extracted for each of them. For each operand we also measured the frequency with which the operand corresponds to certain common types such as identifier, literal or a ThisExpression.
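The OOV rates reported here can be computed with a simple sketch such as the following, where the vocabulary is taken to be the 10,000 most frequent identifiers/literals of the training set.

```python
from collections import Counter

def build_vocabulary(training_names, size=10000):
    """Keep the `size` most frequent identifiers/literals of the training set."""
    return {name for name, _ in Counter(training_names).most_common(size)}

def oov_rate(names, vocabulary):
    """Fraction of extracted names that fall outside the fixed vocabulary."""
    total = len(names)
    oov = sum(1 for n in names if n not in vocabulary)
    return oov / total if total else 0.0
```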
7 IS NEURAL BUG-FINDING USEFUL IN PRACTICE?
Although related work (Pradel & Sen, 2018; Allamanis et al., 2018b; Vasic et al., 2019) has shown that there is great potential for embedding-based neural bug finders, the evaluation has mostly focused on synthetic bugs introduced by mutating the original code. However, there is no strong indication that the synthetic bugs correlate with real ones, apart from a small study of the top 50 warnings for each bug type produced by DeepBugs. A good example is the mutation operation utilized for the incorrect binary operator bug. A lot of the introduced bug instances could result in syntactic errors. This can potentially create a classifier with a high bias towards correlating buggy code with syntactically incorrect code, thus hindering the model’s ability to generalize to real bugs. Ideally, in an industrial environment we would like the resulting models to achieve a false positive rate of less than 10% (Sadowski et al., 2015). Sadly, high true positive rates are not to be expected either, since static bug detectors were shown to be able to detect less than 5% of bugs
(Habib & Pradel, 2018) contained in the Defects4J corpus (Just et al., 2014) and less than 12% in a single-statement bugs corpus (Karampatsis & Sutton, 2019). We note that in the second case the static analysis tool is given credit for reporting any warning for the buggy line, so the actual percentage might be lower than the reported one.
We next make a first step towards investigating the practical usefulness of our methods by applying the classifiers of the previous section on a small corpus of real JavaScript bugs. However, we think that this is a very hard yet interesting problem that should be carefully examined in future work. In order to mine a corpus of real bug changes we used the methodology described in Karampatsis & Sutton (2019). We note that we adapted their implementation to utilize the Rhino JavaScript parser3. Their methodology extracts bug-fixing commits and filters them to only keep those that contain small single-statement changes. Finally, it classifies each pair of modified statements by whether they fit a set of mutation patterns. The resulting dataset is shown in Table 6. Upon acceptance of the paper we will release this dataset along with our implementation, the rest of the data used, and the learned representations.
Finally, we queried DeepBugs and SCELMo with each buggy instance as well as its fixed variant and measured the percentage of correctly classified instances for each of the two categories. We also ignored any instances for which the JavaScript parser utilized by both failed to extract an AST. We classified as bugs any instances that were assigned a probability of being a bug > 75%. In an actual system this threshold should ideally be tuned on a validation set.
Table 7 suggests that there might indeed be some potential for future practical applications of neural bug finding techniques. Both are able to uncover some of the bugs. However, the results also suggest that careful tuning of the predictions threshold might be necessary, especially if we take into account the industrial need to comply with a low false positive rate (FPR). For instance, raising SCELMo’s prediction threshold to 80% for the swap arguments bug results in finding only 3.34% of the bugs but correctly classifying 100% of the repaired function calls, thus achieving 0.0% false positive rate. Moreover, since SCELMo could not uncover any of the real binary operator bugs, future work could investigate the effect of utilizing different mutation strategies for the purpose of artificial bug-induction. Future work could also investigate if fine-tuning on small set of real bugs could result in more robust classifiers.
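A sketch of this pairwise evaluation is shown below, assuming a Keras-style classifier and numpy arrays of feature vectors for the buggy and repaired instances; the 0.75 threshold matches the setting above and the names are illustrative.

```python
def evaluate_pairs(model, buggy_x, fixed_x, threshold=0.75):
    """Report the fraction of real bugs flagged (recall on buggy_x) and the
    fraction of repaired statements wrongly flagged (false positive rate on
    fixed_x) for a given prediction threshold."""
    buggy_scores = model.predict(buggy_x).ravel()
    fixed_scores = model.predict(fixed_x).ravel()
    recall = float((buggy_scores > threshold).mean())
    fpr = float((fixed_scores > threshold).mean())
    return recall, fpr
```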
8 CONCLUSION
We have presented SCELMo, which is to our knowledge the first language-model based contextual embeddings for source code. Contextual embeddings have many potential advantages for source code, because surrounding tokens can indirectly provide information about tokens, e.g. about likely values of variables. We highlight the utility of SCELMo embeddings by using them within a recent state-of-the-art machine learning based bug detector. The SCELMo embeddings yield a dramatic improvement in the synthetic bug detection performance benchmark, especially on lines of code that contain out-of-vocabulary tokens and complex expressions that can cause difficulty for the method. We also showed and discussed the performance of the resulting bug detectors on a dataset of real bugs raising useful insights for future work.
3https://github.com/mozilla/rhino
A NAME EXTRACTION HEURISTIC
In order for DeepBugs to operate it is necessary to extract identifiers or literals for each expression part of the statement. The bug detector for swapped arguments utilizes the following elements of the function call:
Base Object: The expression on which the function is called. Callee: The called function. Argument 1: The expression constituting the first argument of the called function. Argument 2: The expression constituting the second argument of the called function.
Similarly the bug detectors for incorrect binary operators and operands utilize the following elements of the binary expression:
Binary Operator: The binary operator utilized in the expression. Left Operand: The left operand of the binary expression. Right Operand: The right operand of the binary expression.
We next describe the extraction heuristic, which is shared by all the bug detectors. The heuristic takes as input a node n representing an expression and returns name(n) based on the following rules:
• Identifier: return its name.
• Literal: return its value.
• this expression: return this.
• Update expression with argument x: return name(x).
• Member expression accessing a property p: return name(p).
• Member expression accessing a property base[p]: return name(base).
• Call expression base.callee(...): return name(callee).
• Property node n: If n.key does not exist return name(n.value). If name(n.key) does not exist return name(n.value). Otherwise randomly return either name(n.value) or name(n.key).
• Binary expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist return name(r). If name(r) does not exist return name(l). Otherwise randomly return either name(l) or name(r).
• Logical expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist return name(r). If name(r) does not exist return name(l). Otherwise randomly return either name(l) or name(r).
• Assignment expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist return name(r). If name(r) does not exist return name(l). Otherwise, randomly return either name(l) or name(r).
• Unary expression with argument u: Return name(u).
• Array expression with elements li: For all li for which name(li) exists, randomly choose one of them and return name(li).
• Conditional expression with operands c, l, and r: Randomly choose one out of c, l, r for which a name exists and return its name.
• Function expression: return function.
• Object expression: return {.
• New expression with a constructor function call c: return name(c).
All random decisions follow a uniform distribution. | 1. What is the focus of the paper regarding source code embedding?
2. What are the strengths of the proposed approach, particularly in leveraging ELMo?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
This paper leverage recent advances of ELMo in context embedding and apply it in the source code embedding. With the help of ELMo, source embedding can take the three benefits: (1) Surrounding names provide indirect information about possible values the variable could take; (2) an variable’s value evolves through the program execution can be captured; (3) open a gate for the reuse of the ptr-trained model. To evaluate the effectiveness of the proposed approach, authors conduct experiments on the downstream task of the bug detection.
Pros:
1. This work studies an interesting problem, which is challenging to solve.
2. The application and combination of different techniques in this paper are smart.
3. The experiment results show better performance of the contextual embedding based method compared with non-contextual embedding based methods.
Cons:
1. It is a good application of known techniques, but the novelty is limited.
2. It is suggested to evaluate the effectiveness of the proposed approach on various source code analysis tasks such as variable misuse.
3. It is suggested to compare with other state-of-the-art baseline methods, e.g. BERT.
4. At the end of the introduction section, the authors claim that "we release our implementation and representation...". However, the implementation, representations, and dataset are missing.
ICLR | Title
SCELMo: Source Code Embeddings from Language Models
Abstract
Continuous embeddings of tokens in computer programs have been used to support a variety of software development tools, including readability, code search, and program repair. Contextual embeddings are common in natural language processing but have not been previously applied in software engineering. We introduce a new set of deep contextualized word representations for computer programs based on language models. We train a set of embeddings using the ELMo (embeddings from language models) framework of Peters et al (2018). We investigate whether these embeddings are effective when fine-tuned for the downstream task of bug detection. We show that even a low-dimensional embedding trained on a relatively small corpus of programs can improve a state-of-the-art machine learning system for bug detection.
1 INTRODUCTION
Learning rich representations for source code is an open problem that has the potential to enable software engineering and development tools. Some work on machine learning for source code has used hand engineered features (Long & Rinard, 2016, e.g.), but designing and implementing such features can be tedious and error-prone. For this reason, other work considers the task of learning a representation of source code from data (Allamanis et al., 2018a). Many models of source code are based on learned representations called embeddings, which transform words into a continuous vector space (Mikolov et al., 2013). Currently in software engineering (SE) researchers have used static embeddings (Harer et al., 2018; White et al., 2019; Pradel & Sen, 2018), which map a word to the same vector regardless of its context. However, recent work in natural language processing (NLP) has found that contextual embeddings can lead to better performance (Peters et al., 2018; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019). Contextualized embeddings assign a different vector to a word based on the context it is used. For NLP this has the advantage that it can model phenomena like polysemy. A natural question to ask is if these methods would also be beneficial for learning better SE representations.
In this paper, we introduce a new set of contextual embeddings for source code. Contextual embeddings have several potential modelling advantages that are specifically suited to modelling source code:
• Surrounding names contain important information about an identifier. For example, for a variable name, surrounding tokens might include functions that take that variable as an argument or assignments to the variable. These tokens provide indirect information about possible values the variable could take, and so should affect its representation. Even keywords can have very different meanings based on their context. For instance, a private function is not the same as a private variable or a private class (in the case of Java / C++).
• Contextual embeddings assign a different representation to a variable each time it is used in the program. By doing this, they can potentially capture how a variable’s value evolves through the program execution.
• Contextual embeddings enable the use of transfer learning. Pre-training a large neural language model and querying it for contextualized representations while simultaneously fine-tuning for the specific task is a very effective technique for supervised tasks for which there is a small amount of supervised data available. As a result only a small model needs to be fine-tuned atop the pre-trained model, without the need for task-specific architectures nor the need of training a large model for each task separately.
In this paper, we highlight the potential of contextual code embeddings for program repair. Automatically finding bugs in code is an important open problem in SE. Even simple bugs can be hard to spot and repair. A promising approach to this end is name-based bug detection, introduced by DeepBugs (Pradel & Sen, 2018). The current state-of-the-art in name-based bug detection relies on static representations from Word2Vec (Mikolov et al., 2013) to learn a classifier that distinguishes correct from incorrect code for a specific bug pattern. We introduce a new set of contextualized
embeddings for code and explore its usefulness on the task of name-based bug detection. Our method significantly outperforms DeepBugs as well as other static representations methods on both the DeepBugs dataset as well as a new previously unused test set of JavaScript projects. We release our implementation and representations as they could lead to improvements in a great variety of SE tasks.
2 RELATED WORK
Unsupervised static word embeddings have been extensively used to improve the accuracy of supervised tasks in NLP (Turian et al., 2010). Notable examples of such methods are Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). However, the above models learn only a single context-independent word representation. To overcome this problem some models (Wieting et al., 2016; Bojanowski et al., 2017) enhance the representations with subword information, which can also somewhat deal with out-of-vocabulary words. Another approach is to learn a different representation for every word sense (Neelakantan et al., 2014) but this requires knowing the set of word senses in advance. More recent methods overcome the above issues by learning contextualized embeddings. Melamud et al. (2016) encode the context surrounding a pivot word using a bidirectional LSTM. Peters et al. (2018) use a deep bidirectional LSTM, learning word embeddings as functions of its internal states, calling the method Embeddings using Language Models (ELMo). We discuss ELMo in detail in Section 3. Devlin et al. (2018) introduced bidirectional encoder representations from transformers (BERT). This method learns pre-trained contextual embeddings by jointly conditioning on left and right context via an attention mechanism.
Program repair is an important task in software engineering and programming languages. For a detailed review see Monperrus (2018); Gazzola et al. (2019). Many recent program repair methods are based on machine learning. Yin et al. (2018) learn to represent code edits using a gated graph neural network (GGNN) (Li et al., 2016). Allamanis et al. (2018b) learn to identify a particular class of bugs called variable misuse bugs, using a GGNN. Chen et al. (2019) introduce SequenceR which learns to transform buggy lines into fixed ones via machine translation. Our work is orthogonal to these approaches and can be used as input in other models.
Finally, our work is also related to code representation methods many of which have also been used in program repair. Harer et al. (2018) learn Word2Vec embeddings for C/C++ tokens to predict software vulnerabilities. White et al. (2019) learn Word2Vec embeddings for Java tokens and utilize them in program repair. Alon et al. (2019) learn code embeddings using abstract syntax tree paths. A more detailed overview can be found in (Allamanis et al., 2018a; Chen & Monperrus, 2019).
3 EMBEDDINGS FROM LANGUAGE MODELS (ELMO)
ELMo (Peters et al., 2018) computes word embeddings from the hidden states of a language model. Consequently, the embeddings of each token depend on its context in the input sequence, and even out-of-vocabulary (OOV) tokens have effective input representations. In this section, we briefly describe the ELMo embeddings.
The first step is that a neural language model is trained to maximize the likelihood of a training corpus. The architecture used by ELMo is a bidirectional LSTM with L layers and character convolutions in the input layer. Let the input be a sequence of tokens (t_1, ..., t_N). For each token t_k, denote by x^{LM}_k the input representation from the character convolution. Consequently, this representation passes through L layers of forward and backward LSTMs. Then each layer j ∈ {1, ..., L} of the forward LSTM computes a hidden state \overrightarrow{h}^{LM}_{k,j}, and likewise the hidden states of the backward LSTM are denoted by \overleftarrow{h}^{LM}_{k,j}. The parameters for the token representation and for the output softmax layer are tied for both directions, while different parameters are learned for each direction of the LSTMs.
After the language model has been trained, we can use it within another downstream task by combining the hidden states of the language model from each LSTM layer. This process is called ELMo. For each token t_k of a sentence in the test set, the language model computes 2L + 1 hidden states, one in each direction for each layer, and then the input layer. To make the following more compact, we can write these as h^{LM}_{k,0} = x^{LM}_k for the input layer, and then h^{LM}_{k,j} = [\overrightarrow{h}^{LM}_{k,j}, \overleftarrow{h}^{LM}_{k,j}] for all of the other layers. The set of these vectors is
R_k = \{ h^{LM}_{k,j} \mid j = 0, \ldots, L \}. (1)
To create the final representation that is fed to downstream tasks, ELMo collapses the set of representations into a single vector E_k for token t_k. A simplistic approach is to only select the top layer, so that E_k = h^{LM}_{k,L}. A more general one, which we use in this work, is to combine the layers via fine-tuned task-specific weights s = (s_1, ..., s_L) for every
layer. Then we can compute the embedding for token k as
E_k = \gamma \sum_{j=0}^{L} s_j h^{LM}_{k,j}, (2)
where γ is an additional scalar parameter that scales the entire vector. In our experiments we did not perform fine-tuning and thus used equal weights s_j = 1/(L + 1) for each layer and γ = 1. However, our implementation also supports all the aforementioned ways of collapsing the set of representations.
A potential drawback of the method is that it still utilizes a softmax output layer with a fixed vocabulary that does not scale effectively and it still predicts UNK for OOV tokens which may have a negative effect on the representations.
4 SOURCE CODE ELMO
We describe Source Code ELMo (SCELMo), which trains ELMo on corpora of source code. We note, however, that ELMo models in other domains are normally able to effectively utilize much larger representations. The code was tokenized using the esprima JavaScript tokenizer1. For training the ELMo model we used a corpus of 150,000 JavaScript files (Raychev et al., 2016) consisting of various open-source projects. This corpus has previously been used on several tasks (Raychev et al., 2016; Pradel & Sen, 2018; Bavishi et al., 2018). We applied the patch released by Allamanis et al. (2018a) to filter out code duplication, as this phenomenon was shown on this and other corpora to result in inflation of performance metrics. This resulted in 64,750 training files and 33,229 validation files. Since the validation set contains files from the same projects as the training set, its instances might be too similar to the training data, yielding unrealistically optimistic estimates. To address this we also created a test set of 500 random JavaScript projects sampled from the top 20,000 open-source JavaScript projects as of May 2019. The test corpus has not been utilized in previous work and is a better reflection of the performance of the learned bug detectors. Lastly, it is important to know what the performance of the method will be if we do not have access to training data from the projects on which we would like to find bugs, which is common in many real-world scenarios. For training the ELMo model, we use an embedding size of 100 features for each of the forward and backward LSTMs so that each layer sums up to 200 features.
5 CONTEXTUAL EMBEDDINGS FOR PROGRAM REPAIR
In this section, we describe how contextual embeddings can be incorporated within a recent machine learning-based bug detection system, the DeepBugs system of Pradel & Sen (2018). In the first part of this section, we give background about the DeepBugs system, and then we describe how we incorporate SCELMo within DeepBugs. DeepBugs treats the problem of finding a bug as a classification problem. The system considers a set of specific bug types, which are small mistakes that might be made in a program, such as swapping two arguments. For each bug type, DeepBugs trains a binary classifier that takes a program statement as input and predicts whether the statement contains that type of bug. At test time, this classifier can be run for every statement in the program to attempt to detect bugs.
In order to train the model, both examples of correct and incorrect (buggy) code are necessary. DeepBugs treats the existing code as correct and randomly mutates it to obtain buggy code. To obtain training examples, we extract all expressions from the source code which are either function calls with exactly two arguments or binary expressions. To create instances of buggy code we mutate each of the correct instances. As such, arguments in function calls are swapped, the binary operator in binary expressions is replaced with another random one, and finally randomly either the left or the right operand is replaced by another random binary operand that appears in the same file. Then the classification task is a binary task to predict whether the instance is correct, i.e., it comes from the original code, or whether it is buggy, i.e., it was one of the randomly mutated examples. The validation and test sets are mutated in the same way as the training set. The split between correct and buggy instances has a 50/50 class distribution, as for each original code instance exactly one mutated buggy counterpart is created.
The architecture for the classifier is a feedforward network with a single hidden layer of 200 dimensions with ReLU activations and a sigmoid output layer. For both the input and hidden layers a dropout of 0.2 is applied. The network was trained in all experiments for 10 epochs with a batch size of 50 and the RMSProp optimizer. We note that for maintaining a consistent comparison with DeepBugs we kept all the above parameters as well as the optimizer’s parameters fixed to the values reported in Pradel & Sen (2018). Tuning these parameters would probably result in at least a small performance increase for our method.
1https://esprima.org/
In our experiments, we consider three bug types that address a set of common programming mistakes: swapped arguments of function calls, using the wrong binary operator and using an incorrect binary operand in a binary expression. The methodology can easily be applied to other bug types. Figure 1 illustrates an example of each of the three bug types.
5.1 INPUT TO THE CLASSIFIER
A key question is how a statement from the source code is converted into a feature vector that can be used within the classifier. DeepBugs uses a set of heuristics that, given a statement and a bug type, return a sequence of identifiers from the statement that are most likely to be relevant. For instance, for the call to setTimeout in Listing 1 the following sequence of identifiers would be extracted: [setTimeout, delay, function]. A detailed description of the heuristics is available in Appendix A.
These heuristics result in a sequence of program identifiers. These are converted to continuous vectors using word embeddings, concatenated, and this is the input to the classifier. DeepBugs uses Word2Vec embeddings trained on a corpus of code. In our experiments, we train classifiers using three different types of word embeddings. First, we kept the 10,000 most frequent identifiers/literals and assigned to each of them a random embedding of 200 features. Second, to reproduce the results of Pradel & Sen (2018), we use the CBOW variant of Word2Vec to learn representations consisting of 200 features for the 10,000 most frequent identifiers/literals. Finally, we train FastText embeddings (Bojanowski et al., 2017) on the training set to learn identifier embeddings that contain subword information. The subwords used by FastText are all the character trigrams that appear in the training corpus. Identifiers are therefore composed of multiple subwords. To represent an identifier, we sum the embeddings of each of its subwords. This allows the identifier embeddings to contain information about the structure and morphology of identifiers. This also allows the FastText embeddings, unlike the Word2Vec ones, to represent OOV words as a combination of character trigrams.
Note that DeepBugs can detect bugs only in statements that do not contain OOV (out-of-vocabulary) identifiers, because its Word2Vec embeddings cannot extract features for OOV names. Instead our implementation does not skip such instances. Since the original work discarded any instances that contain OOV identifiers we neither know how the method performs on such instances nor how often those appear in the utilized dataset of DeepBugs. Moreover, DeepBugs supported only a specific subset of AST nodes and skipped the rest. For example if a call’s argument is a complex expression consisting of other expressions then the call would be skipped. However, we expanded the implementation to support all kinds of AST nodes and to not skip instances with nested expressions as discussed in Appendix A. We note that we still skip an instance if one of its main parts (e.g., a function call’s argument) is a complex expression longer than 1,000 characters as such expressions might be overly long to reason about.
5.2 CONNECTING SCELMO TO THE BUG DETECTOR
We investigated two variants of the bug detection model, which query SCELMo in different ways to get features for the classifier. The first utilizes the heuristic of Appendix A to extract a small set of identifiers or literals that represent the code piece. For example, for an incorrect binary operand instance we extract one identifier or literal for the left and right operands respectively, and we also extract its binary operator. Then, those are concatenated to form a query to the network. In the case of function calls we extract the identifier corresponding to the name of the called function, one identifier or literal for the first and second argument respectively, and an identifier for the expression on which the function is called. We also add the appropriate syntax tokens (a ’.’ if necessary, ’,’ between the two arguments, and left and right parentheses) to create a query that resembles a function call. This baseline approach creates simplistic fixed-size queries for the network but does not utilize its full potential, since the queries do not necessarily resemble actual code, nor correct code similar to the sequences in the training set for the embeddings. We will refer to this baseline as No-Context ELMo.
In our proposed method, to compute SCELMo embeddings we feed to the language model all the tokens of the instances for which we need representations. Valid instances are function calls that contain exactly two arguments and binary expressions. To create a fixed-size representation we extract only the features corresponding to a fixed set of tokens. Specifically, for function calls we use the representations corresponding to the first token of the expression on which the function is called, the function name, the first token of the first argument, and the first token of the second argument. For binary expressions, we use those of the first token of the left operand, the binary operator, and the first token of the right operand. Since the representations contain contextual information, the returned vectors can capture information about the rest of the tokens in the code sequence.
6 RESULTS
We next discuss the experiments we performed and their corresponding results. We measured the performance of the three baselines as well as those of non-contextual ELMo and SCELMo. Measuring the performance of non-contextual ELMo allows us to evaluate how much improvement is due to specifics of the language model architecture, such as the character convolutional layer which can handle OOVs, and how much is due to the contextual information itself.
6.1 PERFORMANCE ON VALIDATION SET
In our first experiment we evaluate the performance of the methods in tasks where training data from the same projects are available. The evaluation performed in this experiment gives a good estimation of how our method performs compared to the previous state-of-the-art technique of DeepBugs. One main difference, however, is that the evaluation now also includes instances which contain OOV identifiers. As a consequence the bug detection tasks are harder than those presented by Pradel & Sen (2018), as their evaluation does not include, in either the training or the validation set, any instance for which an extracted identifier is OOV. Table 1 illustrates the performance of the baselines and our models. As one would expect, the FastText baseline improves over Word2Vec for all bug types due to the subword information. Moreover, our model SCELMo massively outperforms all other methods. Lastly, even No-Context ELMo, the heuristic version of SCELMo that does not utilize contextual information at test time, outperforms the baseline methods, showcasing how powerful the pretrained representations are.
6.2 INCLUDING COMPLEX EXPRESSIONS
In our next experiment we also included instances that contain elements that are complex or nested expressions. For instance, in the original work if one of the arguments of a function call or one of the operands of a binary expression is an expression consisting of other expressions then the instance would not be included in the dataset. Several AST node
types such as a NewExpression node or an ObjectExpression were not supported. Figure 2 shows a few examples of instances that would previously be skipped2. Such instances were skipped by Pradel & Sen (2018) and not included in their results. We do note though that we still skip very long expressions that contain more than 1000 tokens.
Similarly to the previous experiment, SCELMo significantly outperforms all other models. This is evident in Table 2. Lastly, we clarify that the results of this section should not be directly compared to those of the previous one, as for this experiment the training set is also larger.
6.3 EXTERNAL TEST EVALUATION
The last experiment’s objective is to showcase how the various models would perform on unseen projects, as this better illustrates the generalizability of the techniques. The configuration utilized is identical to that of the previous section. By looking at Table 3 one can notice that the baselines have a major drop in performance. This is a common finding in machine learning models of code, namely, that applying a trained model to a new software project is much more difficult than to a new file in the same project. In contrast, SCELMo offers up to 15% improvement in accuracy compared to the Word2Vec baseline. In fact, impressively enough, SCELMo's performance on the external test set is better than the validation-set performance of the baselines.
6.4 OOV STATISTICS
In order to better understand the above results we measured the OOV rate of the basic elements of the code instances appearing in the dataset. Here the OOV rate is calculated based on the vocabulary of 10000 entries utilized by the Word2Vec and random baseline models. These are illustrated in Tables 4 and 5. We measured the OOV rates for both the version of the dataset used in Section 6.1, which we call Train and Validation, and that used in Section 6.2, which we call Extended Train and Extended Validation.
Tables 4 and 5 describe the OOV rates for different parts of the expression types that are considered by the DeepBugs bug detector. A detailed description of the identifier extraction heuristic can be found in Appendix A. We first focus
2The AST is extracted using the acorn parser https://github.com/acornjs/acorn
on the swapped arguments bug pattern and consider all of the method calls that have exactly two arguments. Each method call contains the function name, a name of the first argument, a name of the second argument, and a base object. The base object is the identifier that would be extracted from the expression (if such an expression exists) on which the function is called. For instance, from the following expression: window.navigator.userAgent.indexOf(”Chrome”), userAgent would be extracted as the base object. Table 4 shows for each of the components how often they are OOV. In the expanded version of the dataset, if one of the arguments is a complex expression then it is converted into a name based on the heuristic described in Appendix A. The resulting statistics contain valuable information: for instance, it is almost impossible for the Word2Vec baseline to reason about a swapped arguments bug if the identifiers extracted for both arguments are OOV.
In a similar manner, for the incorrect operand and operator bug patterns we consider all the binary operations. Each binary expression consists of a left and a right operand, and a name is extracted for each of them. For each operand we also measured the frequency with which the operand corresponds to certain common types, such as an identifier, a literal, or a ThisExpression.
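A minimal sketch of how such OOV rates can be computed against a fixed vocabulary follows; the 10,000-entry cutoff matches the baselines' vocabulary, while the tiny name lists below are invented purely for illustration.

```python
from collections import Counter

def oov_rate(names, vocabulary):
    """Fraction of extracted names that fall outside a fixed vocabulary."""
    if not names:
        return 0.0
    return sum(1 for n in names if n not in vocabulary) / len(names)

# A real vocabulary would hold the 10,000 most frequent names from the
# training corpus; the lists below are tiny invented stand-ins.
train_names = ["i", "length", "userAgent", "indexOf", "i", "push", "value"]
vocab = {name for name, _ in Counter(train_names).most_common(10000)}

left_operands = ["i", "offset", "userAgent"]
right_operands = ["length", "0x7fffffff", "value"]
print("left operand OOV rate:", oov_rate(left_operands, vocab))
print("right operand OOV rate:", oov_rate(right_operands, vocab))
```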
7 IS NEURAL BUG-FINDING USEFUL IN PRACTICE?
Although related work (Pradel & Sen, 2018; Allamanis et al., 2018b; Vasic et al., 2019) has shown that there is great potential for embedding-based neural bug finders, the evaluation has mostly focused on synthetic bugs introduced by mutating the original code. However, there is no strong indication that the synthetic bugs correlate to real ones, apart from a small study of the top 50 warnings for each bug type produced by DeepBugs. A good example is the mutation operation utilized for the incorrect binary operator bug. A lot of the introduced bug instances could result in syntactic errors. This can potentially create a classifier with a high bias towards correlating buggy code to syntactically incorrect code, thus hindering the model’s ability to generalize to real bugs. Ideally, in an industrial environment we would like the resulting models to achieve a false positive rate of less than 10% (Sadowski et al., 2015). Sadly, high true positive rates are not to be expected either, since static bug detectors were shown to be able to detect less than 5% of bugs
(Habib & Pradel, 2018) contained in the Defects4J corpus (Just et al., 2014) and less than 12% in a single-statement bugs corpus (Karampatsis & Sutton, 2019). We note that in the second case the static analysis tool is given credit for reporting any warning on the buggy line, so the actual percentage might be lower than the reported one.
We next make a first step towards investigating the practical usefulness of our methods by applying the classifiers of the previous section on a small corpus of real JavaScript bugs. However, we think that this is a very hard yet interesting problem that should be carefully examined in future work. In order to mine a corpus of real bug changes we used the methodology described in (Karampatsis & Sutton, 2019). We note that we adapted their implementation to utilize the Rhino JavaScript parser3. Their methodology extracts bug-fixing commits and filters them to only keep those that contain small single-statement changes. Finally, it classifies each pair of modified statements by whether they fit a set of mutation patterns. The resulting dataset is shown in Table 6. Upon acceptance of the paper we will release this dataset along with our implementation, the rest of the data used, and the learned representations.
Finally, we queried DeepBugs and SCELMo with each buggy instance as well as its fixed variant and measured the percentage of correctly classified instances for each of the two categories. We also ignored any instances for which the JavaScript parser utilized for both failed to extract an AST. We classified as bugs any instances that were assigned a bug probability greater than 75%. In an actual system this threshold should ideally be tuned on a validation set.
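The evaluation amounts to thresholding the predicted bug probability on each buggy/fixed pair; the sketch below is a simplified, hypothetical version in which the classify function and the toy pair are placeholders rather than the actual classifiers and mined data.

```python
def evaluate_real_bugs(classify, pairs, threshold=0.75):
    """Recall on buggy instances and accuracy on their fixed variants.

    `classify` maps a code instance to a bug probability; `pairs` holds
    (buggy, fixed) instances mined from single-statement bug-fixing commits.
    """
    bugs_found = sum(classify(buggy) > threshold for buggy, _ in pairs)
    fixed_kept = sum(classify(fixed) <= threshold for _, fixed in pairs)
    n = len(pairs)
    return bugs_found / n, fixed_kept / n

# Toy stand-in classifier that flags one specific swapped-arguments call.
dummy = lambda code: 0.9 if code == 'x.indexOf(limit, "a")' else 0.1
pairs = [('x.indexOf(limit, "a")', 'x.indexOf("a", limit)')]
print(evaluate_real_bugs(dummy, pairs))  # (1.0, 1.0)
```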
Table 7 suggests that there might indeed be some potential for future practical applications of neural bug finding techniques. Both models are able to uncover some of the bugs. However, the results also suggest that careful tuning of the prediction threshold might be necessary, especially if we take into account the industrial need to comply with a low false positive rate (FPR). For instance, raising SCELMo’s prediction threshold to 80% for the swap arguments bug results in finding only 3.34% of the bugs but correctly classifying 100% of the repaired function calls, thus achieving a 0.0% false positive rate. Moreover, since SCELMo could not uncover any of the real binary operator bugs, future work could investigate the effect of utilizing different mutation strategies for the purpose of artificial bug induction. Future work could also investigate if fine-tuning on a small set of real bugs could result in more robust classifiers.
8 CONCLUSION
We have presented SCELMo, which is to our knowledge the first language-model based contextual embeddings for source code. Contextual embeddings have many potential advantages for source code, because surrounding tokens can indirectly provide information about tokens, e.g. about likely values of variables. We highlight the utility of SCELMo embeddings by using them within a recent state-of-the-art machine learning based bug detector. The SCELMo embeddings yield a dramatic improvement in the synthetic bug detection performance benchmark, especially on lines of code that contain out-of-vocabulary tokens and complex expressions that can cause difficulty for the method. We also showed and discussed the performance of the resulting bug detectors on a dataset of real bugs raising useful insights for future work.
3https://github.com/mozilla/rhino
A NAME EXTRACTION HEURISTIC
In order for DeepBugs to operate it is necessary to extract identifiers or literals for each expression part of the statement. The bug detector for swapped arguments utilizes the following elements of the function call:
Base Object: The expression on which the function is called.
Callee: The called function.
Argument 1: The expression constituting the first argument of the called function.
Argument 2: The expression constituting the second argument of the called function.
Similarly the bug detectors for incorrect binary operators and operands utilize the following elements of the binary expression:
Binary Operator: The binary operator utilized in the expression.
Left Operand: The left operand of the binary expression.
Right Operand: The right operand of the binary expression.
We next describe the extraction heuristic, which is shared by all the bug detectors. The heuristic takes as input a node n representing an expression and returns name(n) based on the following rules (a code sketch follows the list):
• Identifier: return its name.
• Literal: return its value.
• this expression: return this.
• Update expression with argument x: return name(x).
• Member expression accessing a property p: return name(p).
• Member expression accessing a property base[p]: return name(base).
• Call expression base.callee(...): return name(callee).
• Property node n: If n.key does not exist return name(n.value). If name(n.key) does not exist return name(n.value). Otherwise randomly return either name(n.value) or name(n.key).
• Binary expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist return name(r). If name(r) does not exist return name(l). Otherwise randomly return either name(l) or name(r).
• Logical expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist return name(r). If name(r) does not exist return name(l). Otherwise randomly return either name(l) or name(r).
• Assignment expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist return name(r). If name(r) does not exist return name(l). Otherwise, randomly return either name(l) or name(r).
• Unary expression with argument u: return name(u).
• Array expression with elements li: among all li for which name(li) exists, randomly choose one of them and return name(li).
• Conditional expression with operands c, l, and r: Randomly choose one out of c, l, r for which a name exists and return its name.
• Function expression: return function.
• Object expression: return {.
• New expression with a constructor function call c: return name(c).
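A minimal Python sketch of this heuristic over acorn-style AST nodes is shown below; the dictionary-based node representation and the covered subset of rules are simplifications for illustration, not the actual DeepBugs implementation.

```python
import random

def extract_name(node):
    """Map an acorn-style AST expression node (a dict) to a single name.

    Only a subset of the rules above is covered; unsupported node types
    return None. Random choices are uniform.
    """
    t = node["type"]
    if t == "Identifier":
        return node["name"]
    if t == "Literal":
        return str(node["value"])
    if t == "ThisExpression":
        return "this"
    if t == "UpdateExpression" or t == "UnaryExpression":
        return extract_name(node["argument"])
    if t == "MemberExpression":
        # base.p -> name of the property p; base[p] -> name of the base.
        target = node["object"] if node.get("computed") else node["property"]
        return extract_name(target)
    if t in ("CallExpression", "NewExpression"):
        return extract_name(node["callee"])
    if t in ("BinaryExpression", "LogicalExpression", "AssignmentExpression"):
        left, right = extract_name(node["left"]), extract_name(node["right"])
        if left is None:
            return right
        if right is None:
            return left
        return random.choice([left, right])
    if t == "FunctionExpression":
        return "function"
    if t == "ObjectExpression":
        return "{"
    return None

# userAgent.indexOf -> "indexOf"
example = {"type": "MemberExpression", "computed": False,
           "object": {"type": "Identifier", "name": "userAgent"},
           "property": {"type": "Identifier", "name": "indexOf"}}
print(extract_name(example))
```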
All random decisions follow a uniform distribution. | 1. What is the focus of the paper, and what problem does it aim to solve?
2. What are the strengths of the proposed approach, particularly regarding its basis on ELMo?
3. What are the weaknesses of the paper, especially regarding the data used and the method's applicability to real-world bugs?
4. How does the reviewer assess the paper's technicality and contribution to opening up new research directions?
5. Are there any minor issues or suggestions for improvement in the paper, such as motivating the advantage of learning for bug detection or providing clearer examples? | Review | Review
The paper proposes an embedding method for source code tokens, which is based on contextual word representation, particularly is based on the method of ELMo. The learned representation is evaluated on the task of bug detection, with promising performance.
Strengths:
The paper addresses an important and impactful problem. The solution designed for this problem seems very reasonable. Experiments are useful and reasonable and the experimental results are promising and in the favor of the paper.
The paper is well written and clear.
Weaknesses:
- The data used (in particular the method of buggy code generation applied) seems very specific. It would be interesting to know the performance of the method on real bugs.
- The paper is a bit low in technicality.
Decision: Accept
I think this paper is overall good work and can open a direction of research even beyond the scope of the paper, for example in combining learning and reasoning, or in source code generation with adversarial models.
Minor:
- Since compilers can spot errors in code completely, it would be useful to motivate the advantage of learning for bug detection
- The table referrals in the body of the paper contains wrong table numbers in Sections 6.1, 6.2, 6.3.
- The incorrect Binary Operator example in Listing 2 does not seem to be a well justified bug. It could be a correct piece of code for a different purpose.
- which use -> which we use |
ICLR | Title
How to 0wn the NAS in Your Spare Time
Abstract
New data processing pipelines and novel network architectures increasingly drive the success of deep learning. In consequence, the industry considers top-performing architectures as intellectual property and devotes considerable computational resources to discovering such architectures through neural architecture search (NAS). This provides an incentive for adversaries to steal these novel architectures; when used in the cloud, to provide Machine Learning as a Service (MLaaS), the adversaries also have an opportunity to reconstruct the architectures by exploiting a range of hardware side channels. However, it is challenging to reconstruct novel architectures and pipelines without knowing the computational graph (e.g., the layers, branches or skip connections), the architectural parameters (e.g., the number of filters in a convolutional layer) or the specific pre-processing steps (e.g. embeddings). In this paper, we design an algorithm that reconstructs the key components of a novel deep learning system by exploiting a small amount of information leakage from a cache side-channel attack, Flush+Reload. We use Flush+Reload to infer the trace of computations and the timing for each computation. Our algorithm then generates candidate computational graphs from the trace and eliminates incompatible candidates through a parameter estimation process. We implement our algorithm in PyTorch and TensorFlow. We demonstrate experimentally that we can reconstruct MalConv, a novel data pre-processing pipeline for malware detection, and ProxylessNAS-CPU, a novel network architecture for the ImageNet classification optimized to run on CPUs, without knowing the architecture family. In both cases, we achieve 0% error. These results suggest hardware side channels are a practical attack vector against MLaaS, and more efforts should be devoted to understanding their impact on the security of deep learning systems.
1 INTRODUCTION
To continue outperforming state-of-the-art results, research in deep learning (DL) has shifted from manually engineering features to engineering DL systems, including novel data pre-processing pipelines (Raff et al., 2018; Wang et al., 2019) and novel neural architectures (Cai et al., 2019; Zoph et al., 2018). For example, a recent malware detection system, MalConv, with a manually designed pipeline that combines embeddings and convolutions, achieves a 6% better detection rate than the previous state-of-the-art technique without pre-processing (Raff et al., 2018). In addition to designing data pre-processing pipelines, other research efforts focus on neural architecture search (NAS)—a method to automatically generate novel architectures that are faster, more accurate and more compact. For instance, the recent work of ProxylessNAS (Cai et al., 2019) can generate a novel architecture with a 10% lower error rate and 5x fewer parameters than the previous state-of-the-art generic architecture. As a result, in the industry such novel DL systems are kept as trade secrets or intellectual property as they give their owners a competitive edge (Christian & Vanhoucke, 2017).
These novel DL systems are usually costly to obtain: generating the NASNet architectures (Zoph et al., 2018) takes almost 40K GPU hours and the MalConv authors had to test a large number of failed designs in the process of finding a successful architecture. As a result, an adversary who wishes to have the benefits of such DL systems without incurring the costs has an incentive to steal them. Compared to stealing a trained model (including all the weights), stealing the architectural
This work was done when Michael Davinroy was a research intern at the Maryland Cybersecurity Center.
details that make the victim DL system novel provides the benefit that the new architectures and pipelines are usually applicable to multiple tasks. Training new DL systems based on these stolen details still provides the benefits even when the training data is different: after obtaining them, an attacker can train a functioning model on a different data set and still benefit from the stolen DL system (So et al., 2019; Wang et al., 2019). Further, against a novel system, stealing its architectural details increases the reliability of black-box poisoning and evasion attacks (Demontis et al., 2019). Moreover, stealing leads to threats such as Camouflage attacks (Xiao et al., 2019) that trigger misclassifications by exploiting the image scaling algorithms that are common in DNN pre-processing pipelines.
The emerging Machine-Learning-as-a-Service (MLaaS) model that offers DL computation tools in the cloud makes remote hardware side-channel attacks a practical vector for stealing DL systems (Liu et al., 2015). Unlike prior stealing attacks, these attacks do not require physical proximity to the hardware that runs the system (Batina et al., 2019; Hua et al., 2018) or direct query access to train an approximate model (Tramèr et al., 2016). Cache side-channel attacks have especially been shown as practical in cloud computing for stealing sensitive information, such as cryptographic keys (Liu et al., 2015). Cache side-channel attacks are ubiquitous and difficult to defeat as they are inherent to the microarchitectural design of modern CPUs (Werner et al., 2019).
In this paper, considering the incentives to steal a novel DL system and applicability of cache sidechannel attacks in modern DL settings, we design a practical attack to steal novel DL systems by leveraging only the cache side-channel leakage. Simulating a common cloud computing scenario, our attacker has a co-located VM on the same host machine as the victim DL system, and shares the last-level cache with the victim (Liu et al., 2015). As a result, even though the VMs are running on separate processor cores, the attacker can monitor the cache accesses a DL framework—PyTorch or TensorFlow—makes while the victim system is running (Liu et al., 2015).
The first step of our attack is launching a cache side-channel attack, Flush+Reload (Yarom & Falkner, 2014), to extract a single trace of victim’s function calls (Section 3). This trace corresponds to the execution of specific network operations a DL framework performs, e.g., convolutions or batch-normalizations, while processing an input sample. However, the trace has little information about the computational graph, e.g., the layers, branches or skip connections, or the architectural parameters, e.g., the number of filters in a convolutional layer. The limited prior work on side-channel attacks against DL systems assumed knowledge of the architecture family of the victim DNN (Yan et al., 2018; Duddu et al., 2018); therefore, these attacks are only able to extract variants of generic architectures, such as VGG (Simonyan & Zisserman, 2015) or ResNet (He et al., 2016). To overcome this challenge, we also extract the approximate time each DL operation takes, in addition to the trace, and we leverage this information to estimate the architectural parameters. This enables us to develop a reconstruction algorithm that generates a set of candidate graphs given the trace and eliminates the incompatible candidates given the parameters (Section 4). We apply our technique to two exemplar DL systems: the MalConv data pre-processing pipeline and a novel neural architecture produced by ProxylessNAS.
Contributions. We design an algorithm that reconstructs novel DL systems only by extracting cache side-channel information, that leaks DL computations, using Flush+Reload attack. We show that Flush+Reload reliably extracts the trace of computations and exposes the time each computational step takes in a practical cloud scenario. Using the extracted information, our reconstruction algorithm estimates the computational graph and the architectural parameters.
We demonstrate that our attacker can reconstruct a novel network architecture found by a NAS process (ProxylessNAS) and a novel manually designed data pre-processing pipeline (MalConv) with no reconstruction error.
We demonstrate the threat of practical stealing attacks against DL by exposing that the vulnerability is shared across common DL frameworks, PyTorch and TensorFlow.
2 BACKGROUND
Here, we discuss prior efforts in both crafting and stealing network architectures. There is a growing interest in crafting novel DL systems as they significantly outperform their generic counterparts. The
immense effort and computational costs of crafting them, however, motivates the adversaries to steal them.
Effort to Design Deep Learning Systems. Creating deep learning systems traditionally takes the form of human design through expert knowledge and experience. Some problems require novel designs to manipulate the input in a domain-specific way that DNNs can process more effectively. For example, MalConv malware detection system (Raff et al., 2018) uses a manually designed preprocessing pipeline that can digest raw executable files as a whole. Pseudo LIDAR (Wang et al., 2019), by pre-processing the output of a simple camera sensor into a LIDAR-like representation, achieves four times better object detection accuracy than previous state-of-the-art technique. Moreover, recent work also focuses on automatically generating optimal architectures via neural architecture search (NAS). For example, reinforcement learning (Zoph & Le, 2016) or gradient-based approaches (Cai et al., 2019) have been proposed for learning to generate optimal architectures. Even though NAS procedures have been shown to produce more accurate, more compact and faster neural networks, the computational cost of the search can be an order of magnitude higher than training a generic architecture (Zoph et al., 2018).
Effort to Steal Deep Learning Systems. Prior work on stealing DNN systems focuses on two main threat models based on whether the attacker has physical access to the victim’s hardware. Physical access attacks have been proposed against hardware accelerators and they rely on precise timing measurements (Hua et al., 2018) or electromagnetic emanations (Batina et al., 2019). These attacks are not applicable in the cloud setting we consider. The remote attacks that are applicable in the cloud setting, on the other hand, have the limitation of requiring precise measurements that are impractical in the cloud (Duddu et al., 2018). Further, the attack without this limitation (Hong et al., 2018) requires the attacker to know the family the target architecture comes from; thus, it cannot steal novel architectures. In our work, we design an attack to reconstruct novel DL systems by utilizing a practical cache side-channel attack in the cloud setting.
3 EXTRACTING THE SEQUENCE OF COMPUTATIONS VIA FLUSH+RELOAD
3.1 THREAT MODEL
We consider an attacker who aims to steal the key components in a novel DL system, i.e., a novel pre-processing pipeline or a novel network architecture. We first launch a Flush+Reload (Yarom & Falkner, 2014) attack to extract cache side-channel information leaked by DL computation. Our target setting is a cloud environment, where the victim’s DL system is deployed inside a VM—or a container—to serve the requests of external users. Flush+Reload, in this setting, is known to be a weak, and practical, side-channel attack (Liu et al., 2015). Further, as in MLaaS products in the cloud, the victim uses popular open-source DL frameworks, such as PyTorch (Benoit Steiner, 2019) or TensorFlow (Abadi et al., 2016).
Capabilities. We consider an attacker that owns a co-located VM—or a container—in the same physical host machine as the victim’s system. Prior work has shown that spinning-up the co-located VM in the third-party cloud computing services does not require sophisticated techniques (Ristenpart et al., 2009; Zhang et al., 2011; Bates et al., 2012; Kohno et al., 2005; Varadarajan et al., 2015). Due to the co-location, the last-level cache (L3 cache) in the physical host is shared between multiple cores where the attacker’s and victim’s processes are; thus, our attacker can monitor the victim’s computations leaked at the L3 cache. We also note that, even if the victim uses GPUs, our attacker can still observe the same computations used for CPUs via cache side-channels (see Appendix A).
Knowledge. We consider our attacker and the victim use the same version of the same open-source DL framework. This is realistic, in MLaaS scenarios such as AWS SageMaker or Google Cloud’s AutoML, as cloud providers recommend practitioners to use the common frameworks to construct their systems. These common practices also allow our attacker to reverse-engineer the frameworks offline and identify the lines of code to monitor with the Flush+Reload technique.
For example, AWS provides convenient deployment options for both PyTorch and TensorFlow: https://docs. aws.amazon.com/sagemaker/latest/dg/pytorch.html, and https://docs.aws.amazon.com/sagemaker/latest/dg/tf.html.
3.2 FLUSH+RELOAD MECHANISM
Flush+Reload allows an adversary to continually monitor a victim’s instruction access patterns by observing the time taken to load them from memory. This technique is effective for extracting the computation flow of the victim’s program when the attacker and victim share memory (i.e., a shared library or page deduplication (Bosman et al., 2016)). The attacker flushes specific lines of code in a shared DL framework from the co-located machine’s cache hierarchy and then measures the amount of time it takes to reload the lines of code. If the victim invokes the monitored line of code, the instruction will be reloaded into the shared cache, and when the attacker reloads the instruction, the access to it will be noticeably faster. On the other hand, if the victim does not call the monitored line of code, the access to it will be slower because the instruction needs to be loaded from main memory (DRAM). By repeating this process, our attacker can tell when a victim has accessed a line of code.
3.3 OVERVIEW OF OUR ATTACK PROCEDURE
In Figure 1, we illustrate our attack procedure. We split the steps into two phases: the online phase and the offline phase. In the online phase (step 2), the attacker needs co-location to monitor the computations from the victim’s system. In the offline phase (steps 1, 3, 4, and 5), the attacker does not require co-location with the victim.
1) First, our attacker analyzes the open-source DL framework to identify the lines of code to monitor. The attacker monitors the first line of each function that corresponds to the start of a DL computation. 2) Next, the attacker spins up a co-located VM and launches the Flush+Reload attack to extract the trace of the victim system’s function calls. As the trace does not depend on the input sample, we only require a single trace from one full invocation of the victim system. 3) Since the raw observations with Flush+Reload are noisy, the attacker applies filtering to highlight the regularities of DL computations reflected in the trace. 4) To estimate the architectural parameters, e.g., the input/output channels, kernel size, or strides, our attacker creates lookup tables of timings and the number of matrix multiplications performed, by collecting traces from various parameter combinations. 5) Finally, using the victim’s computational trace and the lookup tables for estimating architectural parameters, the attacker starts the reconstruction process to steal the victim’s DL system (Sec 4).
3.4 MONITORING THE TOY NETWORK COMPUTATIONS VIA FLUSH+RELOAD
Experimental Setup. We implement our attack on Ubuntu 18.04 running on a host machine equipped with an Intel E3-1245v6 3.7GHz processor (8 cores, 32GB memory and 8MB cache shared between cores). For step 1, we analyze two popular open-source DL frameworks, PyTorch and TensorFlow, and identify the list of functions to monitor (see Appendix C for the full list of functions). We leverage the Mastik toolkit (Yarom, 2016) to launch the Flush+Reload attack, and while a victim DL system is running on a VM, our attacker monitors the list of functions—step 2. For the reconstruction process conducted offline after the extraction, we use Python v3.6 to implement the procedure.
https://www.python.org
ToyNet Results. In Figure 2, we demonstrate the extracted trace via Flush+Reload while ToyNet is processing an input. ToyNet is composed of one convolution followed by a batch-norm and one depthwise convolution followed by a batch-norm and a ReLU activation. The 1st convolution has the parameters (in, out, kernel, stride) as (3, 10, 3, 1), and the depthwise convolution’s parameters are (10, 10, 1, 1). The network has a skip connection that adds the intermediate output (from the 1st convolution) to the final output. During inference, we feed in an input with dimensions 3x32x32.
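For concreteness, a PyTorch module matching this description might look as follows; the padding value, the exact point where the skip connection is taken, and the use of ReLU6 (as observed in the Appendix B trace) are our assumptions rather than details stated in the text.

```python
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    """Victim network whose extracted trace is shown in Figure 2."""
    def __init__(self):
        super().__init__()
        # 1st convolution: (in, out, kernel, stride) = (3, 10, 3, 1);
        # padding=1 is an assumption to keep the spatial size at 32x32.
        self.conv = nn.Conv2d(3, 10, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(10)
        # Depthwise convolution: (10, 10, 1, 1), groups equal to the channels.
        self.depthwise = nn.Conv2d(10, 10, kernel_size=1, stride=1, groups=10)
        self.bn2 = nn.BatchNorm2d(10)
        self.relu = nn.ReLU6()

    def forward(self, x):
        skip = self.bn1(self.conv(x))   # intermediate output kept for the skip
        out = self.relu(self.bn2(self.depthwise(skip)))
        return out + skip               # tensor add closing the skip connection

net = ToyNet().eval()
with torch.no_grad():
    y = net(torch.randn(1, 3, 32, 32))  # 3x32x32 input
print(y.shape)
```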
In the middle panel of Figure 2, we also show the raw—noisy—trace from the Flush+Reload output. The trace only includes cache-hits where the attacker’s accesses to the lines of code are faster, i.e., when the victim invokes the function. Each element of the trace includes a timestamp and a function name. The name corresponds to the ToyNet layers, such as Conv2d and BatchNorm2d, and it also contains additional information such as the tensor (add) and the BLAS operations, e.g., GEMM(oncopy).
Our attacker filters the raw trace according to the regular patterns in the DL computation. For example, a long function call, e.g., Conv2d in the ToyNet trace, can appear multiple times in the trace, as the cache can hit multiple times during Flush+Reload. In this case, we condense the multiple occurrences into a single invocation using a heuristic based on how close the timestamps are. We also observe the matrix multiplications such as GEMM(conv) and GEMM(oncopy) while the DL computation is being processed. We count the individual occurrences and sum them up based on the timestamps. After obtaining the processed trace (in the right panel), the attacker starts the reconstruction procedure.
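A simplified, illustrative sketch of this filtering step is shown below; the gap threshold and the (timestamp, name) trace format are placeholder assumptions rather than the exact heuristic.

```python
def condense_trace(raw_trace, gap_cycles=100_000):
    """Collapse repeated cache hits and fold GEMM counts into layer calls.

    `raw_trace` is a list of (timestamp, function_name) pairs; hits on the
    same function closer than `gap_cycles` are merged into one invocation,
    and GEMM hits are counted and attached to the preceding layer entry.
    """
    processed = []
    for ts, name in raw_trace:
        if name.startswith("GEMM"):
            if processed:
                processed[-1]["gemm"] += 1
            continue
        if (processed and processed[-1]["name"] == name
                and ts - processed[-1]["last_ts"] < gap_cycles):
            processed[-1]["last_ts"] = ts  # another hit on the same invocation
            continue
        processed.append({"name": name, "start_ts": ts, "last_ts": ts, "gemm": 0})
    return processed

raw = [(0, "Conv2d"), (40_000, "Conv2d"), (60_000, "GEMM(conv)"),
       (80_000, "GEMM(oncopy)"), (2_000_000, "BatchNorm2d")]
for call in condense_trace(raw):
    print(call)
```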
4 RECONSTRUCTING NOVEL DEEP LEARNING SYSTEMS
After processing the Flush+Reload trace, our attacker reconstructs the key components of the victim’s DL system. In this process, the attacker aims to generate the candidate computational graphs of the victim system and to eliminate the incompatible candidates by estimating the correct parameter set for each computation. For instance, in our ToyNet example, the attacker wants to identify the computational order and the location of the start and end of a branch connection (the computational graph). Also, the same attacker wants to estimate the parameters for each computation; for example, the input/output channels and the kernel size in the 1st Conv2d. In this small network that has one branch, there are only 10 candidate computational graphs; however, considering all possible combinations of parameters, this results in an intractable number of candidates. Prior work, in reconstruction, only considered generic architectures such as VGGs or ResNets with the unrealistic assumption that an attacker knows the architecture family (backbone); however, as our aim is to steal novel DL systems, we do not make this assumption. To overcome this problem, we design a reconstruction procedure, which we describe next.
Knowledge of Our Attacker in Reconstruction. Here, we consider our attacker knows what tensor operations and functions to monitor in the victim’s open-source DL framework. These functions are model-independent; they correspond to architectural attributes designated by the deep learning framework (see Appendix C). We show that this knowledge is sufficient to reconstruct novel data pre-processing pipelines, such as MalConv, that are usually shallower than the network architectures.
To reconstruct the deeper network architecture automatically designed by NAS algorithms, we assume our attacker has some knowledge about the NAS search space—e.g., NASNet search space (Zoph et al., 2018)—the victim’s search process relies on. This knowledge includes the list of layers used and the fact that a set of layers (known as blocks) is repeatedly used, such as the Normal and Reduction Blocks in NASNet. We make this assumption because, from the sequence of computations observed via Flush+Reload, our attacker can easily identify a set of layers and the repetitions of the layers. However, we do not assume how each block is composed by using the layer observations directly; instead, we identify candidate blocks by using a sequence mining algorithm. We demonstrate that, under these assumptions, our attack reconstructs the ProxylessNAS-CPU in 12 CPU hours rather than running a NAS algorithm from scratch that takes 40k GPU hours.
4.1 OVERVIEW OF OUR RECONSTRUCTION PROCEDURE.
We first focus on the invariant rules in DL computations. For instance, there are unary operations and binary operations. The tensor addition used to implement a skip connection is a binary operation; thus, our attacker can supplement the reconstruction process by pruning the incompatible candidates. We also exploit the fact that computation time is proportional to the number of element-wise multiplications in a computation. In the ToyNet example, the time the 1st convolution takes (2 million cycles) is shorter than that of the 2nd depthwise convolution (3.668 million cycles); thus, our attacker further eliminates the candidates by comparing the possible parameters for a computation with her offline profiling data—the lookup table.
Our reconstruction procedures consist of two steps:
1) Generation: The attacker generates the candidate computational graphs from the Flush+Reload trace based on the invariant rules in DL computations. Using these rules, our attacker reduces the number of candidates significantly.
2) Elimination: Our attacker compares the time each computation takes with the profiling data and prunes the incompatible candidates. We estimate the parameters sequentially starting from the input. When the output dimension from a candidate does not match the observation, we eliminate it.
Error Metrics. To quantify the error of our reconstruction result, we use two similarity metrics. First, we use the graph edit distance (GED) (Abu-Aisheh et al., 2015) to compare the reconstructed computational graph with that of the victim. Second, we use the ℓ1-distance to compute the error between the estimated architectural parameters and those in the victim system.
Victims. We first reconstruct MalConv (Raff et al., 2018), a novel data pre-processing pipeline that converts a binary file into a specific format so that a neural network can digest it easily. Also, we show that our attacker can reconstruct the novel ProxylessNAS (Cai et al., 2019) architecture that achieves improved accuracy on ImageNet classification with less computational cost on a CPU.
4.2 RECONSTRUCTING NOVEL PRE-PROCESSING PIPELINES
Here, we elaborate on the reconstruction process of the MalConv (Raff et al., 2018) data pre-processing pipeline. MalConv receives the raw bytes of .exe files and determines whether the file is malicious or not. The uniqueness of MalConv comes from the way that it treats the sequence of bytes: 1) Code instructions in a binary file are correlated spatially, but the correlation has discontinuities from function calls and jump commands that are difficult to capture by sequence models, e.g., RNNs.
Note that ProxylessNAS starts its searching process from a backbone architecture such as NASNet; thus, even if the paper reported a search took 200 GPU hours, this number does not include the time spent searching a backbone architecture, i.e., the 40k GPU hours to find NASNet.
2) Also, each sequence has on the order of two million steps, which far exceeds the length of an input to any previous neural network classifier. MalConv tackles this problem by pre-processing the sequence of bytes (Figure 3). It first splits the upper four bits and the lower four bits of each byte (narrow operations); this helps the network capture the locality of closer bytes and distant bytes. Next, the pipeline uses one-dimensional convolution to extract such localities and performs the element-wise multiplication of the two outputs. Before feeding this information to the neural network, the pipeline uses max-pooling to reduce the training time caused by processing inputs with large dimensions. All these heuristics are examined manually (see Section 4 of the original paper); thus, our attacker can save time and effort by stealing the pipeline.
Generate Computational Graphs. The first step of our attacker is to reconstruct the computational graph candidates for the victim pipeline from the Flush+Reload trace. As we can see in the trace in Figure 3, the attacker cannot simply connect the components in the trace sequentially because of the branch connection, e.g., [7] * (multiply). Also, from this component, our attacker knows where a branch ends but cannot know where the branch started. We solve this problem by populating all possible candidates and pruning them later with the parameter estimation.
Our algorithm populates the candidate computational graphs; sample candidates are shown in Appendix E. Our solution uses a recursive algorithm. Given a trace from Flush+Reload (T), we pop each computation t from the back and construct the list of candidates l. At a high level, the algorithm first traverses all the possible connections from the last computation to the first by using recursion. Then, when the base condition is met (i.e., the algorithm arrives at the first computation, Embeddings), we backtrack the recursions to construct the list of candidate computational graphs. We focus on the computation type in this backtracking process; there are unary and binary computations. For the unary operations, we simply connect the current and preceding computations. For the binary operations, however, we split all the preceding computations into a set of two lists. Each of the two lists corresponds to a branch, and we continue backtracking for each branch and include all of the constructions in our results. At the end, we found 20 candidates.
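A simplified, illustrative sketch of this recursive enumeration follows; it considers only contiguous splits of the preceding computations (consistent with a serial execution trace), ignores shared prefixes between branches, and uses a shortened stand-in for the actual MalConv trace.

```python
BINARY_OPS = {"add", "multiply"}

def candidate_graphs(trace):
    """Enumerate candidate computational graphs for a serial trace.

    A unary computation is linked to the single sub-graph producing its
    input; a binary computation is linked to every contiguous split of the
    preceding computations into a left and a right branch.
    """
    if not trace:
        return [None]
    op, rest = trace[-1], trace[:-1]
    results = []
    if op in BINARY_OPS:
        for i in range(1, len(rest)):
            for left in candidate_graphs(rest[:i]):
                for right in candidate_graphs(rest[i:]):
                    results.append((op, left, right))
    else:
        for child in candidate_graphs(rest):
            results.append((op, child))
    return results

toy_trace = ["Embedding", "transpose", "Conv1d", "Conv1d", "Sigmoid", "multiply"]
for graph in candidate_graphs(toy_trace):
    print(graph)
```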
Eliminate Candidates with Computational Parameters. Next, our attacker further prunes the candidates based on the computational parameter estimation process. Our attacker, most importantly, focuses on the fact that computation time is dependent on the size of the matrix multiplication. This enables our attacker to profile the computational time taken for a set of parameter
combinations in advance. The attacker is able to perform this offline by taking advantage of cloud infrastructure: the hardware and software stacks composing the cloud are consistent. In the MalConv reconstruction, we profile the timing of the convolution and linear operations. For the convolutions, we consider input/output channels {1, 2, 4, 8, 16, 32, 128, 256}, kernels {1, 3, 5, 7, 11, 100, 200, 500, 1k, 10k}, and strides {1, 2, 5, 10, 100, 200, 500, 1k, 10k}. For the linear layers, we use input {4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048} and output dimensions {1, 10, 16, 20, 32, 40, 100, 128, 256, 512, 1k, 1024, 2048}. Once our attacker has the timing profiles of these parameter combinations, the attacker defines the potential parameter sets for the convolutions and linear layers. Then, the attacker checks, for each candidate, whether the computational graph returns the correct output dimension (1,) for the input (8, 2000000). In this pruning process, there are other operations such as Sigmoid, * (multiply), transpose, narrow, or pooling. We apply universal rules for each case: 1) the Sigmoid and multiply do not change the input/output dimensions, 2) the transpose only swaps two dimensions of an input, 3) the narrow slices one chosen dimension, e.g., (8,2000000) to (4,1000000), thus we consider all the possible slices when checking, and 4) the pooling only requires us to estimate its window size, so we match this value to the stride of a preceding convolution. At the end of this parameter estimation, we narrow down to only one architecture with the correct set of computational parameters, i.e., 0% error.
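The profiling-and-pruning step described above can be sketched as follows; NumPy's matmul stands in for the framework's BLAS call, and the reduced parameter sets, sequence length, and 15% tolerance are illustrative assumptions rather than the exact values used.

```python
import itertools
import time
import numpy as np

def conv1d_gemm_time(in_ch, out_ch, kernel, stride, length=200_000, repeats=5):
    """Time the GEMM behind a 1-D convolution for one parameter combination.

    An attacker would profile on hardware matching the victim's cloud
    instance; here a plain NumPy matmul stands in for the BLAS call.
    """
    cols = (length - kernel) // stride + 1
    x = np.random.rand(in_ch * kernel, cols)
    w = np.random.rand(out_ch, in_ch * kernel)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        w @ x
        best = min(best, time.perf_counter() - start)
    return best

def build_lookup(channels, kernels, strides):
    return {(ci, co, k, s): conv1d_gemm_time(ci, co, k, s)
            for ci, co, k, s in itertools.product(channels, channels, kernels, strides)}

def plausible_params(observed, table, tol=0.15):
    """Keep parameter sets whose profiled time is within the tolerance of the observation."""
    return [p for p, t in table.items() if abs(t - observed) <= tol * observed]

table = build_lookup(channels=[4, 8, 16], kernels=[100, 500], strides=[100, 500])
observed = conv1d_gemm_time(8, 8, 500, 500)  # pretend this came from the trace
print(plausible_params(observed, table))
```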
4.3 RECONSTRUCTING NOVEL NETWORK ARCHITECTURES
Here, we show our attacker is able to steal a novel network architecture by describing the reconstruction process of ProxylessNAS-CPU (Cai et al., 2019), which improves the accuracy of an existing architecture, MobileNetV2 (Sandler et al., 2018), and also reduces the computation time. Indeed, the NAS search procedure warm-starts from an over-parameterized MobileNetV2 as a backbone; however, in our attack, we assume our attacker is not aware of the backbone. Instead, we assume our attacker only knows the search space of MNasNet (Tan et al., 2019) (see Appendix D), from which the authors arrived at MobileNetV2, as opposed to the recent attacks in Sec 2.
Knowing the search space does not, however, reduce the amount of effort required of our attacker in reconstruction. The network architectures found by the NAS procedure are commonly wide and deep, and they include multiple branch connections; thus, our attacker would need to consider an exponential number of candidate computational graphs and computation parameters, which makes the attack infeasible. To tackle this issue, we focus on the NAS procedure—this process factorizes the entire architecture into blocks by their functions. For instance, NASNet (Zoph et al., 2018) is composed of normal cells (blocks) and reduction cells. Within each block, the process considers the architecture combinations that provide the optimal performance. Thus, we first identify the potential blocks before we initiate the process for reconstructing candidate computational graphs.
Identifying Candidate Blocks. We utilize a frequent subsequence mining (FSM) method to identify the blocks composing the ProxylessNAS-CPU architecture. Our FSM method is simple: we iterate over the Flush+Reload trace with fixed windows and count the occurrences of each subsequence. Since the attacker knows that, in the search space the victim uses, a maximum of nine computations compose a block, we consider window sizes from one to nine. Once we have counted the number of occurrences of each subsequence (candidate block), we prune
https://aws.amazon.com/ec2/instance-types/
them based on the rules in the search space: 1) a Conv2d operation is followed by a BatchNorm, 2) a block with a DepthConv2d must end with a Conv2d and BatchNorm (for a depthwise separable convolution), 3) a branch connection cannot merge (add) in the middle of the block, and 4) we take the most frequent block in each window. In Table 2, we describe the 9 identified blocks. We then run the generation process of reconstructing candidate computational graphs with the blocks instead of using each computation in the trace. At the end, we have 180,224 candidate computational graphs.
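A minimal, illustrative sketch of this windowed counting and rule-based pruning is shown below; only the first rule is encoded, and the toy trace is invented rather than taken from the real ProxylessNAS-CPU trace.

```python
from collections import Counter

def frequent_blocks(trace, max_window=9):
    """Count every contiguous subsequence of length 1..max_window in the trace."""
    counts = Counter()
    for window in range(1, max_window + 1):
        for i in range(len(trace) - window + 1):
            counts[tuple(trace[i:i + window])] += 1
    return counts

def respects_rules(block):
    # Rule 1: a Conv2d inside the block must be immediately followed by a
    # BatchNorm (a Conv2d at the very end of the window is not rejected here).
    return all(block[i + 1] == "BatchNorm"
               for i, op in enumerate(block[:-1]) if op == "Conv2d")

trace = ["Conv2d", "BatchNorm", "ReLU6", "DepthConv2d", "BatchNorm", "ReLU6",
         "Conv2d", "BatchNorm", "add"] * 4
candidates = {b: c for b, c in frequent_blocks(trace).items() if respects_rules(b)}
for block, count in sorted(candidates.items(), key=lambda kv: -kv[1])[:3]:
    print(count, block)
```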
Eliminate Candidates with Computational Parameters. For each candidate composed of known blocks, our attacker estimates the computation parameters. However, the number of parameter combinations is also exponential; for example, within the search space, a Conv2d can have any number of input/output channels, kernel sizes {1, 3, 5}, and strides {1, 2}. Thus, we focus on the computation rules in a block. 1) We first note that a DepthConv2d can only have the same number of input and output channels. Also, the channel size can be identified from the number of GEMM(conv) operations. For instance, in Figure 4, the DepthConv2d has 143 GEMM(conv) invocations, which is close to the channel size. Since the operation commonly has an even number of channels, the attacker can easily reduce the candidates to 142 or 144. 2) We also know that the number of GEMM(oncopy) invocations is proportional to the matrix multiplication size in a Conv2d; thus, the attacker can compare the offline profiling results with the processed traces and estimate the parameters. For instance, the 1st Conv2d has 20 GEMM(oncopy) invocations, and we approximately know a set of input dimensions, e.g., (20–30, 112, 112), from the previous block estimation. Thus, our attacker only profiles the variations of input channels {20–30}, kernels {1, 3, 5}, and strides {1, 2} (60 cases in total) and checks if there is a match. Moreover, 3) the Conv2d after a DepthConv2d is the pointwise linear operation whose kernel size and stride are one, which further reduces the attacker’s effort. Our attacker runs this elimination process and finally narrows down to only one architecture with the correct set of computational parameters, i.e., 0% error.
5 DISCUSSION
In this section, we discuss defense mechanisms that prevent our attacker from reconstructing the victim’s DL system with an exact match. Prior work on defenses against cache side-channel attacks proposed system-level solutions (Kim et al., 2012; Liu et al., 2016; Werner et al., 2019). However, applying them requires infrastructure-wide changes from cloud providers. Also, even if the infrastructure is resilient to cache side-channel attacks, an attacker can leverage other attack vectors to leak similar information. Thus, we focus on the defenses that can be implemented in DL frameworks.
We design our defense mechanisms to obfuscate what the attacker observes via cache side-channels by increasing the noise in computations supported by DL frameworks. We discuss four approaches that blend noise into components of a DL framework; however, these mechanisms introduce a computational overhead by performing additional operations. This highlights that defending against our attack is not trivial and efficient countermeasures require further research.
Padding Zeros to the Matrix Multiplication Operands. Our reconstruction algorithm estimates the computational parameters such as kernel sizes or strides based on the time taken for matrix multiplication. Hence, we consider increasing the size of operands randomly by padding zeros to
them. We keep the original sizes of the operands and, after the multiplication of the augmented tensors, we convert the resulting tensor into that of the correct dimensions by removing the extra elements. With the augmentation, our attacker finds it difficult to reconstruct the victim’s DL system exactly by monitoring a single query. However, if our attacker can observe computations with multiple queries, the attacker can cancel out the noise and estimate the parameters correctly.
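A sketch of this defense for a plain matrix product is shown below; the range of 1 to 16 padded rows/columns is an arbitrary choice for illustration.

```python
import numpy as np

def padded_matmul(a, b, rng=np.random.default_rng()):
    """Matrix product with randomly zero-padded operands.

    The operands are grown by a random number of all-zero rows and columns so
    the GEMM timing no longer reveals the true dimensions; the extra rows and
    columns of the result are discarded afterwards.
    """
    pad_m, pad_k, pad_n = rng.integers(1, 17, size=3)
    a_pad = np.pad(a, ((0, pad_m), (0, pad_k)))
    b_pad = np.pad(b, ((0, pad_k), (0, pad_n)))
    c_pad = a_pad @ b_pad
    return c_pad[:a.shape[0], :b.shape[1]]

a, b = np.random.rand(3, 5), np.random.rand(5, 7)
assert np.allclose(padded_matmul(a, b), a @ b)
```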
Adding Null/Useless Network Operations. This reconstruction attack assumes all the computations observed in the Flush+Reload trace are used to compute the output of a DL system. Thus, a defender can modify the victim’s architecture so that it includes identity layers or branches whose outputs are not used. We hypothesize that a small number of null/useless operations will not increase the attacker’s computational burden significantly; this addition only increases the time needed to reconstruct the victim’s architecture by a few hours. If the defender includes an excessive amount of null/useless layers or branches, this can significantly increase the reconstruction time. However, this defense suffers from two issues: 1) the defense may still not make the reconstruction impossible, and 2) the victim also has to perform the additional operations, which increases network evaluation time significantly.
Shuffling the Computation Order. We have seen in popular DL frameworks that, once a network architecture is defined, the computational order of performing operations is also invariant. We are able to shuffle the computation order of the victim’s DL system each time when the system processes an input. In particular, we can identify the dependency of operations in a victim’s DL system and compute the independent operations in a different order each time. This approach will make the observations from cache side-channels inconsistent, which results in the exponential number of candidate architectures that our attacker needs to consider. However, to compute the independent operations separately, the defender needs to store intermediate results in memory while processing an input; thus, this approach increases the space overhead of the DL computations.
Running Decoy Operations in Parallel. Lastly, we can make a DL framework run separate networks (decoy operations) in parallel on the same physical host. These networks obfuscate what our attacker will observe via Flush+Reload. Here, the attacker cannot reconstruct the victim architecture by monitoring a single query because the computational order does not reflect how the victim’s architecture is defined. However, if our attacker can observe the computations over multiple queries, the attacker can use the frequent sequence mining (FSM) that we used in the block identification to identify a repeated set of operations and can reconstruct the victim architecture. This defense also increases network evaluation time by running extra operations on the same machine.
6 CONCLUSIONS AND FUTURE WORK
This work presents an attack that reconstructs a victim’s novel DL system through the information leakage from a cache side-channel, Flush+Reload. We steal key components of the victim’s system: a novel pre-processing pipeline and a novel network architecture. Observing the DL computations and the time to complete each computation enables the attacker to populate all candidate computational graphs and prune them with our parameter estimation process. In experiments, we demonstrate the feasibility of this reconstruction attack by reconstructing MalConv, a novel pre-processing pipeline for malicious file detection, and ProxylessNAS-CPU, a novel architecture for the ImageNet classification optimized to run on CPUs. We do this with 0% error. As novel DL systems become trade secrets, our results highlight the demands for future work on countermeasures against model theft.
ACKNOWLEDGMENTS
We thank the anonymous reviewers for their valuable feedback. This research was partially supported by the Department of Defense, by NSF grants #CNS-1933033, #CNS-1840893, #CNS1453045 (CAREER), by a research partnership award from Cisco and by financial assistance award 70NANB15H328 from the U.S. Department of Commerce, National Institute of Standards and Technology. We would like to thank the NSF REU-CAAR program (NSF grant #CCF-1560193).
A APPLICABILITY TO GPUS
Our attack is not fundamentally different for GPUs. In most deep learning frameworks, when a network performs a computation, it invokes the same function implemented in C++ and the function decides whether the back-end computation can use GPUs or not. This practice maximizes the hardware compatibility of a framework; however, this also makes the framework vulnerable to our attacker who can still observe the common functions listed in Table 3 by monitoring the shared cache. On GPUs the timings would be different, so we would have to profile the computational times, e.g., the time taken for the matrix multiplication with various sizes of tensor operands. However, on both CPUs and GPUs, the computation time is proportional to the size of tensor operands, which enables our attacker to estimate the architecture parameters with timing observations.
B DL COMPUTATIONS MONITORED IN PYTORCH AND TENSORFLOW
Figure 5 describes the reconstruction process of a small network in both the PyTorch and TensorFlow frameworks. On the left, we have the ground truth of the ToyNet architecture, which represents an example of a possible residual block in a victim network. In the middle and right, we show the observations of an adversary monitoring both PyTorch and TensorFlow code. The first entry indicates the monitored function corresponding to the desired architectural attribute. The second entry indicates the timestamp at which the adversary observes these functions, and the last entry is the number of general matrix multiplication (GEMM) function calls for the given layer observation.
Naming conventions vary slightly between the two frameworks, but the information inferred is the same. The adversary attacking both networks sees functions calls that correspond to architectural attributes in the same order: Conv2d, BatchNorm2d, Conv2d/DepthwiseConv, BatchNorm2d, ReLU6, and TensorAdd. PyTorch does not distinguish between Conv2d and DepthwiseConv, but as stated in 4.1, we can differentiate the layers by timing data. Additionally, PyTorch and TensorFlow use different linear algebra libraries to perform matrix computation, so the implementations differ slightly. However, they both use variations on matrix multiplication algorithms that take into account system level optimizations, such as cache size (e.g. Goto’s algorithm). In both cases, we observe operations in nested iterations of these implementations and are able to monitor instructions that correspond to the size of the matrices being multiplied, giving an adversary the ability to estimate the parameters of the convolution layers.
To perform the estimations of these layer parameters, the adversary can profile candidates offline on similar hardware. They can then create a data set of candidate parameters for given observation ranges. For instance, the number of observed GEMM calls in the PyTorch example for the depthwise convolution layer gives the attacker the information that there are 10 output channels, and therefore also ten output channels in the 1st convolution. Additionally, the observed GEMM calls for the 1st convolution layer give the candidate kernel sizes of 3 and 5. Likewise in TensorFlow, the observed instructions fit the candidate kernel sizes of 3 or 5, and 0-24 output channels. Therefore, these
Customization of operations in TensorFlow: https://www.tensorflow.org/guide/create_op
exploitable vulnerabilities exist independent of the specific deep learning framework a victim is using.
C LIST OF FUNCTIONS MONITORED VIA FLUSH+RELOAD
Table 3 shows the exact lines of code we monitor in the PyTorch and TensorFlow frameworks. We use PyTorch v1.2.0 and Tensorflow v1.14.0. In both the frameworks, we are able to monitor a similar set of DL computations in the C++ native implementations. However, the back-end libraries supporting the matrix multiplications are different, i.e., PyTorch is compiled with OpenBLAS whereas TensorFlow uses Eigen and MKL-DNN. Even if the libraries are different, the multiplications are implemented using GOTO’s algorithm (Goto & Geijn, 2008). Therefore, we monitor the number of iterations of for-loops to estimate the overall size of a matrix multiplication.
D MNASNET SEARCH SPACE
Tan et al. (2019) utilize a hierarchical search space over six parameters: ConvOp, KernelSize, SERatio, SkipOp, FilterSize, and #Layers. They choose to partition a CNN into a known, finite set of blocks and then further divide these blocks into possibly repeated layers. The number of repeats per layer in a given block i is a searchable parameter Ni, which is bounded at ±1 the number of layers in MobileNetV2 on which block i is based. These layers are further divided into three possible
https://github.com/pytorch/pytorch/commit/8554416a199c4cec01c60c7015d8301d2bb39b64
https://github.com/tensorflow/tensorflow/commit/87989f69597d6b2d60de8f112e1e3cea23be7298
network layers (ConvOp): regular convolution, depthwise convolution, or mobile inverted bottleneck convolution. Additionally, the network layer parameters can vary. These parameters include the convolution kernel size (KernelSize), the squeeze-and-excitation ratio (SERatio), a possible skip op (SkipOp), and the output filter size (FilterSize). The squeeze-and-excitation ratio (SERatio) of a given layer varies between 0 and 0.025; the convolution kernel size varies between 3 and 5; the skip op is either pooling, identity residual, or no skip; and the filter size varies between 0.75, 1.0, and 1.25 the filter size of the corresponding block in MobileNetV2. Overall, this gives a claimed typical search space size of 10^13 possibilities with 5 blocks, 3 average layers per block, and 432 options for the sub search space of each block. This size compares to the per-layer approach with the same parameters, which has a search space size of 10^39.
E SEARCHING CANDIDATE COMPUTATIONAL GRAPHS | 1. How effective and efficient is the proposed method in reconstructing a victim's neural architecture, particularly compared to performing NAS directly?
2. What are the limitations of the proposed approach in terms of applicability to various network structures, such as sequence networks or graph convolutional networks?
3. Under what conditions can the proposed method successfully reconstruct a target neural architecture with zero error, and what factors might affect its ability to find an exact match? | Review | Review
This paper proposes a way to attack and reconstruct a victim's neural architecture that is co-located on the same host. They do it through cache side-channel leakage and use Flush+Reload to extract the trace of the victim's function calls, which reveals specific network operations. To recover the computational graph, they use the approximate time each operation takes to prune out any incompatible candidate computation graph. They show that they can exactly reconstruct MalConv and ProxylessNAS.
The paper looks very interesting but also alarming -- more research should be done on countermeasures against this attack. I have the following questions:
1. To reconstruct the network, you need to generate a potentially exponential number of candidates and do some pruning based on the estimated parameters. This also looks very expensive. I am wondering, compared to just doing NAS yourself, how much gain in terms of resources and time this attack can give?
2. What are the limitations of the proposed approach, i.e., does it work on any network structure, e.g., sequence networks, graph convolutional networks, etc.?
3. In the experiments shown, you can reconstruct MalConv and ProxylessNAS with zero error; does the proposed approach always find the exact match? Under what circumstances can you find the exact match? |
ICLR | Title
How to 0wn the NAS in Your Spare Time
Abstract
New data processing pipelines and novel network architectures increasingly drive the success of deep learning. In consequence, the industry considers top-performing architectures as intellectual property and devotes considerable computational resources to discovering such architectures through neural architecture search (NAS). This provides an incentive for adversaries to steal these novel architectures; when used in the cloud, to provide Machine Learning as a Service (MLaaS), the adversaries also have an opportunity to reconstruct the architectures by exploiting a range of hardware side channels. However, it is challenging to reconstruct novel architectures and pipelines without knowing the computational graph (e.g., the layers, branches or skip connections), the architectural parameters (e.g., the number of filters in a convolutional layer) or the specific pre-processing steps (e.g., embeddings). In this paper, we design an algorithm that reconstructs the key components of a novel deep learning system by exploiting a small amount of information leakage from a cache side-channel attack, Flush+Reload. We use Flush+Reload to infer the trace of computations and the timing for each computation. Our algorithm then generates candidate computational graphs from the trace and eliminates incompatible candidates through a parameter estimation process. We implement our algorithm in PyTorch and TensorFlow. We demonstrate experimentally that we can reconstruct MalConv, a novel data pre-processing pipeline for malware detection, and ProxylessNAS-CPU, a novel network architecture for the ImageNet classification optimized to run on CPUs, without knowing the architecture family. In both cases, we achieve 0% error. These results suggest hardware side channels are a practical attack vector against MLaaS, and more efforts should be devoted to understanding their impact on the security of deep learning systems.
1 INTRODUCTION
To continue outperforming state-of-the-art results, research in deep learning (DL) has shifted from manually engineering features to engineering DL systems, including novel data pre-processing pipelines (Raff et al., 2018; Wang et al., 2019) and novel neural architectures (Cai et al., 2019; Zoph et al., 2018). For example, a recent malware detection system MalConv, with a manually designed pipeline that combines embeddings and convolutions, achieves 6% better detection rate over previous state-of-the-art technique without pre-processing (Raff et al., 2018). In addition to designing data pre-processing pipelines, other research efforts focus on neural architecture search (NAS)—a method to automatically generate novel architectures that are faster, more accurate and more compact. For instance, the recent work of ProxylessNAS (Cai et al., 2019) can generate a novel architecture with 10% less error rate and 5x fewer parameters than previous state-of-the-art generic architecture. As a result, in the industry such novel DL systems are kept as trade secrets or intellectual property as they give their owners a competitive edge (Christian & Vanhoucke, 2017).
These novel DL systems are usually costly to obtain: generating the NASNet architectures (Zoph et al., 2018) takes almost 40K GPU hours and the MalConv authors had to test a large number of failed designs in the process of finding a successful architecture. As a result, an adversary who wishes to have the benefits of such DL systems without incurring the costs has an incentive to steal them. Compared to stealing a trained model (including all the weights), stealing the architectural
This work was done when Michael Davinroy was a research intern at the Maryland Cybersecurity Center.
details that make the victim DL system novel provides the benefit that the new architectures and pipelines are usually applicable to multiple tasks. Training new DL systems based on these stolen details still provides the benefits, even when the training data is different. After obtaining these details, an attacker can train a functioning model, even on a different data set, and still benefit from the stolen DL system (So et al., 2019; Wang et al., 2019). Further, against a novel system, stealing its architectural details increases the reliability of black-box poisoning and evasion attacks (Demontis et al., 2019). Moreover, stealing leads to threats such as Camouflage attacks (Xiao et al., 2019) that trigger misclassifications by exploiting the image scaling algorithms that are common in DNN pre-processing pipelines.
The emerging Machine-Learning-as-a-Service (MLaaS) model that offers DL computation tools in the cloud makes remote hardware side-channel attacks a practical vector for stealing DL systems (Liu et al., 2015). Unlike prior stealing attacks, these attacks do not require physical proximity to the hardware that runs the system (Batina et al., 2019; Hua et al., 2018) or direct query access to train an approximate model (Tramèr et al., 2016). Cache side-channel attacks have especially been shown as practical in cloud computing for stealing sensitive information, such as cryptographic keys (Liu et al., 2015). Cache side-channel attacks are ubiquitous and difficult to defeat as they are inherent to the microarchitectural design of modern CPUs (Werner et al., 2019).
In this paper, considering the incentives to steal a novel DL system and the applicability of cache side-channel attacks in modern DL settings, we design a practical attack to steal novel DL systems by leveraging only the cache side-channel leakage. Simulating a common cloud computing scenario, our attacker has a co-located VM on the same host machine as the victim DL system, and shares the last-level cache with the victim (Liu et al., 2015). As a result, even though the VMs are running on separate processor cores, the attacker can monitor the cache accesses a DL framework—PyTorch or TensorFlow—makes while the victim system is running (Liu et al., 2015).
The first step of our attack is launching a cache side-channel attack, Flush+Reload (Yarom & Falkner, 2014), to extract a single trace of victim’s function calls (Section 3). This trace corresponds to the execution of specific network operations a DL framework performs, e.g., convolutions or batch-normalizations, while processing an input sample. However, the trace has little information about the computational graph, e.g., the layers, branches or skip connections, or the architectural parameters, e.g., the number of filters in a convolutional layer. The limited prior work on side-channel attacks against DL systems assumed knowledge of the architecture family of the victim DNN (Yan et al., 2018; Duddu et al., 2018); therefore, these attacks are only able to extract variants of generic architectures, such as VGG (Simonyan & Zisserman, 2015) or ResNet (He et al., 2016). To overcome this challenge, we also extract the approximate time each DL operation takes, in addition to the trace, and we leverage this information to estimate the architectural parameters. This enables us to develop a reconstruction algorithm that generates a set of candidate graphs given the trace and eliminates the incompatible candidates given the parameters (Section 4). We apply our technique to two exemplar DL systems: the MalConv data pre-processing pipeline and a novel neural architecture produced by ProxylessNAS.
Contributions. We design an algorithm that reconstructs novel DL systems only by extracting cache side-channel information that leaks DL computations, using the Flush+Reload attack. We show that Flush+Reload reliably extracts the trace of computations and exposes the time each computational step takes in a practical cloud scenario. Using the extracted information, our reconstruction algorithm estimates the computational graph and the architectural parameters.
We demonstrate that our attacker can reconstruct a novel network architecture found by a NAS process (ProxylessNAS) and a novel manually designed data pre-processing pipeline (MalConv) with no reconstruction error.
We demonstrate the threat of practical stealing attacks against DL by exposing that the vulnerability is shared across common DL frameworks, PyTorch and TensorFlow.
2 BACKGROUND
Here, we discuss prior efforts in both crafting and stealing network architectures. There is a growing interest in crafting novel DL systems as they significantly outperform their generic counterparts. The
immense effort and computational costs of crafting them, however, motivates the adversaries to steal them.
Effort to Design Deep Learning Systems. Creating deep learning systems traditionally takes the form of human design through expert knowledge and experience. Some problems require novel designs to manipulate the input in a domain-specific way that DNNs can process more effectively. For example, MalConv malware detection system (Raff et al., 2018) uses a manually designed preprocessing pipeline that can digest raw executable files as a whole. Pseudo LIDAR (Wang et al., 2019), by pre-processing the output of a simple camera sensor into a LIDAR-like representation, achieves four times better object detection accuracy than previous state-of-the-art technique. Moreover, recent work also focuses on automatically generating optimal architectures via neural architecture search (NAS). For example, reinforcement learning (Zoph & Le, 2016) or gradient-based approaches (Cai et al., 2019) have been proposed for learning to generate optimal architectures. Even though NAS procedures have been shown to produce more accurate, more compact and faster neural networks, the computational cost of the search can be an order of magnitude higher than training a generic architecture (Zoph et al., 2018).
Effort to Steal Deep Learning Systems. Prior work on stealing DNN systems focuses on two main threat models, based on whether the attacker has physical access to the victim's hardware. Physical access attacks have been proposed against hardware accelerators, and they rely on precise timing measurements (Hua et al., 2018) or electromagnetic emanations (Batina et al., 2019). These attacks are not applicable in the cloud setting we consider. The remote attacks that are applicable in the cloud setting, on the other hand, have the limitation of requiring precise measurements that are impractical in the cloud (Duddu et al., 2018). Further, the attack without this limitation (Hong et al., 2018) requires the attacker to know the family the target architecture comes from; thus, it cannot steal novel architectures. In our work, we design an attack to reconstruct novel DL systems by utilizing a practical cache side-channel attack in the cloud setting.
3 EXTRACTING THE SEQUENCE OF COMPUTATIONS VIA FLUSH+RELOAD
3.1 THREAT MODEL
We consider an attacker who aims to steal the key components in a novel DL system, i.e., a novel pre-processing pipeline or a novel network architecture. We first launch a Flush+Reload (Yarom & Falkner, 2014) attack to extract cache side-channel information leaked by DL computation. Our target setting is a cloud environment, where the victim’s DL system is deployed inside a VM—or a container—to serve the requests of external users. Flush+Reload, in this setting, is known to be a weak, and practical, side-channel attack (Liu et al., 2015). Further, as in MLaaS products in the cloud, the victim uses popular open-source DL frameworks, such as PyTorch (Benoit Steiner, 2019) or TensorFlow (Abadi et al., 2016).
Capabilities. We consider an attacker that owns a co-located VM—or a container—in the same physical host machine as the victim’s system. Prior work has shown that spinning-up the co-located VM in the third-party cloud computing services does not require sophisticated techniques (Ristenpart et al., 2009; Zhang et al., 2011; Bates et al., 2012; Kohno et al., 2005; Varadarajan et al., 2015). Due to the co-location, the last-level cache (L3 cache) in the physical host is shared between multiple cores where the attacker’s and victim’s processes are; thus, our attacker can monitor the victim’s computations leaked at the L3 cache. We also note that, even if the victim uses GPUs, our attacker can still observe the same computations used for CPUs via cache side-channels (see Appendix A).
Knowledge. We consider our attacker and the victim use the same version of the same open-source DL framework. This is realistic, in MLaaS scenarios such as AWS SageMaker or Google Cloud’s AutoML, as cloud providers recommend practitioners to use the common frameworks to construct their systems. These common practices also allow our attacker to reverse-engineer the frameworks offline and identify the lines of code to monitor with the Flush+Reload technique.
For example, AWS provides convenient deployment options for both PyTorch and TensorFlow: https://docs. aws.amazon.com/sagemaker/latest/dg/pytorch.html, and https://docs.aws.amazon.com/sagemaker/latest/dg/tf.html.
3.2 FLUSH+RELOAD MECHANISM
Flush+Reload allows an adversary to continually monitor a victim's instruction access patterns by observing the time taken to load them from memory. This technique is effective for extracting the computation flow of the victim's program when the attacker and victim share memory (i.e., a shared library or page deduplication (Bosman et al., 2016)). The attacker flushes specific lines of code in a shared DL framework from the co-located machine's cache hierarchy and then measures the amount of time it takes to reload the lines of code. If the victim invokes the monitored line of code, the instruction will be reloaded into the shared cache, and when the attacker reloads the instruction, the access to it will be noticeably faster. On the other hand, if the victim does not call the monitored line of code, the access to it will be slower because the instruction needs to be loaded from main memory (DRAM). By repeating this process, our attacker can tell when a victim has accessed a line of code.
3.3 OVERVIEW OF OUR ATTACK PROCEDURE
In Figure 1, we illustrate our attack procedure. We split the steps into two phases: the online phase and the offline phase. In the online phase (step (2)), the attacker needs co-location to monitor the computations from the victim's system. In the offline phase (steps (1), (3), (4), and (5)), the attacker does not require co-location with the victim.
(1) First, our attacker analyzes the open-source DL framework to identify the lines of code to monitor. The attacker monitors the first line of each function that corresponds to the start of a DL computation. (2) Next, the attacker spins up a co-located VM and launches the Flush+Reload attack to extract the trace of the victim system's function calls. As the trace does not depend on the input sample, we only need to extract a single trace from one full invocation of the victim system. (3) Since the raw observations with Flush+Reload are noisy, the attacker applies filtering to highlight the regularities of DL computations reflected in the trace. (4) To estimate the architectural parameters, e.g., the input/output channels, kernel size, or strides, our attacker creates lookup tables of timings and the number of matrix multiplications performed, by collecting traces from various parameter combinations. (5) Finally, using the victim's computational trace and the lookup tables for estimating architectural parameters, the attacker starts the reconstruction process to steal the victim's DL system (Section 4).
3.4 MONITORING THE TOY NETWORK COMPUTATIONS VIA FLUSH+RELOAD
Experimental Setup. We implement our attack on Ubuntu 18.04 running on a host machine equipped with the Intel E3-1245v6 3.7GHz processors (8 cores, 32GB memory and 8MB cache shared between cores). For step (1), we analyze two popular open-source DL frameworks, PyTorch and TensorFlow, and identify the list of functions to monitor (see Appendix C for the full list of functions). We leverage the Mastik toolkit (Yarom, 2016) to launch the Flush+Reload attack, and while a victim DL system is running on a VM, our attacker monitors the list of functions—step (2). For the reconstruction process, conducted offline after the extraction, we use Python v3.6 to implement the procedure.
https://www.python.org
ToyNet Results. In Figure 2, we demonstrate the extracted trace via Flush+Reload while ToyNet is processing an input. ToyNet is composed of one convolution followed by a batch-norm and one depthwise convolution followed by a batch-norm and a ReLU activation. The 1st convolution has the parameters (in, out, kernel, stride) as (3, 10, 3, 1), and the depthwise convolution’s parameters are (10, 10, 1, 1). The network has a skip connection that adds the intermediate output (from the 1st convolution) to the final output. During inference, we feed in an input with dimensions 3x32x32.
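For reference, a minimal PyTorch sketch of a ToyNet-like victim is given below. The layer parameters follow the description above; the padding choice and the exact point at which the skip connection branches are our assumptions, made so the tensor shapes of the addition line up.

import torch
import torch.nn as nn

class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 1st convolution (in=3, out=10, kernel=3, stride=1); padding=1 is our
        # assumption so the skip connection's shapes match.
        self.conv = nn.Conv2d(3, 10, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(10)
        # Depthwise convolution (in=10, out=10, kernel=1, stride=1).
        self.depthwise = nn.Conv2d(10, 10, kernel_size=1, stride=1, groups=10)
        self.bn2 = nn.BatchNorm2d(10)
        self.act = nn.ReLU6()

    def forward(self, x):
        skip = self.bn1(self.conv(x))               # intermediate output
        out = self.act(self.bn2(self.depthwise(skip)))
        return out + skip                           # skip connection (TensorAdd)

toy = ToyNet().eval()
with torch.no_grad():
    y = toy(torch.randn(1, 3, 32, 32))              # 3x32x32 input
print(y.shape)                                      # torch.Size([1, 10, 32, 32])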
In the middle panel of Figure 2, we also show the raw—noisy—trace from the Flush+Reload output. The trace only includes cache-hits where the attacker’s accesses to the lines of code are faster, i.e., when the victim invokes the function. Each element of the trace includes a timestamp and a function name. The name corresponds to the ToyNet layers, such as Conv2d and BatchNorm2d, and it also contains additional information such as the tensor (add) and the BLAS operations, e.g., GEMM(oncopy).
Our attacker filters the raw trace according to the regular patterns in the DL computation. For example, a long function call, e.g., Conv2d in the ToyNet trace, can appear multiple times in the trace, as the cache can hit multiple times during Flush+Reload. In this case, we condense the multiple occurrences into a single invocation using a heuristic based on how close the timestamps are. We also observe the matrix multiplications, such as GEMM(conv) and GEMM(oncopy), while the DL computation is being processed. We count the individual occurrences and sum them up based on the timestamps. After obtaining the processed trace (in the right panel), the attacker starts the reconstruction procedure.
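The filtering step can be sketched as follows; the 5,000-cycle merging threshold and the rule that GEMM hits are attributed to the preceding layer invocation are illustrative choices, not values taken from the paper.

def condense_trace(raw_trace, gap=5_000):
    """raw_trace: time-ordered list of (timestamp, name) cache-hit events."""
    condensed = []
    for ts, name in raw_trace:
        if name.startswith("GEMM"):
            if condensed:
                condensed[-1]["gemm"] += 1    # attribute GEMM hits to the
            continue                          # preceding layer invocation
        last = condensed[-1] if condensed else None
        if last and last["name"] == name and ts - last["end"] < gap:
            last["end"] = ts                  # same invocation, still running
        else:
            condensed.append({"name": name, "start": ts, "end": ts, "gemm": 0})
    return condensed

raw = [(100, "Conv2d"), (160, "Conv2d"), (300, "GEMM(oncopy)"),
       (340, "GEMM(oncopy)"), (9_000, "BatchNorm2d")]
print(condense_trace(raw))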
4 RECONSTRUCTING NOVEL DEEP LEARNING SYSTEMS
After processing the Flush+Reload trace, our attacker reconstructs the key components of the victim's DL system. In this process, the attacker aims to generate the candidate computational graphs of the victim system and to eliminate the incompatible candidates by estimating the correct parameter set for each computation. For instance, in our ToyNet example, the attacker wants to identify the computational order and the location of the start and end of a branch connection (computational graph). Also, the same attacker wants to estimate the parameters for each computation; for example, the input/output channels and the kernel size in the 1st Conv2d. In this small network that has one branch, there are only 10 candidate computational graphs; however, considering all possible combinations of parameters, this results in an intractable number of candidates. Prior work, in reconstruction, only considered generic architectures such as VGGs or ResNets with the unrealistic assumption that an attacker knows the architecture family (backbone); however, as our aim is to steal novel DL systems, we do not make this assumption. To overcome this problem, we design a reconstruction procedure, which we describe next.
Knowledge of Our Attacker in Reconstruction. Here, we consider our attacker knows what tensor operations and functions to monitor in the victim's open-source DL framework. These functions are model-independent; they correspond to architectural attributes designated by the deep learning framework (see Appendix C). We show that this knowledge is sufficient to reconstruct novel data pre-processing pipelines, such as MalConv, that are usually shallower than the network architectures.
To reconstruct the deeper network architectures automatically designed by NAS algorithms, we assume our attacker has some knowledge about the NAS search space—e.g., the NASNet search space (Zoph et al., 2018)—the victim's search process relies on. This knowledge includes the list of layers used and the fact that a set of layers (known as blocks) is repeatedly used, such as the Normal and Reduction Blocks in NASNet. We make this assumption because, from the sequence of computations observed via Flush+Reload, our attacker can easily identify a set of layers and the repetitions of the layers. However, we do not assume how each block is composed by using the layer observations directly; instead, we identify candidate blocks by using a sequence mining algorithm. We demonstrate that, under these assumptions, our attack reconstructs the ProxylessNAS-CPU in 12 CPU hours rather than running a NAS algorithm from scratch that takes 40k GPU hours.
4.1 OVERVIEW OF OUR RECONSTRUCTION PROCEDURE.
We first focus on the invariant rules in DL computations. For instance, there are unary operations and binary operations. The tensor addition used to implement a skip connection is a binary operation; thus, our attacker can supplement the reconstruction process by pruning the incompatible candidates. We also exploit the fact that computation time is proportional to the number of element-wise multiplications in a computation. In the ToyNet example, the time the 1st convolution takes (2 million cycles) is shorter than that of the 2nd depthwise convolution (3.668 million cycles); thus, our attacker further eliminates the candidates by comparing the possible parameters for a computation with her offline profiling data—the lookup table.
Our reconstruction procedure consists of two steps:
(1) Generation: The attacker can generate the candidate computational graphs from the Flush+Reload trace based on the invariant rules in DL computations. Using the rules, our attacker reduces the number of candidates significantly. (2) Elimination: Our attacker compares the time each computation takes with the profiling data and prunes the incompatible candidates. We estimate the parameters sequentially, starting from the input. When the output dimension from a candidate does not match the observation, we eliminate it.
Error Metrics. To quantify the error of our reconstruction result, we use two similarity metrics. First, we use the graph edit distance (GED) (Abu-Aisheh et al., 2015) to compare the reconstructed computational graph with that of the victim. Second, we use the ℓ1-distance to compute the error between the estimated architectural parameters and those in the victim system.
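For illustration, both metrics can be computed with off-the-shelf tools, assuming the graphs are represented as networkx DiGraphs with a "layer" attribute per node and the parameters as equal-length vectors; this is only a sketch of the metric computation, not the paper's evaluation code.

import networkx as nx
import numpy as np

def reconstruction_errors(g_victim, g_recovered, p_victim, p_recovered):
    # Graph edit distance between computational graphs (can be slow for large
    # graphs; fine for the small examples considered here).
    ged = nx.graph_edit_distance(
        g_victim, g_recovered,
        node_match=lambda a, b: a["layer"] == b["layer"])
    # L1 distance between the estimated and true parameter vectors.
    l1 = float(np.abs(np.asarray(p_victim) - np.asarray(p_recovered)).sum())
    return ged, l1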
Victims. We first reconstruct MalConv (Raff et al., 2018), a novel data pre-processing pipeline that converts a binary file into a specific format that a neural network can digest easily. Also, we show that our attacker can reconstruct the novel ProxylessNAS (Cai et al., 2019) architecture, which improves accuracy on the ImageNet classification with less computational cost on a CPU.
4.2 RECONSTRUCTING NOVEL PRE-PROCESSING PIPELINES
Here, we elaborate on the reconstruction process of the MalConv (Raff et al., 2018) data pre-processing pipeline. MalConv receives the raw bytes of .exe files and determines whether the file is malicious or not. The uniqueness of MalConv comes from the way that it treats the sequence of bytes: 1) Code instructions in a binary file are correlated spatially, but the correlation has discontinuities from function calls and jump commands that are difficult for sequence models, e.g., RNNs, to capture.
Note that ProxylessNAS starts its searching process from a backbone architecture such as NASNet; thus, even if the paper reported a search took 200 GPU hours, this number does not include the time spent searching a backbone architecture, i.e., the 40k GPU hours to find NASNet.
2) Also, each sequence is on the order of two million steps, which far exceeds the length of an input to any previous neural network classifier. MalConv tackles this problem by pre-processing the sequence of bytes (Figure 3). It first splits the upper four bits and the lower four bits (narrow operations) of each byte; this helps the network capture the locality of closer bytes and distant bytes. Next, the pipeline uses a one-dimensional convolution to extract such localities and performs the element-wise multiplication of the two outputs. Before feeding this information to the neural network, the pipeline uses max-pooling to reduce the training time caused by processing inputs with large dimensions. All these heuristics are examined manually (see Section 4 of the original paper); thus, our attacker can save time and effort by stealing the pipeline.
Generate Computational Graphs. The first step for our attacker is to reconstruct the computational graph candidates for the victim pipeline from the Flush+Reload trace. As we can see in the trace in Figure 3, the attacker cannot simply connect the components in the trace sequentially because of the branch connections, e.g., [7] * (multiply). Also, from this component, our attacker knows where a branch ends but cannot know where the branch started. We solve this problem by populating all possible candidates and pruning them later with the parameter estimation.
Our algorithm populates the candidate computational graphs; sample candidates it finds are shown in Appendix E. Our solution uses a recursive algorithm. Given a trace from Flush+Reload (T), we pop each computation t from the back and construct the list of candidates l. At a high level, the algorithm first traverses all the possible connections from the last computation to the first by using recursion. Then, when the base condition is met (i.e., the algorithm arrives at the first computation, Embeddings), we backtrack through the recursions to construct the list of candidate computational graphs. We focus on the computation type in this backtracking process; there are unary and binary computations. For the unary operations, we simply connect the current and preceding computations. However, for the binary operations, we split all the preceding computations into a set of two lists. Each set of two lists corresponds to a branch, and we continue backtracking for each branch and include all of the constructions in our results. At the end, we find 20 candidates.
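A simplified sketch of this generation step is shown below. It flattens the paper's recursive formulation into an enumeration: every node is wired to its predecessor, and each binary operation additionally receives a branch from some earlier node, with all admissible branch origins enumerated. The op names and the BINARY set are illustrative, not the paper's exact trace.

from itertools import product

BINARY = {"multiply", "add"}   # illustrative set of binary operations

def candidate_graphs(trace):
    """trace: list of op names in execution order; returns lists of edges."""
    choices = []
    for i, op in enumerate(trace):
        if op in BINARY and i >= 2:
            # The second operand of a binary op may branch from any node
            # that precedes its immediate predecessor.
            choices.append([(j, i) for j in range(i - 1)])
        else:
            choices.append([None])
    graphs = []
    for picks in product(*choices):
        edges = [(i - 1, i) for i in range(1, len(trace))]   # sequential wiring
        edges += [e for e in picks if e is not None]         # branch edges
        graphs.append(edges)
    return graphs

trace = ["Embedding", "Conv1d", "Conv1d", "Sigmoid", "multiply", "Linear"]
print(len(candidate_graphs(trace)))   # 3 possible origins for the multiply branch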
Figure 3: MalConv novel pre-processing pipeline and the processed trace.
Eliminate Candidates with Computational Parameters. Next, our attacker further prunes the candidates through the computational parameter estimation process. Most importantly, our attacker focuses on the fact that computation time depends on the size of the matrix multiplication. This enables our attacker to profile the computation time taken for a set of parameter combinations in advance. The attacker is able to perform this offline by taking advantage of the cloud infrastructure: the hardware and software stacks composing the cloud are consistent. In the MalConv reconstruction, we profile the timing of the convolution and linear operations. For the convolutions, we consider input/output channels {1, 2, 4, 8, 16, 32, 128, 256}, kernels {1, 3, 5, 7, 11, 100, 200, 500, 1k, 10k}, and strides {1, 2, 5, 10, 100, 200, 500, 1k, 10k}. For the linear layers, we use input {4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048} and output dimensions {1, 10, 16, 20, 32, 40, 100, 128, 256, 512, 1k, 1024, 2048}. Once our attacker has the timing profiles for these parameter combinations, the attacker defines the potential parameter sets for the convolutions and linear layers. Then, the attacker checks, for each candidate, whether the computational graph returns the correct output dimension (1,) for the input (8, 2000000). In this pruning process, there are other operations such as Sigmoid, * (multiply), transpose, narrow, and pooling. We apply universal rules for each case: 1) the Sigmoid and multiply do not change the input/output dimensions, 2) the transpose only swaps two dimensions of an input, 3) the narrow slices one chosen dimension, e.g., (8,2000000) to (4,1000000), so we consider all possible slices in checking, and 4) the pooling only requires us to estimate its window size, so we match this value to the stride of the preceding convolution. At the end of this parameter estimation, we narrow down to only one architecture with the correct set of computational parameters, i.e., 0% error.
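The dimension check that drives this pruning can be sketched as follows; the op encoding and parameter format are our own convention, and the shape rules simply mirror the universal rules listed above.

def propagate(shape, op):
    kind, params = op
    if kind in ("sigmoid", "multiply"):     # rule 1: dimensions unchanged
        return shape
    if kind == "transpose":                 # rule 2: swap the two dimensions
        return shape[::-1]
    if kind == "narrow":                    # rule 3: slice one chosen dimension
        dim, new_size = params
        return tuple(new_size if i == dim else s for i, s in enumerate(shape))
    if kind == "pool":                      # rule 4: window matches conv stride
        return (shape[0], shape[1] // params)
    if kind == "conv1d":                    # (out_ch, kernel, stride) candidate
        out_ch, k, s = params
        return (out_ch, (shape[1] - k) // s + 1)
    if kind == "linear":                    # candidate output dimension
        return (params,)
    raise ValueError(kind)

def survives(candidate, in_shape=(8, 2_000_000)):
    shape = in_shape
    for op in candidate:
        shape = propagate(shape, op)
    return shape == (1,)

candidate = [("conv1d", (128, 500, 500)), ("sigmoid", None),
             ("multiply", None), ("pool", 500), ("linear", 1)]
print(survives(candidate))   # True: output dimension (1,) as required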
4.3 RECONSTRUCTING NOVEL NETWORK ARCHITECTURES
Here, we show our attacker is able to steal a novel network architecture by describing the reconstruction process of ProxylessNAS-CPU (Cai et al., 2019), which improves the accuracy of an existing architecture, MobileNetV2 (Sandler et al., 2018), and also reduces the computation time. Indeed, the NAS search procedure warm-starts from an over-parameterized MobileNetV2 as a backbone; however, in our attack, we hypothesize our attacker is not aware of the backbone. Instead, we assume our attacker only knows the search space of MnasNet (Tan et al., 2019) (see Appendix D) where the authors come up with the MobileNetV2, as opposed to the recent attacks in Section 2.
Knowing the search space does not, however, reduce the amount of effort required of our attacker in reconstruction. The network architectures found by the NAS procedure are commonly wide and deep, and they include multiple branch connections; thus, our attacker would need to consider an exponential number of candidate computational graphs and computation parameters, which makes the attack infeasible. To tackle this issue, we focus on the NAS procedure—this process factorizes the entire architecture into blocks by their functions. For instance, NASNet (Zoph et al., 2018) is composed of normal cells (blocks) and reduction cells. Within each block, the process considers the architecture combinations that provide the optimal performance. Thus, we first identify the potential blocks before we initiate the process of reconstructing candidate computational graphs.
Identifying Candidate Blocks. We utilize a frequent subsequence mining (FSM) method to identify the blocks composing the ProxylessNAS-CPU architecture. Our FSM method is simple: we iterate over the Flush+Reload trace with fixed windows and count the occurrences of each subsequence. Since the attacker knows that, in the search space the victim uses, a maximum of nine computations are used to compose a block, we consider window sizes from one to nine. Once we count the number of occurrences of each subsequence (candidate blocks), we prune
https://aws.amazon.com/ec2/instance-types/
them based on the rules in the search space: 1) a Conv2d operation is followed by a BatchNorm, 2) a block with a DepthConv2d must end with a Conv2d and BatchNorm (for a depthwise separable convolution), 3) a branch connection cannot merge (add) in the middle of the block, and 4) we take the most frequent block in each window. In Table 2, we describe the 9 identified blocks. We then run the generation process of reconstructing candidate computational graphs with the blocks instead of using each computation in the trace. At the end, we have 180,224 candidate computational graphs.
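The windowed counting behind this FSM step is straightforward to sketch; the example trace and the min_count threshold are illustrative, not the actual ProxylessNAS-CPU trace.

from collections import Counter

def candidate_blocks(layer_trace, max_len=9, min_count=2):
    counts = Counter()
    for w in range(1, max_len + 1):                 # window sizes 1..9
        for i in range(len(layer_trace) - w + 1):
            counts[tuple(layer_trace[i:i + w])] += 1
    return [(seq, c) for seq, c in counts.most_common() if c >= min_count]

trace = ["Conv2d", "BatchNorm", "DepthConv2d", "Conv2d", "BatchNorm", "add"] * 4
for block, count in candidate_blocks(trace)[:5]:
    print(count, block)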
Eliminate Candidates with Computational Parameters. For each candidate composed of known blocks, our attacker estimates the computation parameters. However, the number of parameter combinations is also exponential; for example, within the search space, a Conv2d can have any number of input/output channels, kernel sizes {1, 3, 5}, and strides {1, 2}. Thus, we focus on the computation rules in a block. 1) We first find that a DepthConv2d can only have the same number of input and output channels. Also, the channel size can be identified by the number of GEMM(conv) operations. For instance, in Figure 4, the DepthConv2d has 143 GEMM(conv) invocations, which is close to the channel size. Since the operation commonly has an even number of channels, the attacker can easily reduce the candidates to 142 or 144. 2) We also know that the number of GEMM(oncopy) invocations is proportional to the matrix multiplication size in a Conv2d; thus, the attacker can compare the offline profiling results with the processed traces and estimate the parameters. For instance, the 1st Conv2d has 20 GEMM(oncopy) invocations, and we approximately have a set of input dimensions, e.g., (20–30, 112, 112), from the previous block estimation. Thus, our attacker only profiles the variations of input channels {20–30}, kernels {1, 3, 5}, and strides {1, 2}—60 cases in total—and checks if there is a match. Moreover, 3) the Conv2d after a DepthConv2d is the pointwise linear operation whose kernel and stride are one, which further reduces the attacker's effort. Our attacker runs this elimination process and finally narrows down to only one architecture with the correct set of computational parameters, i.e., 0% error.
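The channel-rounding heuristic mentioned for the DepthConv2d can be written out directly; the slack of one GEMM(conv) call is an assumption about measurement noise.

def channel_candidates(observed_gemm_conv, slack=1):
    lo, hi = observed_gemm_conv - slack, observed_gemm_conv + slack
    return [c for c in range(lo, hi + 1) if c % 2 == 0]   # even channel counts

print(channel_candidates(143))   # [142, 144]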
5 DISCUSSION
In this section, we discuss defense mechanisms that prevent our attacker from reconstructing the victim’s DL system with an exact match. Prior work on defenses against cache side-channel attacks proposed system-level solutions (Kim et al., 2012; Liu et al., 2016; Werner et al., 2019). However, applying them requires infrastructure-wide changes from cloud providers. Also, even if the infrastructure is resilient to cache side-channel attacks, an attacker can leverage other attack vectors to leak similar information. Thus, we focus on the defenses that can be implemented in DL frameworks.
We design our defense mechanisms to obfuscate what the attacker observes via cache side-channels by increasing the noise in computations supported by DL frameworks. We discuss four approaches that blend noise into components of a DL framework; however, these mechanisms introduce a computational overhead by performing additional operations. This highlights that defending against our attack is not trivial and efficient countermeasures require further research.
Padding Zeros to the Matrix Multiplication Operands. Our reconstruction algorithm estimates the computational parameters such as kernel sizes or strides based on the time taken for matrix multiplication. Hence, we consider increasing the size of operands randomly by padding zeros to
them. We keep the original sizes of the operands and, after the multiplication of the augmented tensors, we convert the resulting tensor into one of the correct dimensions by removing the extra elements. With the augmentation, our attacker finds it difficult to reconstruct the victim's DL system exactly by monitoring a single query. However, if our attacker can observe computations over multiple queries, the attacker can cancel out the noise and estimate the parameters correctly.
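A hedged sketch of this padding defense, written against plain torch matrix multiplication rather than the frameworks' internal GEMM paths, is given below; the maximum padding amount is an arbitrary illustrative choice.

import torch

def padded_matmul(a, b, max_pad=8):
    pad_m = int(torch.randint(0, max_pad + 1, (1,)))
    pad_k = int(torch.randint(0, max_pad + 1, (1,)))
    pad_n = int(torch.randint(0, max_pad + 1, (1,)))
    a_big = torch.zeros(a.shape[0] + pad_m, a.shape[1] + pad_k)
    b_big = torch.zeros(b.shape[0] + pad_k, b.shape[1] + pad_n)
    a_big[:a.shape[0], :a.shape[1]] = a        # original operands sit in the
    b_big[:b.shape[0], :b.shape[1]] = b        # top-left corner, rest is zero
    return (a_big @ b_big)[:a.shape[0], :b.shape[1]]   # trim to the true size

a, b = torch.randn(4, 6), torch.randn(6, 3)
print(torch.allclose(padded_matmul(a, b), a @ b, atol=1e-5))   # True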
Adding Null/Useless Network Operations. This reconstruction attack assumes all the computations observed in the Flush+Reload trace are used to compute the output of a DL system. Thus, a defender can modify the victim's architecture so that it includes identity layers or branches whose outputs are not used. We hypothesize a small number of null/useless operations will not increase the attacker's computational burden significantly; this addition only increases the time needed to reconstruct the victim's architecture by a few hours. If the defender includes an excessive amount of null/useless layers or branches, this can significantly increase the reconstruction time. However, this defense suffers from two issues: 1) it may still not make the reconstruction impossible, and 2) the victim also needs to perform the additional operations, which increases network evaluation time significantly.
Shuffling the Computation Order. We have seen in popular DL frameworks that, once a network architecture is defined, the computational order of performing operations is also invariant. We are able to shuffle the computation order of the victim’s DL system each time when the system processes an input. In particular, we can identify the dependency of operations in a victim’s DL system and compute the independent operations in a different order each time. This approach will make the observations from cache side-channels inconsistent, which results in the exponential number of candidate architectures that our attacker needs to consider. However, to compute the independent operations separately, the defender needs to store intermediate results in memory while processing an input; thus, this approach increases the space overhead of the DL computations.
Running Decoy Operations in Parallel. Lastly, we can make a DL framework run separate networks (decoy operations) in parallel on the same physical host. These networks obfuscate what our attacker will observe via Flush+Reload. Here, the attacker cannot reconstruct the victim architecture by monitoring a single query because the computational order does not reflect how the victim’s architecture is defined. However, if our attacker can observe the computations over multiple queries, the attacker can use the frequent sequence mining (FSM) that we used in the block identification to identify a repeated set of operations and can reconstruct the victim architecture. This defense also increases network evaluation time by running extra operations on the same machine.
6 CONCLUSIONS AND FUTURE WORK
This work presents an attack that reconstructs a victim’s novel DL system through the information leakage from a cache side-channel, Flush+Reload. We steal key components of the victim’s system: a novel pre-processing pipeline and a novel network architecture. Observing the DL computations and the time to complete each computation enables the attacker to populate all candidate computational graphs and prune them with our parameter estimation process. In experiments, we demonstrate the feasibility of this reconstruction attack by reconstructing MalConv, a novel pre-processing pipeline for malicious file detection, and ProxylessNAS-CPU, a novel architecture for the ImageNet classification optimized to run on CPUs. We do this with 0% error. As novel DL systems become trade secrets, our results highlight the demands for future work on countermeasures against model theft.
ACKNOWLEDGMENTS
We thank the anonymous reviewers for their valuable feedback. This research was partially supported by the Department of Defense, by NSF grants #CNS-1933033, #CNS-1840893, #CNS1453045 (CAREER), by a research partnership award from Cisco and by financial assistance award 70NANB15H328 from the U.S. Department of Commerce, National Institute of Standards and Technology. We would like to thank the NSF REU-CAAR program (NSF grant #CCF-1560193).
A APPLICABILITY TO GPUS
Our attack is not fundamentally different for GPUs. In most deep learning frameworks, when a network performs a computation, it invokes the same function implemented in C++ and the function decides whether the back-end computation can use GPUs or not. This practice maximizes the hardware compatibility of a framework; however, this also makes the framework vulnerable to our attacker who can still observe the common functions listed in Table 3 by monitoring the shared cache. On GPUs the timings would be different, so we would have to profile the computational times, e.g., the time taken for the matrix multiplication with various sizes of tensor operands. However, on both CPUs and GPUs, the computation time is proportional to the size of tensor operands, which enables our attacker to estimate the architecture parameters with timing observations.
B DL COMPUTATIONS MONITORED IN PYTORCH AND TENSORFLOW
Figure 5 describes the reconstruction process of a small network in both the PyTorch and TensorFlow frameworks. On the left, we have the ground truth of the ToyNet architecture, which represents an example of a possible residual block in a victim network. In the middle and right, we show the observations of an adversary monitoring both PyTorch and TensorFlow code. The first entry indicates the monitored function corresponding to the desired architectural attribute. The second entry indicates the timestamp at which the adversary observes these functions, and the last entry is the number of general matrix multiplication (GEMM) function calls for the given layer observation.
Naming conventions vary slightly between the two frameworks, but the information inferred is the same. The adversary attacking both networks sees function calls that correspond to architectural attributes in the same order: Conv2d, BatchNorm2d, Conv2d/DepthwiseConv, BatchNorm2d, ReLU6, and TensorAdd. PyTorch does not distinguish between Conv2d and DepthwiseConv, but as stated in Section 4.1, we can differentiate the layers by their timing data. Additionally, PyTorch and TensorFlow use different linear algebra libraries to perform matrix computation, so the implementations differ slightly. However, they both use variations on matrix multiplication algorithms that take into account system-level optimizations, such as cache size (e.g., Goto's algorithm). In both cases, we observe operations in nested iterations of these implementations and are able to monitor instructions that correspond to the size of the matrices being multiplied, giving an adversary the ability to estimate the parameters of the convolution layers.
To perform the estimations of these layer parameters, the adversary can profile candidates offline on similar hardware. They can then create a data set of candidate parameters for given observation ranges. For instance, the number of observed GEMM calls in the PyTorch example for the depthwise convolution layer tells the attacker that there are 10 output channels, and therefore also 10 output channels in the 1st convolution. Additionally, the observed GEMM calls for the 1st convolution layer give the candidate kernel sizes of 3 and 5. Likewise in TensorFlow, the observed instructions fit the candidate kernel sizes of 3 or 5 and 0-24 output channels. Therefore, these
Customization of operations in TensorFlow: https://www.tensorflow.org/guide/create_op
exploitable vulnerabilities exist independently of the specific deep learning framework a victim is using.
C LIST OF FUNCTIONS MONITORED VIA FLUSH+RELOAD
Table 3 shows the exact lines of code we monitor in the PyTorch and TensorFlow frameworks. We use PyTorch v1.2.0 and TensorFlow v1.14.0. In both frameworks, we are able to monitor a similar set of DL computations in the C++ native implementations. However, the back-end libraries supporting the matrix multiplications are different, i.e., PyTorch is compiled with OpenBLAS whereas TensorFlow uses Eigen and MKL-DNN. Even though the libraries are different, the multiplications are implemented using Goto's algorithm (Goto & Geijn, 2008). Therefore, we monitor the number of iterations of for-loops to estimate the overall size of a matrix multiplication.
D MNASNET SEARCH SPACE
Tan et al. (2019) utilize a hierarchical search space over six parameters: ConvOp, KernelSize, SERatio, SkipOp, FilterSize, and #Layers. They choose to partition a CNN into a known, finite set of blocks and then further divide these blocks into possibly repeated layers. The number of repeats per layer in a given block i is a searchable parameter Ni, which is bounded within ±1 of the number of layers in MobileNetV2 on which block i is based. These layers are further divided into three possible
https://github.com/pytorch/pytorch/commit/8554416a199c4cec01c60c7015d8301d2bb39b64
https://github.com/tensorflow/tensorflow/commit/87989f69597d6b2d60de8f112e1e3cea23be7298
network layers (ConvOp): regular convolution, depthwise convolution, or mobile inverted bottleneck convolution. Additionally, the network layer parameters can vary. These parameters include the convolution kernel size (KernelSize), the squeeze-and-excitation ratio (SERatio), a possible skip op (SkipOp), and the output filter size (FilterSize). The squeeze-and-excitation ratio (SERatio) of a given layer varies between 0 and 0.025; the convolution kernel size varies between 3 and 5; the skip op is either pooling, identity residual, or no skip; and the filter size varies between 0.75, 1.0, and 1.25 times the filter size of the corresponding block in MobileNetV2. Overall, this gives a claimed typical search space size of 10^13 possibilities with 5 blocks, 3 average layers per block, and 432 options for the sub search space of each block. This compares to the per-layer approach with the same parameters, which has a search space size of 10^39.
E SEARCHING CANDIDATE COMPUTATIONAL GRAPHS | 1. What is the focus and contribution of the paper regarding machine learning pipeline and network architecture reconstruction?
2. What are the strengths of the proposed approach, particularly in terms of its ability to reconstruct complex pipelines and architectures?
3. What are the weaknesses of the paper, especially regarding the attacker's capabilities and potential defenses against the attack?
4. Do you have any questions or concerns about the choice of ProxylessNas-CPU for evaluation, and why it was chosen over other NAS architectures like Mnas and Enas? | Review | Review
This work proposed a method to reconstruct machine learning pipelines and network architectures using a cache side-channel attack. It is based on a previously proposed method, Flush+Reload, that generates the raw trace of function calls. Then the authors applied several techniques to rebuild the computational graph from the raw traces. The proposed method is used to reconstruct MalConv, which is a data pre-processing pipeline for malware detection, and ProxylessNAS, which is a network architecture obtained by NAS.
Overall, the paper is well-written and easy to read. The problem of stealing machine learning pipelines/architectures is interesting and important, since it enables an attacker to actually know the private networks that are being used for prediction. Therefore, I think this is a promising direction for future work.
I hope the authors can address my concerns as follows:
Q1: What is the knowledge of the attacker? The authors should be explicit in summarizing the detailed search space of the attacker. Currently I found it very hard to understand the capability of the attacker. This is important in evaluating this work.
Q2: Can the authors add some discussion on how to defend against the proposed attack? For example, one could add some null/useless operations during execution to make the reconstruction process harder.
Q3: I am curious why the authors chose ProxylessNAS-CPU for evaluation. There are a bunch of other architectures found by NAS, e.g., MnasNet, ENAS. |
ICLR | Title
How to 0wn the NAS in Your Spare Time
Abstract
New data processing pipelines and novel network architectures increasingly drive the success of deep learning. In consequence, the industry considers top-performing architectures as intellectual property and devotes considerable computational resources to discovering such architectures through neural architecture search (NAS). This provides an incentive for adversaries to steal these novel architectures; when used in the cloud, to provide Machine Learning as a Service (MLaaS), the adversaries also have an opportunity to reconstruct the architectures by exploiting a range of hardware side channels. However, it is challenging to reconstruct novel architectures and pipelines without knowing the computational graph (e.g., the layers, branches or skip connections), the architectural parameters (e.g., the number of filters in a convolutional layer) or the specific pre-processing steps (e.g., embeddings). In this paper, we design an algorithm that reconstructs the key components of a novel deep learning system by exploiting a small amount of information leakage from a cache side-channel attack, Flush+Reload. We use Flush+Reload to infer the trace of computations and the timing for each computation. Our algorithm then generates candidate computational graphs from the trace and eliminates incompatible candidates through a parameter estimation process. We implement our algorithm in PyTorch and TensorFlow. We demonstrate experimentally that we can reconstruct MalConv, a novel data pre-processing pipeline for malware detection, and ProxylessNAS-CPU, a novel network architecture for the ImageNet classification optimized to run on CPUs, without knowing the architecture family. In both cases, we achieve 0% error. These results suggest hardware side channels are a practical attack vector against MLaaS, and more efforts should be devoted to understanding their impact on the security of deep learning systems.
1 INTRODUCTION
To continue outperforming state-of-the-art results, research in deep learning (DL) has shifted from manually engineering features to engineering DL systems, including novel data pre-processing pipelines (Raff et al., 2018; Wang et al., 2019) and novel neural architectures (Cai et al., 2019; Zoph et al., 2018). For example, a recent malware detection system MalConv, with a manually designed pipeline that combines embeddings and convolutions, achieves 6% better detection rate over previous state-of-the-art technique without pre-processing (Raff et al., 2018). In addition to designing data pre-processing pipelines, other research efforts focus on neural architecture search (NAS)—a method to automatically generate novel architectures that are faster, more accurate and more compact. For instance, the recent work of ProxylessNAS (Cai et al., 2019) can generate a novel architecture with 10% less error rate and 5x fewer parameters than previous state-of-the-art generic architecture. As a result, in the industry such novel DL systems are kept as trade secrets or intellectual property as they give their owners a competitive edge (Christian & Vanhoucke, 2017).
These novel DL systems are usually costly to obtain: generating the NASNet architectures (Zoph et al., 2018) takes almost 40K GPU hours and the MalConv authors had to test a large number of failed designs in the process of finding a successful architecture. As a result, an adversary who wishes to have the benefits of such DL systems without incurring the costs has an incentive to steal them. Compared to stealing a trained model (including all the weights), stealing the architectural
This work was done when Michael Davinroy was a research intern at the Maryland Cybersecurity Center.
details that make the victim DL system novel provides the benefit that the new architectures and pipelines are usually applicable to multiple tasks. Training new DL systems based on these stolen details still provides the benefits, even when the training data is different. After obtaining these details, an attacker can train a functioning model, even on a different data set, and still benefit from the stolen DL system (So et al., 2019; Wang et al., 2019). Further, against a novel system, stealing its architectural details increases the reliability of black-box poisoning and evasion attacks (Demontis et al., 2019). Moreover, stealing leads to threats such as Camouflage attacks (Xiao et al., 2019) that trigger misclassifications by exploiting the image scaling algorithms that are common in DNN pre-processing pipelines.
The emerging Machine-Learning-as-a-Service (MLaaS) model that offers DL computation tools in the cloud makes remote hardware side-channel attacks a practical vector for stealing DL systems (Liu et al., 2015). Unlike prior stealing attacks, these attacks do not require physical proximity to the hardware that runs the system (Batina et al., 2019; Hua et al., 2018) or direct query access to train an approximate model (Tramèr et al., 2016). Cache side-channel attacks have especially been shown as practical in cloud computing for stealing sensitive information, such as cryptographic keys (Liu et al., 2015). Cache side-channel attacks are ubiquitous and difficult to defeat as they are inherent to the microarchitectural design of modern CPUs (Werner et al., 2019).
In this paper, considering the incentives to steal a novel DL system and the applicability of cache side-channel attacks in modern DL settings, we design a practical attack to steal novel DL systems by leveraging only the cache side-channel leakage. Simulating a common cloud computing scenario, our attacker has a co-located VM on the same host machine as the victim DL system, and shares the last-level cache with the victim (Liu et al., 2015). As a result, even though the VMs are running on separate processor cores, the attacker can monitor the cache accesses a DL framework—PyTorch or TensorFlow—makes while the victim system is running (Liu et al., 2015).
The first step of our attack is launching a cache side-channel attack, Flush+Reload (Yarom & Falkner, 2014), to extract a single trace of victim’s function calls (Section 3). This trace corresponds to the execution of specific network operations a DL framework performs, e.g., convolutions or batch-normalizations, while processing an input sample. However, the trace has little information about the computational graph, e.g., the layers, branches or skip connections, or the architectural parameters, e.g., the number of filters in a convolutional layer. The limited prior work on side-channel attacks against DL systems assumed knowledge of the architecture family of the victim DNN (Yan et al., 2018; Duddu et al., 2018); therefore, these attacks are only able to extract variants of generic architectures, such as VGG (Simonyan & Zisserman, 2015) or ResNet (He et al., 2016). To overcome this challenge, we also extract the approximate time each DL operation takes, in addition to the trace, and we leverage this information to estimate the architectural parameters. This enables us to develop a reconstruction algorithm that generates a set of candidate graphs given the trace and eliminates the incompatible candidates given the parameters (Section 4). We apply our technique to two exemplar DL systems: the MalConv data pre-processing pipeline and a novel neural architecture produced by ProxylessNAS.
Contributions. We design an algorithm that reconstructs novel DL systems only by extracting cache side-channel information that leaks DL computations, using the Flush+Reload attack. We show that Flush+Reload reliably extracts the trace of computations and exposes the time each computational step takes in a practical cloud scenario. Using the extracted information, our reconstruction algorithm estimates the computational graph and the architectural parameters.
We demonstrate that our attacker can reconstruct a novel network architecture found by a NAS process (ProxylessNAS) and a novel manually designed data pre-processing pipeline (MalConv) with no reconstruction error.
We demonstrate the threat of practical stealing attacks against DL by exposing that the vulnerability is shared across common DL frameworks, PyTorch and TensorFlow.
2 BACKGROUND
Here, we discuss prior efforts in both crafting and stealing network architectures. There is a growing interest in crafting novel DL systems as they significantly outperform their generic counterparts. The
immense effort and computational costs of crafting them, however, motivates the adversaries to steal them.
Effort to Design Deep Learning Systems. Creating deep learning systems traditionally takes the form of human design through expert knowledge and experience. Some problems require novel designs to manipulate the input in a domain-specific way that DNNs can process more effectively. For example, the MalConv malware detection system (Raff et al., 2018) uses a manually designed pre-processing pipeline that can digest raw executable files as a whole. Pseudo LIDAR (Wang et al., 2019), by pre-processing the output of a simple camera sensor into a LIDAR-like representation, achieves four times better object detection accuracy than the previous state-of-the-art technique. Moreover, recent work also focuses on automatically generating optimal architectures via neural architecture search (NAS). For example, reinforcement learning (Zoph & Le, 2016) and gradient-based approaches (Cai et al., 2019) have been proposed for learning to generate optimal architectures. Even though NAS procedures have been shown to produce more accurate, more compact and faster neural networks, the computational cost of the search can be an order of magnitude higher than training a generic architecture (Zoph et al., 2018).
Effort to Steal Deep Learning Systems. Prior work on stealing DNN systems focuses on two main threat models based on whether the attacker has physical access to the victim’s hardware. Physical access attacks have been proposed against hardware accelerators and they rely on precise timing measurements (Hua et al., 2018) or electromagnetic emanations (Batina et al., 2019). These attacks are not applicable in the cloud setting we consider. The remote attacks that are applicable in the cloud setting, on the other hand, have the limitation of requiring precise measurements that are impractical in the cloud (Duddu et al., 2018). Further, the attack without this limitation (Hong et al., 2018) requires the attacker to know the family the target architecture comes from; thus, it cannot steal novel architectures. In our work, we design an attack to reconstruct novel DL systems by utilizing a practical cache side-channel attack in the cloud setting.
3 EXTRACTING THE SEQUENCE OF COMPUTATIONS VIA FLUSH+RELOAD
3.1 THREAT MODEL
We consider an attacker who aims to steal the key components in a novel DL system, i.e., a novel pre-processing pipeline or a novel network architecture. We first launch a Flush+Reload (Yarom & Falkner, 2014) attack to extract cache side-channel information leaked by DL computation. Our target setting is a cloud environment, where the victim’s DL system is deployed inside a VM—or a container—to serve the requests of external users. Flush+Reload, in this setting, is known to be a practical side-channel attack that requires only weak attacker capabilities (Liu et al., 2015). Further, as in MLaaS products in the cloud, the victim uses popular open-source DL frameworks, such as PyTorch (Benoit Steiner, 2019) or TensorFlow (Abadi et al., 2016).
Capabilities. We consider an attacker that owns a co-located VM—or a container—in the same physical host machine as the victim’s system. Prior work has shown that spinning-up the co-located VM in the third-party cloud computing services does not require sophisticated techniques (Ristenpart et al., 2009; Zhang et al., 2011; Bates et al., 2012; Kohno et al., 2005; Varadarajan et al., 2015). Due to the co-location, the last-level cache (L3 cache) in the physical host is shared between multiple cores where the attacker’s and victim’s processes are; thus, our attacker can monitor the victim’s computations leaked at the L3 cache. We also note that, even if the victim uses GPUs, our attacker can still observe the same computations used for CPUs via cache side-channels (see Appendix A).
Knowledge. We consider our attacker and the victim use the same version of the same open-source DL framework. This is realistic, in MLaaS scenarios such as AWS SageMaker or Google Cloud’s AutoML, as cloud providers recommend practitioners to use the common frameworks to construct their systems. These common practices also allow our attacker to reverse-engineer the frameworks offline and identify the lines of code to monitor with the Flush+Reload technique.
For example, AWS provides convenient deployment options for both PyTorch and TensorFlow: https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html and https://docs.aws.amazon.com/sagemaker/latest/dg/tf.html.
3.2 FLUSH+RELOAD MECHANISM
Flush+Reload allows an adversary to continually monitor the victim’s instruction access patterns by observing the time taken to load them from memory. This technique is effective for extracting the computation flow of the victim’s program when the attacker and victim share memory (i.e., a shared library or page deduplication (Bosman et al., 2016)). The attacker flushes specific lines of code in a shared DL framework from the co-located machine’s cache hierarchy and then measures the amount of time it takes to reload the lines of code. If the victim invokes the monitored line of code, the instruction will be reloaded into the shared cache, and when the attacker reloads the instruction, the access to it will be noticeably faster. On the other hand, if the victim does not call the monitored line of code, the access to it will be slower because the instruction needs to be loaded from main memory (DRAM). By repeating this process, our attacker can tell when a victim has accessed a line of code.
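To make the timing-based inference concrete, the sketch below shows how raw reload latencies could be turned into "the victim called this line" events. This is an illustrative post-processing step written by us, not part of the Mastik toolkit; the threshold is a hypothetical, machine-specific constant that an attacker would calibrate offline.

```python
# Hypothetical post-processing of Flush+Reload reload latencies (in CPU cycles).
# The probing loop itself is typically implemented in C (e.g., with Mastik); here we only
# sketch how raw measurements become "victim accessed this line of code" events.

CACHE_HIT_THRESHOLD = 200  # cycles; assumed value, must be calibrated per target host

def to_events(samples):
    """samples: list of (timestamp, function_name, reload_latency_cycles)."""
    events = []
    for timestamp, func, latency in samples:
        if latency < CACHE_HIT_THRESHOLD:   # fast reload => the victim touched the line
            events.append((timestamp, func))
        # slow reload => the line came from DRAM, i.e., the victim did not call it
    return events

# Example: only the Conv2d probe at t=1020 is reported as a victim access.
print(to_events([(1000, "Conv2d", 350), (1020, "Conv2d", 90), (1040, "BatchNorm2d", 330)]))
```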
3.3 OVERVIEW OF OUR ATTACK PROCEDURE
In Figure 1, we illustrate our attack procedure. We split the steps into two phases: the online phase and the offline phase. In the online phase (step (2)), the attacker needs co-location to monitor the computations from the victim’s system. In the offline phase (steps (1), (3), (4), and (5)), the attacker does not require co-location with the victim.
(1) First, our attacker analyzes the open-source DL framework to identify the lines of code to monitor. The attacker monitors the first line of each function that corresponds to the start of a DL computation. (2) Next, the attacker spins up a co-located VM and launches the Flush+Reload attack to extract the trace of the victim system’s function calls. As the trace does not depend on the input sample, we only require a single trace from one full invocation of the victim system. (3) Since the raw observations from Flush+Reload are noisy, the attacker applies filtering to highlight the regularities of DL computations reflected in the trace. (4) To estimate the architectural parameters, e.g., the input/output channels, kernel size, or strides, our attacker creates lookup tables of timings and the number of matrix multiplications performed, by collecting traces from various parameter combinations. (5) Finally, using the victim’s computational trace and the lookup tables for estimating architectural parameters, the attacker starts the reconstruction process to steal the victim’s DL system (Sec 4).
3.4 MONITORING THE TOY NETWORK COMPUTATIONS VIA FLUSH+RELOAD
Experimental Setup. We implement our attack on Ubuntu 18.04 running on a host machine equipped with an Intel E3-1245v6 3.7GHz processor (8 cores, 32GB memory and 8MB cache shared between cores). For step (1), we analyze two popular open-source DL frameworks, PyTorch and TensorFlow, and identify the list of functions to monitor (see Appendix C for the full list of functions). We leverage the Mastik toolkit (Yarom, 2016) to launch the Flush+Reload attack, and while a victim DL system is running on a VM, our attacker monitors the list of functions—step (2). For the reconstruction process, conducted offline after the extraction, we use Python v3.6 to implement the procedure.
https://www.python.org
ToyNet Results. In Figure 2, we demonstrate the extracted trace via Flush+Reload while ToyNet is processing an input. ToyNet is composed of one convolution followed by a batch-norm and one depthwise convolution followed by a batch-norm and a ReLU activation. The 1st convolution has the parameters (in, out, kernel, stride) as (3, 10, 3, 1), and the depthwise convolution’s parameters are (10, 10, 1, 1). The network has a skip connection that adds the intermediate output (from the 1st convolution) to the final output. During inference, we feed in an input with dimensions 3x32x32.
In the middle panel of Figure 2, we also show the raw—noisy—trace from the Flush+Reload output. The trace only includes cache-hits where the attacker’s accesses to the lines of code are faster, i.e., when the victim invokes the function. Each element of the trace includes a timestamp and a function name. The name corresponds to the ToyNet layers, such as Conv2d and BatchNorm2d, and it also contains additional information such as the tensor (add) and the BLAS operations, e.g., GEMM(oncopy).
Our attacker filters the raw trace according to the regular patterns in the DL computation. For example, a long function call, e.g., Conv2d in the ToyNet trace, can appear multiple times in the trace, as the cache can hit multiple times during Flush+Reload. In this case, we condense the multiple occurrences into a single invocation using a heuristic based on how close the timestamps are. We also observe the matrix multiplications such as GEMM(conv) and GEMM(oncopy) while the DL computation is being processed. We count the individual occurrences and sum them up based on the timestamps. After obtaining the processed trace (in the right panel), the attacker starts the reconstruction procedure.
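A minimal sketch of this filtering heuristic is given below; the record format and the timestamp gap used to condense repeated cache hits are our assumptions and would be tuned to the actual traces.

```python
# Minimal sketch of the trace-filtering step, under assumed record formats and thresholds.
# raw_trace: list of (timestamp, name) where name is a layer function (e.g., "Conv2d")
# or a BLAS kernel (e.g., "GEMM(oncopy)"). All values below are illustrative.

GAP = 5000  # assumed timestamp gap (cycles) separating two distinct invocations

def filter_trace(raw_trace):
    layers = []                              # condensed invocations: [timestamp, name, gemm_count]
    for ts, name in raw_trace:
        if name.startswith("GEMM"):
            if layers:                       # attribute the GEMM call to the current layer
                layers[-1][2] += 1
            continue
        if layers and layers[-1][1] == name and ts - layers[-1][0] < GAP:
            continue                         # same long call hit multiple times -> condense
        layers.append([ts, name, 0])
    return layers

raw = [(0, "Conv2d"), (100, "Conv2d"), (150, "GEMM(oncopy)"), (160, "GEMM(oncopy)"),
       (9000, "BatchNorm2d")]
print(filter_trace(raw))   # -> [[0, 'Conv2d', 2], [9000, 'BatchNorm2d', 0]]
```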
4 RECONSTRUCTING NOVEL DEEP LEARNING SYSTEMS
After processing the Flush+Reload trace, our attacker reconstructs the key components of the victim’s DL system. In this process, the attacker aims to generate the candidate computational graphs of the victim system and to eliminate the incompatible candidates by estimating the correct parameter set for each computation. For instance, in our ToyNet example, the attacker wants to identify the computational order and the location of the start and end of a branch connection (computational graph). Also, the same attacker wants to estimate the parameters for each computation; for example, the input/output channels and the kernel size of the 1st Conv2d. In this small network that has one branch, there are only 10 candidate computational graphs; however, considering all possible combinations of parameters, this will result in an intractable number of candidates. Prior work, in reconstruction, only considered generic architectures such as VGGs or ResNets with the unrealistic assumption that an attacker knows the architecture family (backbone); however, as our aim is to steal novel DL systems, we do not make this assumption. To overcome this problem, we design a reconstruction procedure, which we describe next.
Knowledge of Our Attacker in Reconstruction. Here, we consider our attacker knows what tensor operations and functions to monitor in the victim’s open-source DL framework. These functions are model-independent; they correspond to architectural attributes designated by the deep learning framework (see Appendix C). We show that this knowledge is sufficient to reconstruct novel data pre-processing pipelines, such as MalConv, that are usually shallower than the network architectures.
To reconstruct the deeper network architectures automatically designed by NAS algorithms, we assume our attacker has some knowledge about the NAS search space—e.g., the NASNet search space (Zoph et al., 2018)—the victim’s search process relies on. This knowledge includes the list of layers used and the fact that a set of layers (known as blocks) are repeatedly used, such as Normal and Reduction Blocks in NASNet. We make this assumption because, from the sequence of computations observed via Flush+Reload, our attacker can easily identify a set of layers and the repetitions of the layers. However, we do not assume how each block is composed by using the layer observations directly; instead, we identify candidate blocks by using a sequence mining algorithm. We demonstrate that, under these assumptions, our attack reconstructs ProxylessNAS-CPU in 12 CPU hours rather than running a NAS algorithm from scratch, which takes 40k GPU hours.
4.1 OVERVIEW OF OUR RECONSTRUCTION PROCEDURE.
We first focus on invariant rules in DL computations. For instance, there are unary operations and binary operations. The tensor addition used to implement a skip connection is a binary operation; thus, our attacker can supplement the reconstruction process by pruning the incompatible candidates. We also exploit the fact that computation time is proportional to the number of element-wise multiplications in a computation. In the ToyNet example, the time the 1st convolution takes (2 million cycles) is shorter than that of the 2nd depthwise convolution (3.668 million cycles); thus, our attacker further eliminates the candidates by comparing the possible parameters for a computation with her offline profiling data—the lookup table.
Our reconstruction procedures consist of two steps:
(1) Generation: The attacker generates the candidate computational graphs from the Flush+Reload trace based on the invariant rules in DL computations. Using the rules, our attacker reduces the number of candidates significantly. (2) Elimination: Our attacker compares the time each computation takes with the profiling data and prunes the incompatible candidates. We estimate the parameters sequentially, starting from the input. When the output dimension of a candidate does not match the observation, we eliminate the candidate.
Error Metrics. To quantify the error of our reconstruction result, we use two similarity metrics. First, we use the graph edit distance (GED) (Abu-Aisheh et al., 2015) to compare the reconstructed computational graph with that of the victim. Second, we use the ℓ1-distance to compute the error between the estimated architectural parameters and those in the victim system.
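For illustration, the two metrics can be computed roughly as follows; the toy graphs and parameter vectors are fabricated, and a faithful comparison would also supply a node-matching function over layer types.

```python
# Toy illustration of the two reconstruction-error metrics (not the paper's evaluation code).
import networkx as nx

def reconstruction_errors(g_victim, g_recon, params_victim, params_recon):
    # graph edit distance; by default node labels are not compared, so pass node_match
    # for a stricter comparison in practice
    ged = nx.graph_edit_distance(g_victim, g_recon)
    l1 = sum(abs(a - b) for a, b in zip(params_victim, params_recon))   # l1 parameter error
    return ged, l1

victim = nx.DiGraph([("Conv2d", "BatchNorm2d"), ("BatchNorm2d", "ReLU")])
recon = nx.DiGraph([("Conv2d", "BatchNorm2d"), ("BatchNorm2d", "ReLU")])
print(reconstruction_errors(victim, recon, [3, 10, 3, 1], [3, 10, 3, 1]))  # 0 for an exact match
```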
Victims. We first reconstruct MalConv (Raff et al., 2018), a novel data pre-processing pipeline that converts a binary file into a specific format that a neural network can digest easily. Also, we show that our attacker can reconstruct the novel ProxylessNAS (Cai et al., 2019) architecture, which achieves improved accuracy on ImageNet classification with less computational cost on a CPU.
4.2 RECONSTRUCTING NOVEL PRE-PROCESSING PIPELINES
Here, we elaborate on the reconstruction process of the MalConv (Raff et al., 2018) data pre-processing pipeline. MalConv receives the raw bytes of .exe files and determines whether the file is malicious or not. The uniqueness of MalConv comes from the way it treats the sequence of bytes: 1) Code instructions in a binary file are correlated spatially, but the correlation has discontinuities from function calls and jump commands, which are difficult to capture with sequence models, e.g., RNNs.
Note that ProxylessNAS starts its searching process from a backbone architecture such as NASNet; thus, even if the paper reported a search took 200 GPU hours, this number does not include the time spent searching a backbone architecture, i.e., the 40k GPU hours to find NASNet.
2) Also, each sequence has on the order of two million steps, which far exceeds the length of an input to any previous neural network classifier. MalConv tackles this problem by pre-processing the sequence of bytes (Figure 3). It first splits the upper four bits and the lower four bits (narrow operations) of each byte; this helps the network capture the locality of closer bytes and distant bytes. Next, the pipeline uses one-dimensional convolutions to extract such localities and performs an element-wise multiplication of the two outputs. Before feeding this information to the neural network, the pipeline uses max-pooling to reduce the training time caused by processing inputs with large dimensions. All these heuristics are examined manually (see Section 4 of the original paper); thus, our attacker can save time and effort by stealing the pipeline.
Generate Computational Graphs. The first step of our attacker is to reconstruct the computational graph candidates for the victim pipeline from the Flush+Reload trace. As we can see in the trace in Figure 3, the attacker cannot simply connect the components in the trace sequentially because of branch connections, e.g., [7] * (multiply). From this component, our attacker knows where a branch ends but cannot know where the branch started. We solve this problem by populating all possible candidates and pruning them later with the parameter estimation.
Our algorithm populates the candidate computational graphs; sample candidates are shown in Appendix E. Our solution uses a recursive algorithm. Given a trace from Flush+Reload (T), we pop each computation t from the back and construct the list of candidates l. At a high level, the algorithm first traverses all the possible connections, starting from the last computation to the first, by using recursion. Then, when the base condition is met (i.e., the algorithm arrives at the first computation, Embeddings), we backtrack the recursions to construct the list of candidate computational graphs. We focus on the computation type in this backtracking process; there are unary and binary computations. For the unary operations, we simply connect the current and preceding computations. However, for the binary operations, we split all the preceding computations into a set of two lists. Each set of two lists corresponds to a branch, and we continue backtracking for each branch and include all of the constructions in our results. In the end, we find 20 candidates.
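The sketch below gives a simplified version of this recursion; the trace encoding and the set of binary operations are assumptions made for illustration, and the full algorithm in Appendix E additionally enforces that the two branches rejoin a shared predecessor.

```python
# Simplified sketch of candidate-graph generation from a computation trace.
# Each candidate is a nested tuple; binary ops close a branch. The op-arity table
# and the trace encoding below are illustrative assumptions.

BINARY_OPS = {"multiply", "add"}

def candidates(trace):
    if len(trace) == 1:
        return [trace[0]]                      # base case: the first computation (e.g., Embedding)
    *prefix, last = trace
    results = []
    if last in BINARY_OPS:
        # try every way of splitting the preceding computations into two branches
        for split in range(1, len(prefix)):
            left, right = prefix[:split], prefix[split:]
            for lc in candidates(left):
                for rc in candidates(right):
                    results.append((last, lc, rc))
    else:
        for sub in candidates(prefix):
            results.append((last, sub))        # unary op: connect to the preceding computation
    return results

trace = ["Embedding", "Conv1d", "Conv1d", "Sigmoid", "multiply"]
print(len(candidates(trace)))                  # 3 candidate graphs for this toy trace
```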
[Figure 3: the MalConv novel pre-processing pipeline and its processed trace]
Eliminate Candidates with Computational Parameters. Next, our attacker further prunes the candidates based on the computational parameter estimation process. Our attacker, most importantly, focuses on the fact that computation time is dependent on the size of the matrix multiplication. This enables our attacker to profile the computational time taken for a set of parameter combinations in advance. The attacker is able to perform this offline by taking advantage of cloud infrastructure: the hardware and software stacks composing the cloud are consistent. In the MalConv reconstruction, we profile the timing of the convolution and linear operations. For the convolutions, we consider input/output channels {1, 2, 4, 8, 16, 32, 128, 256}, kernels {1, 3, 5, 7, 11, 100, 200, 500, 1k, 10k}, and strides {1, 2, 5, 10, 100, 200, 500, 1k, 10k}. For the linear layers, we use input {4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048} and output dimensions {1, 10, 16, 20, 32, 40, 100, 128, 256, 512, 1k, 1024, 2048}. Once our attacker has the timing profiles for these parameter combinations, the attacker defines the potential parameter sets for the convolutions and linear layers. Then, the attacker checks, in each candidate, whether the computational graph returns the correct output dimension (1,) for the input (8, 2000000). In this pruning process, there are other operations such as Sigmoid, * (multiply), transpose, narrow, or pooling. We apply universal rules for each case: 1) the Sigmoid and multiply do not change the input/output dimensions, 2) the transpose only swaps two dimensions of an input, 3) the narrow slices one chosen dimension, e.g., (8, 2000000) to (4, 1000000), so we consider all possible slices during checking, and 4) the pooling only requires us to estimate its window size, which we match to the stride of the preceding convolution. At the end of this parameter estimation, we narrow down to only one architecture with the correct set of computational parameters, i.e., 0% error.
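The shape-propagation check used in this pruning step could be implemented roughly as follows; the candidate encoding and the handful of supported operations are our simplifications of the rules above.

```python
# Rough sketch of pruning candidates by propagating tensor shapes through a chain of ops.
# Only a few operations are modeled; the parameter choices come from the offline profile.

def conv1d_out(length, kernel, stride):
    return (length - kernel) // stride + 1

def propagate(shape, op, params):
    c, t = shape
    if op == "Conv1d":
        in_ch, out_ch, kernel, stride = params
        return (out_ch, conv1d_out(t, kernel, stride)) if in_ch == c else None
    if op in ("Sigmoid", "multiply"):
        return shape                        # element-wise: dimensions unchanged
    if op == "MaxPool1d":
        (window,) = params
        return (c, t // window)
    return None                             # unsupported op in this toy model

def is_compatible(chain, in_shape, out_shape):
    shape = in_shape
    for op, params in chain:
        shape = propagate(shape, op, params)
        if shape is None:
            return False
    return shape == out_shape

chain = [("Conv1d", (4, 128, 500, 500)), ("Sigmoid", ()), ("MaxPool1d", (4,))]
print(is_compatible(chain, (4, 1000000), (128, 500)))   # True: this parameter set survives
```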
4.3 RECONSTRUCTING NOVEL NETWORK ARCHITECTURES
Here, we show our attacker is able to steal a novel network architecture by describing the reconstruction process of ProxylessNAS-CPU (Cai et al., 2019), which improves the accuracy of an existing architecture, MobileNetV2 (Sandler et al., 2018), while also reducing the computation time. Indeed, the NAS search procedure warm-starts from an over-parameterized MobileNetV2 as a backbone; however, in our attack, we assume our attacker is not aware of the backbone. Instead, we assume our attacker only knows the search space of MnasNet (Tan et al., 2019) (see Appendix D), in which the authors came up with MobileNetV2, as opposed to the recent attacks discussed in Sec 2.
Knowing the search space does not, however, reduce the amount of effort required by our attacker for reconstruction. The network architectures found by NAS procedures are commonly wide and deep, and they include multiple branch connections; thus, our attacker needs to consider an exponential number of candidate computational graphs and computation parameters, which makes the attack infeasible. To tackle this issue, we focus on the NAS procedure—this process factorizes the entire architecture into blocks by their functions. For instance, NASNet (Zoph et al., 2018) is composed of normal cells (blocks) and reduction cells. Within each block, the process considers the architecture combinations that provide the optimal performance. Thus, we first identify the potential blocks before we initiate the process of reconstructing candidate computational graphs.
Identifying Candidate Blocks. We utilize a frequent subsequence mining (FSM) method to identify the blocks composing the ProxylessNAS-CPU architecture. Our FSM method is simple: we iterate over the Flush+Reload trace with fixed windows and count the occurrences of each subsequence. Since the attacker knows that, in the search space the victim uses, a maximum of nine computations compose a block, we consider window sizes from one to nine. Once we have counted the number of occurrences of each subsequence (candidate block), we prune
https://aws.amazon.com/ec2/instance-types/
them based on the rules in the search space: 1) a Conv2d operation is followed by a BatchNorm, 2) a block with a DepthConv2d must end with a Conv2d and BatchNorm (for a depthwise separable convolution), 3) a branch connection cannot merge (add) in the middle of the block, and 4) we take the most frequent block in each window. In Table 2, we describe the 9 identified blocks. We then run the generation process of reconstructing candidate computational graphs with the blocks instead of using each computation in the trace. At the end, we have 180,224 candidate computational graphs.
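A bare-bones version of this frequent-subsequence mining step is sketched below; the layer trace is a toy example, and the search-space pruning rules listed above are omitted.

```python
# Minimal frequent-subsequence mining over a layer trace (block-identification sketch).
# Window sizes 1..9 follow the assumption that a block contains at most nine computations.
from collections import Counter

def mine_blocks(trace, max_len=9):
    counts = Counter()
    for w in range(1, max_len + 1):
        for i in range(len(trace) - w + 1):
            counts[tuple(trace[i:i + w])] += 1
    return counts

trace = ["Conv2d", "BatchNorm", "ReLU6", "DepthConv2d", "BatchNorm", "ReLU6",
         "Conv2d", "BatchNorm", "add"] * 3
counts = mine_blocks(trace)
# keep the most frequent candidate block of a given window size, e.g., size 3:
best3 = max((k for k in counts if len(k) == 3), key=lambda k: counts[k])
print(best3, counts[best3])
```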
Eliminate Candidates with Computational Parameters. For each candidate composed of known blocks, our attacker estimates the computation parameters. However, the number of parameter combinations is also exponential; for example, within the search space, a Conv2d can have any number of input/output channels, kernel size {1, 3, 5}, and strides {1, 2}. Thus, we focus on the computation rules in a block. 1) We first find that a DepthConv2d can only have the same number of input and output channels. Also, the channel size can be identified by the number of GEMM(conv) operations. For instance, in Figure 4, the DepthConv2d has 143 GEMM(conv) invocations, which is close to the channel size. Since the operation commonly has an even number of channels, the attacker can easily reduce the candidates to 142 or 144. 2) We also know that the number of GEMM(oncopy) invocations is proportional to the matrix multiplication size in a Conv2d; thus, the attacker can compare the offline profiling results with the processed traces and estimate the parameters. For instance, the 1st Conv2d has 20 GEMM(oncopy) invocations, and we approximately have a set of input dimensions, e.g., (20–30, 112, 112), from the previous block estimation. Thus, our attacker only profiles the variations of input channels {20–30}, kernels {1, 3, 5}, and strides {1, 2}—60 cases in total—and checks if there is a match. Moreover, 3) the Conv2d after a DepthConv2d is the pointwise linear operation whose kernel size and stride are one, which further reduces the attacker’s effort. Our attacker runs this elimination process and finally narrows down to only one architecture with the correct set of computational parameters, i.e., 0% error.
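One possible way to match observed GEMM counts and timings against the offline profile is sketched below; the profile entries, tolerance, and parameter tuples are invented placeholders rather than measurements from our experiments.

```python
# Sketch: match an observed (GEMM count, cycles) pair against an offline profile to
# recover convolution parameters. All profile values below are fabricated placeholders.

profile = {
    # (in_ch, out_ch, kernel, stride) -> (expected GEMM(oncopy) calls, expected cycles)
    (24, 144, 1, 1): (20, 2_000_000),
    (24, 144, 3, 2): (48, 4_500_000),
    (32, 192, 1, 1): (26, 2_600_000),
}

def estimate_params(observed_gemm, observed_cycles, rel_tol=0.15):
    matches = []
    for params, (gemm, cycles) in profile.items():
        if gemm == observed_gemm and abs(cycles - observed_cycles) <= rel_tol * cycles:
            matches.append(params)
    return matches            # candidates that survive; ideally exactly one

print(estimate_params(20, 2_100_000))   # -> [(24, 144, 1, 1)]
```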
5 DISCUSSION
In this section, we discuss defense mechanisms that prevent our attacker from reconstructing the victim’s DL system with an exact match. Prior work on defenses against cache side-channel attacks proposed system-level solutions (Kim et al., 2012; Liu et al., 2016; Werner et al., 2019). However, applying them requires infrastructure-wide changes from cloud providers. Also, even if the infrastructure is resilient to cache side-channel attacks, an attacker can leverage other attack vectors to leak similar information. Thus, we focus on the defenses that can be implemented in DL frameworks.
We design our defense mechanisms to obfuscate what the attacker observes via cache side-channels by increasing the noise in computations supported by DL frameworks. We discuss four approaches that blend noise into components of a DL framework; however, these mechanisms introduce a computational overhead by performing additional operations. This highlights that defending against our attack is not trivial and efficient countermeasures require further research.
Padding Zeros to the Matrix Multiplication Operands. Our reconstruction algorithm estimates the computational parameters such as kernel sizes or strides based on the time taken for matrix multiplication. Hence, we consider increasing the size of operands randomly by padding zeros to
them. We keep the original sizes of the operands and, after the multiplication of the augmented tensors, we convert the resulting tensor into one with the correct dimensions by removing the extra elements. With this augmentation, our attacker finds it difficult to reconstruct the victim’s DL system exactly by monitoring a single query. However, if our attacker can observe computations over multiple queries, the attacker can cancel out the noise and estimate the parameters correctly.
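A possible realization of this padding defense, written against NumPy rather than patched into an actual DL framework, is sketched below.

```python
# Sketch of the zero-padding defense: randomly enlarge matmul operands, then crop the result.
# This is a NumPy illustration, not a patch to a real DL framework's BLAS back-end.
import numpy as np

def obfuscated_matmul(a, b, max_pad=8, rng=np.random.default_rng()):
    pa, pb = rng.integers(0, max_pad, size=2)           # random extra rows/columns
    a_pad = np.pad(a, ((0, pa), (0, 0)))                # pad rows of A with zeros
    b_pad = np.pad(b, ((0, 0), (0, pb)))                # pad columns of B with zeros
    c_pad = a_pad @ b_pad                                # timing now also depends on pa, pb
    return c_pad[: a.shape[0], : b.shape[1]]             # crop back to the true result

a, b = np.ones((4, 5)), np.ones((5, 3))
assert np.allclose(obfuscated_matmul(a, b), a @ b)       # numerically identical output
```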
Adding Null/Useless Network Operations. This reconstruction attack assumes that all the computations observed in the Flush+Reload trace are used to compute the output of a DL system. Thus, a defender can modify the victim’s architecture so that it includes identity layers or branches whose outputs are not used. We hypothesize that a small number of null/useless operations will not increase the attacker’s computational burden significantly; this addition only increases the time needed to reconstruct the victim’s architecture by a few hours. If the defender includes an excessive amount of null/useless layers or branches, this can significantly increase the reconstruction time. However, this defense suffers from two issues: 1) it may still not make the reconstruction impossible, and 2) the victim also has to perform the additional operations, which increases network evaluation time significantly.
Shuffling the Computation Order. We have seen in popular DL frameworks that, once a network architecture is defined, the computational order of performing operations is also invariant. We are able to shuffle the computation order of the victim’s DL system each time when the system processes an input. In particular, we can identify the dependency of operations in a victim’s DL system and compute the independent operations in a different order each time. This approach will make the observations from cache side-channels inconsistent, which results in the exponential number of candidate architectures that our attacker needs to consider. However, to compute the independent operations separately, the defender needs to store intermediate results in memory while processing an input; thus, this approach increases the space overhead of the DL computations.
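The shuffling idea can be illustrated as drawing a random topological order over the operation-dependency graph, as in the conceptual sketch below (not framework code).

```python
# Conceptual sketch: execute independent operations in a random valid order.
import random

def random_topological_order(deps):
    """deps: {op: set of ops it depends on}. Assumes a DAG; returns one random execution order."""
    remaining = {op: set(d) for op, d in deps.items()}
    order = []
    while remaining:
        ready = [op for op, d in remaining.items() if not d]   # ops whose inputs are computed
        op = random.choice(ready)                               # randomize among independent ops
        order.append(op)
        del remaining[op]
        for d in remaining.values():
            d.discard(op)
    return order

# Two parallel branches that merge in "add": the branch order varies between runs.
deps = {"conv_a": set(), "conv_b": set(), "bn_a": {"conv_a"}, "bn_b": {"conv_b"},
        "add": {"bn_a", "bn_b"}}
print(random_topological_order(deps))
```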
Running Decoy Operations in Parallel. Lastly, we can make a DL framework run separate networks (decoy operations) in parallel on the same physical host. These networks obfuscate what our attacker will observe via Flush+Reload. Here, the attacker cannot reconstruct the victim architecture by monitoring a single query because the computational order does not reflect how the victim’s architecture is defined. However, if our attacker can observe the computations over multiple queries, the attacker can use the frequent sequence mining (FSM) that we used in the block identification to identify a repeated set of operations and can reconstruct the victim architecture. This defense also increases network evaluation time by running extra operations on the same machine.
6 CONCLUSIONS AND FUTURE WORK
This work presents an attack that reconstructs a victim’s novel DL system through the information leakage from a cache side-channel, Flush+Reload. We steal key components of the victim’s system: a novel pre-processing pipeline and a novel network architecture. Observing the DL computations and the time to complete each computation enables the attacker to populate all candidate computational graphs and prune them with our parameter estimation process. In experiments, we demonstrate the feasibility of this reconstruction attack by reconstructing MalConv, a novel pre-processing pipeline for malicious file detection, and ProxylessNAS-CPU, a novel architecture for the ImageNet classification optimized to run on CPUs. We do this with 0% error. As novel DL systems become trade secrets, our results highlight the demands for future work on countermeasures against model theft.
ACKNOWLEDGMENTS
We thank the anonymous reviewers for their valuable feedback. This research was partially supported by the Department of Defense, by NSF grants #CNS-1933033, #CNS-1840893, #CNS-1453045 (CAREER), by a research partnership award from Cisco and by financial assistance award 70NANB15H328 from the U.S. Department of Commerce, National Institute of Standards and Technology. We would like to thank the NSF REU-CAAR program (NSF grant #CCF-1560193).
A APPLICABILITY TO GPUS
Our attack is not fundamentally different for GPUs. In most deep learning frameworks, when a network performs a computation, it invokes the same function implemented in C++ and the function decides whether the back-end computation can use GPUs or not. This practice maximizes the hardware compatibility of a framework; however, this also makes the framework vulnerable to our attacker who can still observe the common functions listed in Table 3 by monitoring the shared cache. On GPUs the timings would be different, so we would have to profile the computational times, e.g., the time taken for the matrix multiplication with various sizes of tensor operands. However, on both CPUs and GPUs, the computation time is proportional to the size of tensor operands, which enables our attacker to estimate the architecture parameters with timing observations.
B DL COMPUTATIONS MONITORED IN PYTORCH AND TENSORFLOW
Figure 5 describes the reconstruction process of a small network in both the PyTorch and TensorFlow frameworks. On the left, we have the ground truth of the ToyNet architecture, which represents an example of a possible residual block in a victim network. In the middle and right, we show the observations of an adversary monitoring both PyTorch and TensorFlow code. The first entry indicates the monitored function corresponding to the desired architectural attribute. The second entry indicates the timestamp at which the adversary observes these functions, and the last entry is the number of general matrix multiplication (GEMM) function calls for the given layer observation.
Naming conventions vary slightly between the two frameworks, but the information inferred is the same. The adversary attacking both networks sees function calls that correspond to architectural attributes in the same order: Conv2d, BatchNorm2d, Conv2d/DepthwiseConv, BatchNorm2d, ReLU6, and TensorAdd. PyTorch does not distinguish between Conv2d and DepthwiseConv, but as stated in Section 4.1, we can differentiate the layers by timing data. Additionally, PyTorch and TensorFlow use different linear algebra libraries to perform matrix computation, so the implementations differ slightly. However, they both use variations on matrix multiplication algorithms that take into account system-level optimizations, such as cache size (e.g., Goto’s algorithm). In both cases, we observe operations in nested iterations of these implementations and are able to monitor instructions that correspond to the size of the matrices being multiplied, giving an adversary the ability to estimate the parameters of the convolution layers.
To perform the estimations of these layer parameters, the adversary can profile candidates offline on similar hardware. They can then create a data set of candidate parameters for given observation ranges. For instance, the number of observed GEMM calls in the PyTorch example for the depthwise convolution layer tells the attacker that there are 10 output channels, and therefore also 10 output channels in the 1st convolution. Additionally, the observed GEMM calls for the 1st convolution layer give candidate kernel sizes of 3 and 5. Likewise in TensorFlow, the observed instructions fit candidate kernel sizes of 3 or 5, and 0-24 output channels. Therefore, these
Customization of operations in TensorFlow: https://www.tensorflow.org/guide/create_op
exploitable vulnerabilities exist independent of the specific deep learning framework a victim is using.
C LIST OF FUNCTIONS MONITORED VIA FLUSH+RELOAD
Table 3 shows the exact lines of code we monitor in the PyTorch and TensorFlow frameworks. We use PyTorch v1.2.0 and TensorFlow v1.14.0. In both frameworks, we are able to monitor a similar set of DL computations in the C++ native implementations. However, the back-end libraries supporting the matrix multiplications are different, i.e., PyTorch is compiled with OpenBLAS whereas TensorFlow uses Eigen and MKL-DNN. Even though the libraries are different, the multiplications are implemented using Goto’s algorithm (Goto & Geijn, 2008). Therefore, we monitor the number of iterations of for-loops to estimate the overall size of a matrix multiplication.
D MNASNET SEARCH SPACE
Tan et al. (2019) utilize a hierarchical search space over six parameters: ConvOp, KernelSize, SERatio, SkipOp, FilterSize, and #Layers. They choose to partition a CNN into a known, finite set of blocks and then further divide these blocks into possibly repeated layers. The number of repeats per layer in a given block i is a searchable parameter Ni, which is bounded to within ±1 of the number of layers in MobileNetV2 on which block i is based. These layers are further divided into three possible
https://github.com/pytorch/pytorch/commit/8554416a199c4cec01c60c7015d8301d2bb39b64
https://github.com/tensorflow/tensorflow/commit/87989f69597d6b2d60de8f112e1e3cea23be7298
network layers (ConvOp): regular convolution, depthwise convolution, or mobile inverted bottleneck convolution. Additionally, the network layer parameters can vary. These parameters include the convolution kernel size (KernelSize), the squeeze-and-excitation ratio (SERatio), a possible skip op (SkipOp), and the output filter size (FilterSize). The squeeze-and-excitation ratio (SERatio) of a given layer varies between 0 and 0.025; the convolution kernel size varies between 3 and 5; the skip op is either pooling, identity residual, or no skip; and the filter size varies among 0.75, 1.0, and 1.25 times the filter size of the corresponding block in MobileNetV2. Overall, this gives a claimed typical search space size of 10^13 possibilities with 5 blocks, 3 average layers per block, and 432 options for the sub search space of each block. This size compares to the per-layer approach with the same parameters, which has a search space size of 10^39.
E SEARCHING CANDIDATE COMPUTATIONAL GRAPHS | 1. What is the main contribution of the paper regarding computer security and deep learning?
2. What are the strengths and weaknesses of the proposed method, particularly in its application to deep neural networks?
3. Do you have any concerns about the practicality of the approach, especially with regards to GPU usage and version compatibility?
4. How would you assess the effectiveness and limitations of the method based on the provided experimental results?
5. Are there any potential ethical or legal issues related to using this method to infer DNN architectures without permission? | Review | Review
Summary
---
This paper proposes to use a computer security method, "Flush+Reload", to infer the DNN architecture of a victim in the setting where both the attacker and the victim share the same machine in a cloud computing scenario. This does not require any physical access to the machine; however, it does require that a CPU is shared, and the inference of the architectural details is based on the time it takes to reload computations from cache.
The paper is overall clear and well written.
Motivations of the paper
---
However, concerning the motivations of the paper, I'd like some clarifications. As far as I know, in the deep learning community, the most effective architectures are published and public (VGG, Inception, ResNet, Transformer...).
I am a bit confused by the sentence "As a result, in the industry such novel DL systems are kept as trade secrets or
intellectual property as they give their owners a competitive edge (Christian & Vanhoucke, 2017)." which justifies that architectures are kept secret and thus may be prone being stolen.
This US patent is public and explains the method. As far as I know, it has never been enforced. Furthermore, this patent is associated with the paper "Going deep with convolutions", Szegedy et al. which introduced the Inception architecture, is public, very well-known, and thus I do not believe anyone would have any commercial interest in stealing it.
Furthermore, I do have the impression that the edge many companies have over their competitors is the private datasets they own much more than the architectural details.
Method and applicability
---
While the method Flush+Reload itself is not novel, its application to the DNNs case and the way to reconstruct the architecture (generating the candidates, pruning) is.
However, I do have some practical concerns about the applicability of the method.
As far as I understand, it can only work on one CPU. Most DNNs, even for inference, are run on (one or multiple) GPUs. Can the method be extended to work on GPUs?
Also, while the assumption that both the attacker and the victim use the same framework is realistic to me, I believe, they should also both use the same version of the said library, no? Otherwise some operations might be faster in some versions and slower in others, this is thus an additional and much stronger assumption to make.
Lastly, this would require the victim to use a public cloud service. However, as far as I know, many of the companies that could potentially design new architectures have their own private cloud. I am not certain that someone who has a new, private, and powerful architecture would use it on a public cloud service.
Experiments
---
The experimental section seems very limited to me. The authors show that they are able to reconstruct perfectly 2 architectures. While this is encouraging, I would like to see the limits of the proposed method.
Why not generate N random (or not so random) architectures and try to reconstruct them? Where does the method fail, where does it succeed?
What if the victim used a custom layer that the method could not recover? Does it still recover a similar architecture?
Conclusion
---
While the paper, proposed to use Flush+Reload for recovering DNNs architectures and succeeds for at least 2 non trivial architectures, I do not recommend acceptance.
First I am concerned by the problem this paper is tackling. Can this realistically happen in a real-life scenario?
Second, I am worried that the method suffers from very strong limitations in practice (eg the usage of a CPU for both victim and attacker).
Finally, and importantly, while the experiments show some interesting first results, they are limited, I am not able to judge the strengths and weaknesses of the method, and thus I cannot assess the usefulness of the proposed method.
Note: I have to say that this paper is definitely out of my area of expertise, even though I am confident in my understanding of the paper, it may be that some of my concerns are unfounded. If this is the case I will adjust my score accordingly. |
ICLR | Title
Speech-MLP: a simple MLP architecture for speech processing
Abstract
Transformers have shown outstanding performance in recent years, achieving state-of-the-art results in speech processing tasks such as speech recognition, speech synthesis and speech enhancement. In this paper, we show that, despite their success, such complex models are not needed for some important speech-related tasks, which can be solved with much simpler and compact models. Thus, we propose a multi-layer perceptron (MLP) architecture, namely speech-MLP, useful for extracting information from speech signals. The model splits feature channels into non-overlapped chunks and processes each chunk individually. These chunks are then merged together and further processed to consolidate the output. By setting different numbers of chunks and focusing on different contextual window sizes, speech-MLP learns multiscale local temporal dependency. The proposed model is successfully evaluated on two tasks: keyword spotting and speech enhancement. In our experiments, two benchmark datasets are adopted for keyword spotting (Google speech command V2-35 and LibriWords) and one dataset (VoiceBank) for speech enhancement. In all experiments, speech-MLP surpassed the transformer-based solutions, achieving better performance with fewer parameters and lower GFLOPS. Such results indicate that more complex models, such as transformers, are oftentimes not necessary for speech processing tasks. Hence, simpler and more compact models should always be considered as an alternative, especially in resource-constrained scenarios.
1 INTRODUCTION
As in many machine learning disciplines, speech processing is embracing more and more complex models, where transformer (Vaswani et al., 2017) is a particular example. It was first proposed to tackle machine translation, and afterwards was successfully applied to multiple research fields such as natural language processing (NLP) (Devlin et al., 2018) and computer vision (CV) (Dosovitskiy et al., 2020). The core of the transformer model is a self-attention mechanism, by which any two elements in a sequence can interact with each other, hence capturing long-range dependency. Considering that speech signals are naturally temporal-dependent, researchers in the speech community recently explored transformer-based models in multiple speech processing tasks, and remarkable performance was reported in speech recognition (Dong et al., 2018; Karita et al., 2019; Huang et al., 2020), speech enhancement (SE) (Kim et al., 2020; Fu et al., 2020), keyword spotting (KWS) (Berg et al., 2021; Vygon & Mikhaylovskiy, 2021) and speech synthesis (Li et al., 2019). Recently, the conformer architecture, which combines convolution and self-attention, achieved excellent success in speech processing tasks and attracted much attention Gulati et al. (2020).
In this paper, we ask the following question: Do we need complex models such as transformers for certain speech processing tasks?
This question is closely related to the principle of ‘parsimony of explanations’, a.k.a., Occam’s razor (Walsh, 1979). According to this principle, if there is any possibility, we should seek the models that can represent the data with the least complexity (Rasmussen & Ghahramani, 2001; Blumer et al., 1987). However, in the public benchmark tests, complex and elaborately designed models are often ranked higher, due to the better reported performance. For example, the KWS benchmark on Google
speech command1 and the SE benchmark on VoiceBank+DEMAND2, transformer-based models are among the top ranks. Although the good performance is worth celebrating, the increased model complexity implies potential over-tuning and over-explanation, a risk that the Occam’s razor principle intends to avoid.
We, therefore, attempt to discover the simplest neural architecture that is powerful enough to achieve performance comparable to the best existing models, in particular transformers, while eliminating unnecessary complexity. Our design is based on domain knowledge, in particular three properties of speech signals: (1) temporal invariance, (2) frequency asymmetry, and (3) short-term dependency (Huang et al., 2001; Benesty et al., 2008; Furui, 2018). Based on this knowledge, we build speech-MLP, a simple multi-layer perceptron (MLP) architecture, shown in Fig. 1. Besides the normalization components, the architecture involves simple linear transformations only. The core of the architecture is the Split & Glue layer, which splits the channel dimension into multiple chunks, processes each chunk separately, and finally merges the processed chunks in order to attain the output. Speech-MLP processes each time frame independently (compatible with temporal invariance), and the splitting & gluing procedure allows different treatments for different frequency bands (compatible with frequency asymmetry) and involves local context at multiple scales (compatible with short-term dependency).
We tested the model on two speech processing tasks: keyword spotting with the Google speech command V2-35 and Libriword benchmark datasets; and speech enhancement with the VoiceBank benchmark dataset. Results showed that on both tasks the proposed speech-MLP outperforms complex models, in particular models based on transformers. Such results demonstrate that by utilizing domain knowledge and employing appropriate normalization techniques, it is possible to design simple yet powerful models. In some cases, these simple models even beat complex models on open benchmarks, where complex models are more likely to obtain good performance by careful tuning.
In summary, we proposed Speech-MLP, a simple yet effective neural model to represent speech signals. On the KWS and SE tasks, we demonstrated that the simple model can achieve performance comparable to or even better than transformers, with fewer parameters and less inference time. Our work shows that by taking domain knowledge into account, it is possible to remove unnecessary complexity (e.g., modeling the long-range dependency in KWS and SE) in model design, as advocated by the Occam’s razor.
2 RELATED WORK
Recent research has shown that a simple model can be as effective as complex and task specific models such as transformers in some important tasks. In (Tolstikhin et al., 2021), for example, the authors proposed a simple architecture for vision, namely MLP-Mixer. The model receives a sequence of image patches and performs channel-wise and patch-wise linear projection alternatively and iteratively. Without using convolutions or self-attention, the Mixer architecture separates the per-location (channel-mixing) and cross-location (token-mixing) operations (Tolstikhin et al., 2021). While the channel-mixing MLPs enable communication between different channels, the token-mixing MLPs allow communication between different spatial locations (tokens). Tested on image classification benchmarks, MLP-Mixer achieved performance comparable to SOTA models, in particular the vision transformer model (Tolstikhin et al., 2021).
In another recent work (Liu et al., 2021), the authors investigated the need for the self-attention mechanism in transformers, proposing an alternative MLP-based architecture, namely gMLP. The model, based on MLP layers with gating, consists of a stack of L identical blocks. Each block comprises a normalization layer and a channel projection, followed by an activation function and a spatial gating unit, followed by another channel projection (Liu et al., 2021). It achieves similar performance when compared to the vision transformer model (Touvron et al., 2021b), being 3% more accurate than the aforementioned MLP-Mixer model with 66% fewer parameters. The model was also successful on language modeling in the BERT setup (Liu et al., 2021), minimizing perplexity as well as Transformers. The authors also found that perplexity reduction was more influenced by the model capacity than by the attention mechanism.
1https://paperswithcode.com/sota/keyword-spotting-on-google-speech-commands 2https://paperswithcode.com/sota/speech-enhancement-on-demand
Inspired by vision transformers (Touvron et al., 2021b; Dosovitskiy et al., 2020), in (Touvron et al., 2021a), the authors apply the skip connection technique from ResNets to MLP layers and propose the so-called Residual Multi-Layer Perceptrons (ResMLP). The model receives non-overlapping image patches, typically 16 × 16. These patches go through a linear transformation in order to attain d-dimensional embeddings. The embeddings are then fed to a sequence of ResMLP blocks to produce a set of d-dimensional output embeddings. Average pooling is applied on the d-dimensional output vectors to represent the image, and a linear classifier is then used to predict the label associated with the image (Touvron et al., 2021a).
Differently from MLP-Mixer, gMLP and ResMLP, CycleMLP can process inputs of arbitrary resolution with linear computational complexity as its receptive fields are enlarged for context aggregation (Chen et al., 2021). The model is based on the Cycle Fully-Connected Layer (Cycle FC), serving as a generic, plug-and-play transformer-free architecture. Results show CycleMLP outperforming existing MLP-like models on ImageNet classification and achieving good performance on object detection, instance segmentation and semantic segmentation (Chen et al., 2021).
The aforementioned research highlights that, despite their success, convolution and self-attention mechanisms are not mandatory for some CV and NLP tasks, and can be replaced by simpler layers such as MLP with a customized design. Although typical convolution operations are not used by these MLP solutions (but rather 1 × 1 convolution as pointed out in (Chen et al., 2021) and (Tolstikhin et al., 2021)), these MLP approaches are inspired by CNN architectures for computer vision related tasks. Their building block, nonetheless, is similar and based on applying linear transformation on spatial locations and feature channels.
Although inspired by these new MLP architectures, speech-MLP focuses on speech signals rather than images. This implies processing different input resolutions given the nature of the input signal. The split & glue layer is very similar to a separable CNN (Chen et al., 2018), if we regard the frame-independent processing as a 1-D convolution in time. In particular, it is essentially a group-wise CNN (Romero et al., 2020) with different kernels for each group. However, from the perspective of feature learning, the entire split & glue is an MLP if our focus is a particular frame (within a context). That is why a 1-D convolution is often called a time-delay neural net (TDNN) (Waibel et al., 1989). We follow this convention and name our structure speech-MLP.
A key motivation of the speech-MLP structure is to respect the properties of speech signals. It should be emphasized that almost all successful techniques in speech processing take these properties into account: for instance, the hidden Markov model (HMM) assumes short-term dependency (Rabiner & Juang, 1986), the TDNN assumes temporal invariance (Waibel et al., 1989), and frequency asymmetry is explicitly implemented in the famous MFCC feature (Mermelstein, 1976). In this paper, the role of knowledge of speech signals is to help remove unnecessary complexity, i.e., to seek the minimum structure that reflects these basic properties.
Finally, MLP is not new in speech processing; in fact, the neural models used in the early days of speech processing were all general MLPs, e.g., (Bourlard & Morgan, 2012). Speech-MLP is a specially designed MLP that takes the properties of speech signals into account.
3 METHODOLOGY
Our model, referred to as speech-MLP, is presented in Figure 1. Note that for a given speech waveform, a sequence of acoustic features, denoted by X = {x1, x2, ..., xn}, is first extracted. These features are then fed into N stacked speech-MLP blocks, and the output of the last speech-MLP block is a speech representation that needs to undergo task-specific layers in order to perform specific tasks, such as the ones addressed in this study: SE and KWS.
Inside of each speech-MLP block, there are three components: (1) a linear transformation for a pre-projection of the extracted acoustic features; (2) a Split & Glue layer for processing the projected acoustic features while addressing frequency asymmetry and temporal dependency, and (3) another linear transformation for post-projection of the final representation. Two residual connections are also adopted to encourage gradient propagation. The first one maps the input features onto the output of the last linear transformation (i.e., the output of the post-projection operation). The second residual connection maps the output of the first linear transformation (i.e., the output of the pre-projection operation) onto the output of the Split & Glue layer. Note that normalization tech-
niques are also applied to regulate the feature distribution (by layer norm) and temporal variance (by instance norm). In the next section, we give more details on the Split & Glue layer, followed by a discussion on the normalization methods adopted in this work.
3.1 SPLIT & GLUE
Figure 2 depicts how the Split & Glue layer operates. The sequence of acoustic features is denoted by X ∈ R^{H×T}, with T and H being, respectively, the length and the number of channels of the input sequence. The first step is to split X into K non-overlapping chunks, as illustrated in both Figure 1 and Figure 2. The split, referred to as X → {X^1, ..., X^k, ..., X^K}, is performed along the channel dimension. In our experiments, the channel dimension of each chunk is considered the same, leading to X^k ∈ R^{H/K×T}. For each chunk X^k, a context expansion is then performed through the so-called unfolding operation. This results in context-expanded chunks, denoted by X^k_w ∈ R^{w_k H/K×T}, where w_k is the size of the context window induced by the unfolding operation.
Note that the number of chunks K and the window size w_k can be arbitrarily selected for each chunk. This flexibility allows us to represent multi-scale contexts by adopting different window sizes for different chunks. In Figure 2, for instance, the input channels are split into two chunks, and the window sizes are set to 3 and 5, respectively. This leads to the model learning from small and large contexts simultaneously.
The unfolded chunk X^k_w is projected by a linear transformation, leading to a new representation of the initial chunk, Y^k ∈ R^{Ĥ×T}, where Ĥ can be set arbitrarily and is called the number of Glue channels. We highlight that the linear transformation used in the above chunk-wise operation is shared across all the time steps for a single chunk, and each time frame is processed independently. This setting reduces the number of parameters and is compatible with the temporal invariance property of speech signals. Nevertheless, different weight parameters are adopted for different chunks, to provide sufficient flexibility.
Finally, all the learned speech representations Y k are concatenated along the channel dimension, forming a glued feature matrix Y G = {Y 1, Y 2, ..., Y K}. Then, another linear transformation is applied in order to obtain the output feature Y ∈ RH×T . Again, the linear transformation is shared across all the time steps, to reflect temporal invariance.
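To make the above operations concrete, the following is a minimal PyTorch sketch of a Split & Glue layer. It is our own illustration written from the description in this section, not the authors' released code; the class and argument names (SplitGlue, glue_channels, windows) are ours, and symmetric padding with stride 1 is assumed so that the output keeps the input length T.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitGlue(nn.Module):
    """Illustrative Split & Glue layer: split the H channels into K chunks,
    expand each chunk with its own context window, project every frame of each
    chunk (Linear A), then concatenate and project back to H channels (Linear B)."""

    def __init__(self, in_channels, glue_channels, windows):
        super().__init__()
        self.windows = windows                              # e.g. [3, 7, 9, 11]
        num_chunks = len(windows)
        assert in_channels % num_chunks == 0
        chunk_dim = in_channels // num_chunks
        self.chunk_proj = nn.ModuleList(
            [nn.Linear(w * chunk_dim, glue_channels) for w in windows])
        self.glue_proj = nn.Linear(num_chunks * glue_channels, in_channels)

    def forward(self, x):
        # x: (batch, H, T)
        chunks = torch.chunk(x, len(self.windows), dim=1)
        projected = []
        for chunk, w, proj in zip(chunks, self.windows, self.chunk_proj):
            # pad the time axis so that unfolding with stride 1 preserves the length T
            padded = F.pad(chunk, (w // 2, w - 1 - w // 2))
            expanded = padded.unfold(dimension=2, size=w, step=1)   # (batch, H/K, T, w)
            b, c, t, _ = expanded.shape
            expanded = expanded.permute(0, 2, 1, 3).reshape(b, t, c * w)
            projected.append(proj(expanded))                        # (batch, T, glue_channels)
        glued = F.gelu(torch.cat(projected, dim=-1))                # (batch, T, K * glue_channels)
        return self.glue_proj(glued).transpose(1, 2)                # back to (batch, H, T)
```

For example, SplitGlue(in_channels=128, glue_channels=64, windows=[3, 7, 9, 11]) processes a (batch, 128, T) input; the channel sizes here are placeholders, since the actual configurations are those listed in Table 1.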
3.2 NORMALIZATIONS
Normalization plays an important role in our speech-MLP model. We employed two normalization approaches: (1) layer normalization (LN) (Ba et al., 2016) and (2) instance normalization (IN) (Ulyanov et al., 2016).
Layer normalization is applied across the channel dimension at each time step. Thus, it computes statistics (mean and variance) on each column of X ∈ RH×T , and then uses these statistics to normalize the elements in the same column. With this normalization technique, the distribution of the feature vector at each time step is regularized.
Instance normalization is used to perform per-channel normalization. That is, the statistics are computed on each row of X ∈ RH×T and applied across the time steps to normalize the elements of each row. Thus, the temporal variation of each channel is normalized. Note that IN extends the conventional cepstral mean normalization (CMN) approach (Liu et al., 1993), by normalizing not only acoustic features, but also features produced by any hidden layer.
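To make the two normalizations concrete, the sketch below (our own, omitting the learnable affine parameters that the standard layers include) shows along which axis of X ∈ RH×T the statistics are computed in each case.

```python
import torch

def layer_norm(x, eps=1e-5):
    # x: (H, T); one mean/variance pair per time step, computed over the channels
    mean = x.mean(dim=0, keepdim=True)                    # (1, T)
    var = x.var(dim=0, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # x: (H, T); one mean/variance pair per channel, computed over the time steps
    mean = x.mean(dim=1, keepdim=True)                    # (H, 1)
    var = x.var(dim=1, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)
```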
Empirically, we found that IN was only effective for the SE task while the LN was more important for the KWS task. Therefore, we apply LN only for KWS and IN for SE.
4 EXPERIMENTS
We evaluate the proposed speech-MLP model in two speech processing tasks: speech enhancement and keyword spotting. In this section, we introduce these tasks and their respective datasets, used in our experiments, followed by experimental settings, experimental results, and the ablation study.3
3The code will be available on github. To respect the double-blind review, the link will be sent to the reviewers when the discussion is open.
4.1 KEYWORD SPOTTING
Keyword spotting aims at detecting predefined words in speech utterances (Szöke et al., 2005; Mamou et al., 2007; Wang, 2010; Mandal et al., 2014). In our experiments, we explore two KWS datasets: (1) the Google speech commands V2 dataset (Warden, 2018), and (2) LibriWords (Vygon & Mikhaylovskiy, 2021). The Google speech commands V2 dataset (here referred to as V2-35) consists of 105,829 utterances of 35 words, recorded by 2,618 speakers. The training, validation and test sets contain 84,843, 11,005 and 9,981 utterances respectively. The LibriWords dataset, larger and more complex, is derived from 1000 hours of English speech from the LibriSpeech dataset (Panayotov et al., 2015). Signal-to-word alignments were generated using the Montreal Forced Aligner (McAuliffe et al., 2017) and are available in (Lugosch et al., 2019). The average duration of the keywords is 0.28 seconds. The provider defined four benchmark tests, based on the number of target keywords: LW-10, LW-100, LW-1K and LW-10K, with 10, 100, 1k and 10k target keywords respectively. More details on this dataset are presented in the Appendix.
4.1.1 SETTINGS
We used the same architecture in all the KWS tasks, except that the dimension of the output layer was adapted to the number of keywords, as shown in Table 1. Note that we set the window sizes w to {3, 7, 9, 11}, which allows us to exploit multi-scale contexts. Additionally, we set the stride to 1 and set the padding list p appropriately to ensure that all the expanded features have the same length as the input feature.
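As a concrete check of this setting (assuming symmetric padding, which is our reading rather than an explicit statement): with stride 1 and a window of size wk, padding the time axis by a total of wk − 1 frames keeps the unfolded sequence at length T, so the windows {3, 7, 9, 11} require total paddings of {2, 6, 8, 10} frames respectively.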
Prior to the feature extraction step, each speech recording is resampled to 16 kHz. Then, 40-dimensional Mel-Frequency Cepstral Coefficients (MFCC) are extracted as the acoustic features. The MFCC features are projected to the target-dimensional feature vector by a linear layer and then forwarded to the speech-MLP blocks. The output features are then passed through a max-pooling operation that collects information across time steps. Finally, two linear layers with a GELU activation function in the middle and a softmax activation are employed in order to obtain the posterior probabilities that the input speech belongs to each keyword. For regularization, SpecAugment (Park et al., 2019), dropout (Baldi & Sadowski, 2013), and label smoothing (Müller et al., 2019) were used to prevent overfitting.
Three model architectures were evaluated in all the experiments: a 180k-parameter small model denoted by Speech-MLP-S, a 480k-parameter large model denoted by Speech-MLP-L, and a 2,375k-parameter extra-large model denoted by Speech-MLP-XL. The three models differ in the number of channels of the hidden layer (i.e., after the pre-projection) and the channels within the Split & Glue block (i.e., channels after Linear A, and layers in Fig. 2), as shown in Table 1.
For the experiments on the Google speech commands dataset, we applied the following data augmentation techniques: time shifting, audio re-sampling and noise perturbation, as in (Berg et al., 2021; Vygon & Mikhaylovskiy, 2021). After augmentation, the data was increased to 10 times the size of V2-35. We set the batch size to 256 and trained the model for 100 epochs on 4 Nvidia V100 GPUs.
For the experiments on LibriWords, the batch size was set to 1024, and we trained the model for 20 epochs on 2 Nvidia V100 GPUs, which proved sufficient for this dataset. The training schemes were set differently simply because LibriWords is huge and long-term training is not economical.
The performance of the proposed model is compared to three benchmarks. The first one, referred to as Att-RNN, is a CNN-LSTM architecture with the attention mechanism introduced in (de Andrade et al., 2018). The model has approximately 202k trainable parameters and attains reasonable performance. Another recent solution, based on a transformer architecture, is adopted as the second benchmark (Berg et al., 2021). We refer to this benchmark as KWT-K, where K denotes the model size. Res15 (Vygon & Mikhaylovskiy, 2021), another recent work based on ResNet, reports high performance on both V2-35 and LibriWords. The authors reported results with two configurations, one trained with cross entropy (Res15-CE) and the other with triplet loss (Res15-TL). We use them as the third benchmark.
4.1.2 RESULTS
Table 2 presents the results of the benchmarks discussed in the previous section and the performance of the proposed Speech-MLP. The experimental results on V2-35 are reported as the mean and 95% confidence interval over 5 trials with different random seeds. It can be observed that the Speech-MLP models outperform all the benchmarks with comparable model sizes. Note that the small version of speech-MLP, which contains less than half of the parameters of its large version, can still maintain reasonable performance, providing higher accuracy than most benchmarks. The performance of our solution on the LibriWords dataset is even more significant: it outperforms Res15-CE and Res15-TL while maintaining performance across all LibriWords subsets. Our conjecture is that the knowledge-driven design uses the parameters more efficiently, which allows smaller models to handle large-scale tasks.
4.1.3 ABLATION STUDY
To investigate how each module impacts the performance of speech-MLP, we conducted an ablation study. To compare the models fairly, we used a fixed random seed of 123 in all ablation experiments. Note that setting the window list to {3} is equivalent to a TDNN with kernel size 3, and setting it to {3, 3, 3, 3} is equivalent to a TDNN with a 4-group convolution of kernel size 3 in the Split & Glue layer; our proposed speech-MLP with varied window sizes outperforms these existing solutions. We particularly focus on the chunk splitting, especially the number of chunks and the context window of each chunk. They are the only hyperparameters that we need to design in speech-MLP, by using domain knowledge.
The results are reported in Table 3. It can be observed that the setting for the number of chunks and the context window does matter. A longer context window is clearly beneficial, and setting different context windows for different chunks can further improve the performance. This confirms our conjecture that contextual information is important for representing speech signals, and exploiting multi-scale contextual information is especially important.
An interesting comparison is between the Speech-MLP-S model with windows {3, 7, 9, 11} and the Speech-MLP-L model with window {1}. The parameters of the two models are comparable, but the latter does not involve any chunk splitting or context expansion. The clear advantage of the Speech-MLP-S model demonstrates that the performance improvement with larger and multi-scale context windows (cf. the performance of Speech-MLP-S or Speech-MLP-L with different windows) is due to the newly designed Split & Glue structure, rather than the increase in parameters. This in turn demonstrates the value of domain knowledge: if we can exploit it appropriately, it is possible to design very parsimonious models.
4.2 SPEECH ENHANCEMENT
Speech enhancement, which aims at inferring clean speech from its corrupted version (Benesty et al., 2006; Loizou, 2007; Das et al., 2020), is another fundamental task used to evaluate our model. We choose the Voicebank+Demand dataset (Valentini-Botinhao et al., 2016) to perform the SE test. It contains clean speech signals from the Voicebank dataset and includes 28 speakers for training and 2 speakers for testing. Noise signals of 40 types from the DEMAND database (Thiemann et al., 2013) were selected and mixed into the clean speech. After the mixing, the training set and testing set involve 11,572 and 824 clips respectively. We split the training utterances into segments of 3 seconds without overlap. This resulted in 17,989 training samples, each sample consisting of a noise-corrupted segment and the corresponding clean segment. The goal of SE is to learn a mapping function that converts a noisy segment to a clean segment.
4.2.1 SETTINGS
The architecture of our SE model is shown in Table 1. As input, the model receives a 257-dimensional log-magnitude spectrum. The extracted features are first projected by a linear layer and reduced to a 256-dimensional feature vector, which is then forwarded to 10 stacked speech-MLP blocks. The output from the last speech-MLP block is re-projected to a 257-dimensional feature vector. After a hard-sigmoid function (Courbariaux et al., 2015), the values of the output units correspond to the ratio masks on the 257-dimensional input log-magnitude spectrum. The clean speech signal is estimated by applying the ratio masks onto the noisy spectrum and reusing the noisy phase.
More details of the settings can be found in the Appendix. The performance of the proposed model is compared to six benchmarks. Note that we focus on models trained without extra data or extra models for knowledge distillation. The reader can find details on these enhancement methods in the references presented in Table 4. Following the convention on this test set, we report the results of four metrics: PESQ, BAK, SIG and OVL (Hu & Loizou, 2007).
4.2.2 RESULTS
The results are shown in Table 4, where we choose 6 baseline systems for comparison. Among these systems, T-GSA (Kim et al., 2020) is based on a transformer model. Similar to speech-MLP, the authors of T-GSA also noticed the importance of local context and designed an annealing approach to encourage attention on neighbouring frames. However, the attention is still global in nature, and the improvement with T-GSA was still attributed to the capacity of transformers in learning (not so) long-range dependency. Note that the size of the T-GSA model was not reported in the original paper, so we made an estimation according to the structure description.
The results shown in Table 4 demonstrate that our speech-MLP model outperforms all six baselines. In particular, without modeling any long-range dependency, it outperforms T-GSA with a model almost 100 times smaller. This comparison challenges the assumption that the better performance of T-GSA over the other baselines is due to its capacity of capturing long-range dependence in speech. Moreover, the model size of Speech-MLP is much smaller than that of T-GSA, and due to the concise architecture, the training is simple and fast. This provides strong support for our argument that complex models are not necessarily the best, and a knowledge-based model with parsimonious parameters may easily beat complex models.
5 CONCLUSIONS
In this paper, we propose the speech-MLP model, a simple MLP architecture for speech processing tasks. Our main motivation was to find a compact solution that eliminates unnecessary complexity while being able to capture essential information from speech signals. By utilizing domain knowledge of speech, we designed a simple yet effective structure that involves only linear transforms and normalization. The main ingredient is a split & glue structure, which splits input features into multiple chunks and makes each of them account for a different context. This knowledge-based design reflects several properties of speech signals, including temporal invariance, frequency asymmetry, and short-term dependency. The experimental results on keyword spotting and speech enhancement demonstrate that speech-MLP is highly effective: with far fewer parameters and much less computation, it can beat larger and more elaborately designed models, including transformers.
Much work remains, for example: how to design better chunking and contexts; how to make the model even smaller (e.g., removing unnecessary residual connections); and how to trade off the complexity in chunks and in depth. The ultimate goal is to design a lightweight, sufficiently powerful and generalizable component for speech feature extraction. We believe such a knowledge-driven feature extractor benefits general speech processing tasks, such as speech recognition and understanding.
6 REPRODUCIBILITY STATEMENT
We made the following efforts to ensure that the results reported in the paper can be reproduced by other researchers.
• We will release the code on GitHub, so everyone can download it.
• The datasets used in this paper are all publicly available to researchers.
• We documented the required Python environment and provided step-by-step guidance for the reproduction.
• We fixed the random seed in the code, so that others can reproduce our results exactly.
A APPENDIX A: DETAILS OF KWS EXPERIMENT
In this section, we present the details of the KWS experiment. We start with the system architecture, followed by the data preparation. We then present the training methods and the hyperparameters used in the experiments.
A.1 SYSTEM ARCHITECTURE
Prior to feature extraction, speech signals are resampled to 16 kHz if needed. Then, we use librosa4 to extract 40-dimensional MFCC features. The parameters used to extract these features are presented in Table 5. Global mean and variance normalization is also applied to the extracted features; the statistics are calculated on the respective training set of each task. After that, the features are fed into the model shown in Figure 3.
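A hedged sketch of this feature extraction step is given below. The librosa calls are standard, but the frame and hop sizes are placeholders, since the actual values are those listed in Table 5; the global statistics are assumed to be precomputed on the training set.

```python
import librosa

def extract_mfcc(path, mean, std, sr=16000, n_mfcc=40):
    """Illustrative MFCC extraction; `mean` and `std` are global statistics
    precomputed on the training set of the task."""
    y, _ = librosa.load(path, sr=sr)                        # resample to 16 kHz if needed
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=int(0.025 * sr),      # placeholder frame size
                                hop_length=int(0.010 * sr)) # placeholder hop size
    return (mfcc - mean) / (std + 1e-8)                     # shape: (n_mfcc, T)
```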
Specifically, a linear transformation (Linear 0) operates on the normalized MFCC features, projecting them to 128-dimensional embeddings. These embeddings are then forwarded to stacked Speech-MLP blocks (4 blocks in our KWS study) to extract multiscale contextual representations. For each speech utterance, the last Speech-MLP block outputs a sequence of context-rich representations, and a max pooling operation is adopted to aggregate this sequence into a single utterance-level representation. This representation is then passed to a 128 × 128 linear transformation and a GELU nonlinear activation function. It is then further processed by a 128 × M linear transformation and a softmax nonlinear activation, where M is the number of keywords. The final output of the above process is a vector that represents the posterior probabilities that the original speech utterance belongs to each keyword.
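A minimal sketch of this classification head is shown below. It assumes a (batch, 128, T) representation from the last Speech-MLP block and is our own illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KWSHead(nn.Module):
    """Illustrative head: max-pool over time, Linear(128, 128) + GELU, Linear(128, M) + softmax."""

    def __init__(self, hidden=128, num_keywords=35):
        super().__init__()
        self.fc1 = nn.Linear(hidden, hidden)
        self.fc2 = nn.Linear(hidden, num_keywords)

    def forward(self, x):
        # x: (batch, hidden, T), the output of the last Speech-MLP block
        pooled, _ = x.max(dim=2)                      # utterance-level representation
        h = F.gelu(self.fc1(pooled))
        return torch.softmax(self.fc2(h), dim=-1)     # per-keyword posterior probabilities
```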
A.2 DATA PREPARATION
A.2.1 GOOGLE SPEECH COMMANDS
The Google speech commands V2-35 dataset contains 35 classes. The data can be obtained from the provider’s website5. There are 84,843 training samples in total, with strictly no overlap between the training, validation and test sets.
4https://librosa.org/doc/latest/index.html 5http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz
Data augmentation techniques have been used to increase the training data by 9 times. Combined with the original data, we have 848,430 training samples in total. We fixed the random seed to 59185 when producing the augmented samples. The following augmentation strategies were adopted in this work:
• Noise perturbation: the noise perturbation script provided by the organizer of the DNS challenge is used to add background noise to clean speech6. The SNR factor is randomly sampled from [5, 10, 15] with equal probabilities;
• Time shifting: time shifting is applied in the time domain. It shifts the waveform by a time-shift factor t sampled from [−T, T ]. In our experiments we set T = 100. When t < 0, the waveform is shifted left by t samples and t zeros are padded to the right side. When t > 0, the waveform is shifted right by t samples and t zeros are padded to the left side;
• Resampling: the resample function from scipy (scipy.signal.resample) is used to perform resampling augmentation, which changes the sampling rate slightly. Specifically, given a parameter R, a resampling factor r is drawn from [1−R, 1+R], and the augmented sample is obtained by changing the sampling rate to r×16000. R is set to 0.15 in our experiments.
6We use the segmental snr mixer function from https://github.com/microsoft/DNS-Challenge/blob/master/audiolib.py
After the above augmentation, the original speech and the augmented speech are further corrupted by SpecAug (Park et al., 2019). The setting of SpecAug is shown in Table 5. Note that SpecAug does not enlarge the dataset.
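Below is a small NumPy/SciPy sketch of the time-shifting and resampling strategies described above (noise perturbation is omitted because it relies on the DNS-challenge script). It is written from the description; the function names and random-number handling are our own.

```python
import numpy as np
from scipy.signal import resample

def time_shift(wave, max_shift=100):
    """Shift the waveform by t ~ U[-max_shift, max_shift] samples, zero-padding the gap."""
    t = np.random.randint(-max_shift, max_shift + 1)
    if t < 0:   # shift left, pad zeros on the right
        return np.concatenate([wave[-t:], np.zeros(-t, dtype=wave.dtype)])
    if t > 0:   # shift right, pad zeros on the left
        return np.concatenate([np.zeros(t, dtype=wave.dtype), wave[:-t]])
    return wave

def random_resample(wave, r_max=0.15):
    """Resample the waveform by a factor r ~ U[1 - r_max, 1 + r_max]."""
    r = np.random.uniform(1.0 - r_max, 1.0 + r_max)
    return resample(wave, int(len(wave) * r))
```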
A.2.2 LIBRIWORDS
The LibriWords dataset is larger and more complex. The samples are extracted from the json files provided by the dataset providers7. We follow the task definition of the dataset provider, and the details are given below.
• LibriWords 10 (LW-10): this task contains 10 keywords, including “the”, “and”, “of”, “to”, “a”, “in”, “he”, “I”, “that”, and “was”. There are 1,750k samples in total, and they are split into a training set (1,400k), a validation set (262,512) and a test set (87,501).
• LibriWords 100 (LW-100): a more challenging task that contains 100 keywords. There are 1,512k training samples, 189,010 validation samples and 188,968 test samples, totalling 1,890k samples.
• LibriWords 1000 (LW-1K): with increased difficulty, this task contains 1000 keywords. The training set involves 2,178k samples, and the validation set and the test set contain 272,329 samples and 271,858 samples respectively.
• LibriWords 10000 (LW-10K): the most challenging task, with 9,998 keywords. The training set contains 2,719k samples, and the validation and test sets contain 339,849 and 335,046 samples respectively.
Given the large number of samples, data augmentation was not required for this task. We only performed SpecAug (Park et al., 2019) based on the settings presented in Table 5.
A.3 TRAINING PARAMETERS
The parameters used during training are specified in Table 5. Further details are presented below.
• The cross entropy between the model prediction and the ground truth is used as the loss function;
• The optimizer used in all the experiments is AdamW. The initial learning rate is set to 0.01, and cosine annealing is applied to adjust the learning rate from 0.01 to 0.0001 (see the sketch after this list);
• Dropout is applied onto the residual connections within the speech-MLP block, with the dropout rate set to 0.1;
• Label smoothing is employed to prevent the over-confidence problem. The smoothing factor is set to 0.1;
• In the V2-35 experiment, the models are trained for 100 epochs with 10 warmup epochs; in the LibriWords experiment, the models are trained for 20 epochs without warmup;
• In both experiments, the model is evaluated on the validation set after each epoch, and the best-performing checkpoint is used to report the performance on the test set;
• We fix the random seed to be 123 in all the ablation study experiments, for the sake of reproducibility.
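The loss, optimizer and scheduler settings above can be sketched as follows. This is our own illustration; the exact warmup schedule is not specified in the list, so it is omitted here, and total_steps is assumed to be the number of training steps over which the cosine annealing runs.

```python
import torch
import torch.nn as nn

def build_training_objects(model, total_steps):
    """Cross entropy with label smoothing 0.1, AdamW with lr 0.01,
    and cosine annealing from 0.01 down to 0.0001."""
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=total_steps, eta_min=0.0001)
    return criterion, optimizer, scheduler
```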
B APPENDIX B: DETAILS OF SE EXPERIMENT
B.1 SYSTEM ARCHITECTURE
The model architecture has been presented in Figure 4. The primary goal is to learn a mapping function that converts the noisy magnitude spectrum to the clean magnitude spectrum. The model output predicts soft ratio masks, which can be applied to the noisy magnitude spectrum to estimate the magnitude spectrum of the clean speech. Combining the denoised magnitude spectrum and the phase spectrum of the original noisy speech, one can obtain the denoised waveform by inverse STFT.8
7https://github.com/roman-vygon/triplet_loss_kws
8We used the STFT class implemented in the torch-mfcc toolkit (https://github.com/echocatzh/torch-mfcc).
More specifically, a 257-dimensional log-magnitude spectrum is first extracted from the noisy speech as the acoustic features, following the configuration shown in Table 6. A linear layer then transforms the input features into 256-dimensional vectors PreX. The transformed feature vectors are forwarded to 10 Speech-MLP blocks, and the output from the last block, denoted by PostX, involves multiscale contextual information. Afterwards, a residual connection adds PreX and PostX together, and instance normalization is applied to regulate temporal variance. Finally, another linear transform and a non-linear HardSigmoid activation project the normalized feature to a masking space whose dimensionality is the same as that of the input feature, corresponding to the ratio mask M ∈ [0, 1] on the noisy magnitude spectrum.
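The masking and re-synthesis step described above can be sketched as follows. This is our own illustration using torch.stft/istft rather than the torch-mfcc toolkit mentioned in the footnote, and the STFT parameters (a 512-point FFT giving 257 bins, the hop size and the window) are assumptions consistent with a 257-dimensional spectrum.

```python
import torch

def enhance(noisy_wave, model, n_fft=512, hop=256):
    """Apply the predicted ratio mask to the noisy magnitude and reuse the noisy phase."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(noisy_wave, n_fft, hop_length=hop, window=window,
                      return_complex=True)                 # (257, frames)
    mag, phase = spec.abs(), torch.angle(spec)
    log_mag = torch.log(mag + 1e-8)                        # model input: log-magnitude
    mask = model(log_mag.unsqueeze(0)).squeeze(0)          # ratio mask in [0, 1]
    est_spec = torch.polar(mask * mag, phase)              # denoised magnitude, noisy phase
    return torch.istft(est_spec, n_fft, hop_length=hop, window=window)
```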
B.2 LOSS FUNCTIONS
The loss function of our model is computed based on the discrepancy between the denoised speech Xd and the clean speech Xc. The entire loss consists of two parts: (1) the distance on the power-compressed magnitude spectrum, denoted by Lmag, and (2) the distance on the power-compressed STFT, denoted by Lstft. We use a single frame to demonstrate the computation; the actual loss averages L over all frames.
Dreal, Dimag = STFT(Xd)
Creal, Cimag = STFT(Xc)
Dmag = √(Dreal² + Dimag²)
Cmag = √(Creal² + Cimag²)
Lmag = (Cmag^0.3 − Dmag^0.3)²
Dreal^0.3 = (Dmag^0.3 / Dmag) × Dreal
Dimag^0.3 = (Dmag^0.3 / Dmag) × Dimag
Creal^0.3 = (Cmag^0.3 / Cmag) × Creal
Cimag^0.3 = (Cmag^0.3 / Cmag) × Cimag
Lstft = {(Creal^0.3 − Dreal^0.3)² + (Cimag^0.3 − Dimag^0.3)²}²
L = 10 × Lmag + Lstft
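A hedged PyTorch sketch of this loss is given below. It follows the equations above (including the outer square on Lstft as written); the mean reduction over all time-frequency bins and the STFT settings are our assumptions.

```python
import torch

def se_loss(denoised, clean, n_fft=512, hop=256):
    """Power-compressed loss: L = 10 * Lmag + Lstft."""
    window = torch.hann_window(n_fft)
    d = torch.stft(denoised, n_fft, hop_length=hop, window=window, return_complex=True)
    c = torch.stft(clean, n_fft, hop_length=hop, window=window, return_complex=True)
    d_mag, c_mag = d.abs() + 1e-8, c.abs() + 1e-8
    l_mag = ((c_mag ** 0.3 - d_mag ** 0.3) ** 2).mean()
    # power-compressed real/imaginary parts: scale each bin by mag^0.3 / mag
    d_scale, c_scale = d_mag ** 0.3 / d_mag, c_mag ** 0.3 / c_mag
    diff_real = c_scale * c.real - d_scale * d.real
    diff_imag = c_scale * c.imag - d_scale * d.imag
    l_stft = ((diff_real ** 2 + diff_imag ** 2) ** 2).mean()
    return 10.0 * l_mag + l_stft
```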
B.3 TRAINING PARAMETERS
The parameters for model training are summarized in Table 6. Specifically, the model was trained for 1000 epochs using the AdamW optimizer. The initial learning rate was set to 0.01, and a cosine annealing scheduler was used to adjust the learning rate from 0.01 to 0.0001 in 3000 steps. Warmup was applied and involved 30 epochs. The model was evaluated on the evaluation set every epoch, and the best checkpoint (in terms of PESQ) on the evaluation set was saved. The results are reported in terms of four metrics: PESQ, BAK, SIG and OVL (Hu & Loizou, 2007).9
C APPENDIX C: PSEUDO CODE FOR SPLIT & GLUE
9The evaluation script is pysepm (https://github.com/schmiph2/pysepm).
Algorithm 1 Pseudo code for Split & Glue
Input Sequence: X ∈ RH×T : sequence of acoustic features of T frames and H dimensions
Input Parameter: w = {w1, ..., wK}: window sizes of the K chunks
Input Parameter: p = {p1, ..., pK}: padding definition for the K chunks
Input Parameter: s: stride in context expansion
Output: Y ∈ RH×T : sequence of output features of T frames and H dimensions
Ensure: H % K = 0
{X1, ..., XK} = chunk(X, H, K)        . Split X into K pieces along the channel dimension
for k in range(K) do
    Xkw = unfold(Xk, wk, pk, s)       . Context expansion by unfolding
    Yk = WkA Xkw + bkA                . Linear projection A for each chunk, where WkA has shape [Ĥ, wk × H/K]
end for
YG = [Y1; Y2; ...; YK]                . Concatenate the Yk along the channel dimension
YG = GELU(YG)
Y = WB YG + bB                        . Linear projection B to glue the chunks, where WB has shape [H, K × Ĥ] | 1. What is the focus of the paper regarding speech processing?
2. What are the strengths and weaknesses of the proposed MLP-based neural network architecture?
3. Do you have any concerns about the novel Split & Glue layer?
4. How does the reviewer assess the significance and limitations of the paper's contributions?
5. Are there any suggestions for improving the paper, such as including more surveys or discussing online/streaming capabilities? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes an MLP-based neural network, which is designed for speech processing. The method is an alternative architecture to a transformer encoder and is applied to several speech processing tasks (command recognition and speech enhancement). The novel Split & Glue layer is used to capture multi-resolution speech characteristics. The method achieved state-of-the-art performance in both command recognition and speech enhancement tasks.
Other comments
I don't think (Huang et al., 2020) is a representative work for transformer ASR. The following papers are more appropriate:
Dong, Linhao, Shuang Xu, and Bo Xu. "Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition." 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
Karita, Shigeki, et al. "A comparative study on transformer vs rnn in speech applications." 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2019.
It would be better to state how simple the model is with quantitative measures in the abstract and introduction (model size, computational cost, or the number of code lines).
The paper should refer to the Conformer and discuss it; it has become SOTA in many speech processing tasks. Also, historically, many deep-learning-based speech processing methods started from MLPs, but the paper does not have enough surveys.
please discuss the online/streaming capabilities. This is an important function for speech processing.
Review
strengths
novel neural network architecture by revisiting an MLP
simple but effective architecture with fewer parameters than transformers.
shows effectiveness in two different speech processing tasks (especially the speech enhancement result with only 600K parameters looks very strong).
weaknesses
the effectiveness of the Split & Glue layer is similar to the convolution operation with different kernel/stride sizes.
the method cannot be applied to a sequence-to-sequence task like ASR (but the method can be used as an encoder of seq2seq tasks).
although the performance is strong, the tasks are rather simple and limited, and may not attract machine learning researchers in general.
the paper needs more surveys |
ICLR | Title
Speech-MLP: a simple MLP architecture for speech processing
Abstract
Transformers have shown outstanding performance in recent years, achieving state-of-the-art results in speech processing tasks such as speech recognition, speech synthesis and speech enhancement. In this paper, we show that, despite their success, such complex models are not needed for some important speech-related tasks, which can be solved with much simpler and more compact models. Thus, we propose a multi-layer perceptron (MLP) architecture, namely speech-MLP, useful for extracting information from speech signals. The model splits feature channels into non-overlapping chunks and processes each chunk individually. These chunks are then merged together and further processed to consolidate the output. By setting different numbers of chunks and focusing on different contextual window sizes, speech-MLP learns multiscale local temporal dependency. The proposed model is successfully evaluated on two tasks: keyword spotting and speech enhancement. In our experiments, two benchmark datasets are adopted for keyword spotting (Google speech command V2-35 and LibriWords) and one dataset (VoiceBank) for speech enhancement. In all experiments, speech-MLP surpassed the transformer-based solutions, achieving better performance with fewer parameters and lower GFLOPS. Such results indicate that more complex models, such as transformers, are oftentimes not necessary for speech processing tasks. Hence, simpler and more compact models should always be considered as an alternative, especially in resource-constrained scenarios.
1 INTRODUCTION
As in many machine learning disciplines, speech processing is embracing more and more complex models, of which the transformer (Vaswani et al., 2017) is a particular example. It was first proposed to tackle machine translation, and was afterwards successfully applied to multiple research fields such as natural language processing (NLP) (Devlin et al., 2018) and computer vision (CV) (Dosovitskiy et al., 2020). The core of the transformer model is a self-attention mechanism, by which any two elements in a sequence can interact with each other, hence capturing long-range dependency. Considering that speech signals are naturally temporal-dependent, researchers in the speech community recently explored transformer-based models in multiple speech processing tasks, and remarkable performance was reported in speech recognition (Dong et al., 2018; Karita et al., 2019; Huang et al., 2020), speech enhancement (SE) (Kim et al., 2020; Fu et al., 2020), keyword spotting (KWS) (Berg et al., 2021; Vygon & Mikhaylovskiy, 2021) and speech synthesis (Li et al., 2019). Recently, the conformer architecture, which combines convolution and self-attention, achieved excellent success in speech processing tasks and attracted much attention (Gulati et al., 2020).
In this paper, we ask the following question: Do we need complex models such as transformers for certain speech processing tasks?
This question is closely related to the principle of ‘parsimony of explanations’, a.k.a. Occam’s razor (Walsh, 1979). According to this principle, if there is any possibility, we should seek the models that can represent the data with the least complexity (Rasmussen & Ghahramani, 2001; Blumer et al., 1987). However, in public benchmark tests, complex and elaborately designed models are often ranked higher, due to the better reported performance. For example, in the KWS benchmark on Google speech command1 and the SE benchmark on VoiceBank+DEMAND2, transformer-based models are among the top ranks. Although the good performance is encouraging, the increased model complexity implies potential over-tuning and over-explanation, the risk that the Occam’s razor principle intends to avoid.
We therefore attempt to discover the simplest neural architecture that is powerful enough to achieve performance comparable to the best existing models, in particular transformers, while eliminating unnecessary complexity. Our design is based on domain knowledge, in particular three properties of speech signals: (1) temporal invariance, (2) frequency asymmetry, and (3) short-term dependency (Huang et al., 2001; Benesty et al., 2008; Furui, 2018). Based on this knowledge, we build speech-MLP, a simple multi-layer perceptron (MLP) architecture, shown in Fig. 1. Besides the normalization components, the architecture involves simple linear transformations only. The core of the architecture is the Split & Glue layer, which splits the channel dimension into multiple chunks, processes each chunk separately, and finally merges the processed chunks in order to obtain the output. Speech-MLP processes each time frame independently (compatible with temporal invariance), while the splitting & gluing procedure allows different treatments for different frequency bands (compatible with frequency asymmetry) and involves local contexts of multiple scales (compatible with short-term dependency).
We tested the model on two speech processing tasks: keyword spotting with the Google speech command V2-35 and Libriword benchmark datasets; and speech enhancement with the VoiceBank benchmark dataset. Results showed that on both tasks the proposed speech-MLP outperforms complex models, in particular models based on transformers. Such results demonstrate that by utilizing domain knowledge and employing appropriate normalization techniques, it is possible to design simple yet powerful models. In some cases, these simple models even beat complex models on open benchmarks, where complex models are more likely to obtain good performance by careful tuning.
In summary, we proposed Speech-MLP, a simple yet effective neural model to represent speech signals. On the KWS and SE tasks, we demonstrated that this simple model can achieve performance comparable to or even better than transformers, with fewer parameters and less inference time. Our work shows that by taking domain knowledge into account, it is possible to remove unnecessary complexity (e.g., modeling long-range dependency in KWS and SE) in model design, as advocated by Occam’s razor.
2 RELATED WORK
Recent research has shown that a simple model can be as effective as complex and task-specific models such as transformers in some important tasks. In (Tolstikhin et al., 2021), for example, the authors proposed a simple architecture for vision, namely MLP-Mixer. The model receives a sequence of image patches and performs channel-wise and patch-wise linear projections alternately and iteratively. Without using convolutions or self-attention, the Mixer architecture separates the per-location (channel-mixing) and cross-location (token-mixing) operations (Tolstikhin et al., 2021). While the channel-mixing MLPs enable communication between different channels, the token-mixing MLPs allow communication between different spatial locations (tokens). Tested on image classification benchmarks, MLP-Mixer achieved performance comparable to SOTA models, in particular the vision transformer model (Tolstikhin et al., 2021).
In another recent work (Liu et al., 2021), the authors investigated the need for the self-attention mechanism in transformers, proposing an alternative MLP-based architecture, namely gMLP. The model, based on MLP layers with gating, consists of a stack of L identical blocks. Each block comprises a normalization layer, a channel projection, followed by an activation function and a spatial gating unit, followed by another channel projection (Liu et al., 2021). It achieves similar performance when compared to the vision transformer model (Touvron et al., 2021b), being 3% more accurate than the aforementioned MLP-Mixer model with 66% fewer parameters. The model was also successful on language modeling in the BERT setup (Liu et al., 2021), minimizing perplexity as well as Transformers. The authors also found that perplexity reduction was more influenced by the model capacity than by the attention mechanism.
1https://paperswithcode.com/sota/keyword-spotting-on-google-speech-commands 2https://paperswithcode.com/sota/speech-enhancement-on-demand
Inspired by vision transformers (Touvron et al., 2021b; Dosovitskiy et al., 2020), in (Touvron et al., 2021a) the authors apply the skip connection technique from ResNets to MLP layers and propose the so-called Residual Multi-Layer Perceptrons (ResMLP). The model receives non-overlapping image patches, typically 16 × 16. These patches go through a linear transformation in order to attain d-dimensional embeddings. The embeddings are then fed to a sequence of ResMLP blocks to produce a set of d-dimensional output embeddings. An average pooling is applied on the d-dimensional output vectors to represent the image, and a linear classifier is then used to predict the label associated with the image (Touvron et al., 2021a).
Differently from MLP-Mixer, gMLP and ResMLP, CycleMLP can process inputs of arbitrary resolution with linear computational complexity, as its receptive fields are enlarged for context aggregation (Chen et al., 2021). The model is based on the Cycle Fully-Connected Layer (Cycle FC), serving as a generic, plug-and-play transformer-free architecture. Results show CycleMLP outperforming existing MLP-like models on ImageNet classification, and achieving good performance on object detection, instance segmentation and semantic segmentation (Chen et al., 2021).
The aforementioned research highlights that, despite their success, convolution and self-attention mechanisms are not mandatory for some CV and NLP tasks, and can be replaced by simpler layers such as MLP with a customized design. Although typical convolution operations are not used by these MLP solutions (but rather 1 × 1 convolution as pointed out in (Chen et al., 2021) and (Tolstikhin et al., 2021)), these MLP approaches are inspired by CNN architectures for computer vision related tasks. Their building block, nonetheless, is similar and based on applying linear transformation on spatial locations and feature channels.
Although inspired by these new MLP architectures, speech-MLP focuses on speech signals rather than images. This implies processing different input resolutions, given the nature of the input signal. The split & glue layer is very similar to a separable CNN (Chen et al., 2018), if we regard the frame-independent processing as 1-D convolution in time. In particular, it is essentially a group-wise CNN (Romero et al., 2020) with different kernels for each group. However, from the perspective of feature learning, the entire split & glue is an MLP if our focus is a particular frame (within a context). That is why a 1-D convolution is often called a time-delay neural net (TDNN) (Waibel et al., 1989). We follow this convention and name our structure speech-MLP.
A key motivation of the speech-MLP structure is to respect the properties of speech signals. It should be emphasized that almost all successful techniques in speech processing take these properties into account: for instance, the hidden Markov model (HMM) assumes short-term dependency (Rabiner & Juang, 1986), TDNN assumes temporal invariance (Waibel et al., 1989), and frequency asymmetry is explicitly implemented in the famous MFCC feature (Mermelstein, 1976). In this paper, the role of knowledge of speech signals is to help remove unnecessary complexity, i.e., to seek the minimum structure that reflects these basic properties.
Finally, MLP is not new in speech processing; in fact the neural models used in early days in speech processing are all general MLPs, e.g., (Bourlard & Morgan, 2012). Speech-MLP is a special designed MLP, by taking the properties of speech signals into account.
3 METHODOLOGY
Our model, referred to as speech-MLP, is presented in Figure 1. Note that for a given speech waveform, a sequence of acoustic features, denoted by X = {x1, x2, ..., xn}, are first extracted. These features are then fed into N stacked speech-MLP blocks and the output of the last speech-MLP block is a speech representation that needs to undergo task-specific layers in order to perform specific tasks, such as the ones addressed in this study: SE and KWS.
Inside of each speech-MLP block, there are three components: (1) a linear transformation for a pre-projection of the extracted acoustic features; (2) a Split & Glue layer for processing the projected acoustic features while addressing frequency asymmetry and temporal dependency, and (3) another linear transformation for post-projection of the final representation. Two residual connections are also adopted to encourage gradient propagation. The first one maps the input features onto the output of the last linear transformation (i.e., the output of the post-projection operation). The second residual connection maps the output of the first linear transformation (i.e., the output of the pre-projection operation) onto the output of the Split & Glue layer. Note that normalization tech-
niques are also applied to regulate the feature distribution (by layer norm) and temporal variance (by instance norm). In the next section, we give more details on the Split & Glue layer, followed by a discussion on the normalization methods adopted in this work.
3.1 SPLIT & GLUE
Figure 2 depicts how the Split & Glue layer operates. The sequence of acoustic features is denoted by X ∈ RH×T , with T and H being, respectively, the length and the number of channels of the input sequence. The first step is to split X into K non-overlapping chunks, as illustrated in both Figure 1 and Figure 2. The split referred to as X → {X1, .., Xk, .., XK}, is performed along the channel dimension. In our experiments, the channel dimension of each chunk is considered the same, leading to Xk ∈ RH/K×T . For each chunk, Xk, a context expansion is then performed through the so-called unfolding operations. This results in context-expanded chunks, denoted by Xkw ∈ Rw kH/K×T , where wk is the size of the context window induced by the unfolding operation.
Note that the number of chunks K and the window size wk can be arbitrarily selected for each chunk. This flexibility allows us to represent multi-scale contexts by adopting different window sizes for different chunks. In Figure 2, for instance, the input channels are split into two chunks, and the window sizes are set to 3 and 5, respectively. This leads to the model learning from small and large contexts simultaneously.
The unfolded chunk Xkw is projected by a linear transformation, leading to a new representation for the initial chunk, Y k ∈ RĤ×T , where Ĥ could be set arbitrary and is called the number of Glue channels. We highlight that the linear transformation used in the above chunk-wise operation is shared across all the time steps for a single chunk, and each time frame is processed independently. This setting reduces the number of parameters and is compatible with the temporal invariance property of speech signals. Nevertheless, different weight parameters are adopted for different chunks, to provide sufficient flexibility.
Finally, all the learned speech representations, Y i, are concatenated along the channel dimension, forming a glued feature matrix Y G = {Y 1, Y 2, ..., Y K}. Following, another linear transformation
is applied in order to obtain the output feature Y ∈ RH×T . Again, the linear transformation is shared across all the time steps, to reflect temporal invariance.
3.2 NORMALIZATIONS
Normalization plays an important role in our speech-MLP model. We employed two normalization approaches: (1) layer normalization (LN) (Ba et al., 2016) and (2) instance normalization (IN) (Ulyanov et al., 2016).
Layer normalization is applied across the channel dimension at each time step. Thus, it computes statistics (mean and variance) on each column of X ∈ RH×T , and then uses these statistics to normalize the elements in the same column. With this normalization technique, the distribution of the feature vector at each time step is regularized.
Instance normalization is used to perform per-channel normalization. That is, the statistics are computed on each row of X ∈ RH×T and applied across the time steps to normalize the elements of each row. Thus, the temporal variation of each channel is normalized. Note that IN extends the conventional cepstral mean normalization (CMN) approach (Liu et al., 1993), by normalizing not only acoustic features, but also features produced by any hidden layer.
Empirically, we found that IN was only effective for the SE task while the LN was more important for the KWS task. Therefore, we apply LN only for KWS and IN for SE.
4 EXPERIMENTS
We evaluate the proposed speech-MLP model in two speech processing tasks: speech enhancement and keyword spotting. In this section, we introduce these tasks and their respective datasets, used in our experiments, followed by experimental settings, experimental results, and the ablation study.3
3The code will be available on github. To respect the double-blind review, the link will be sent to the reviewers when the discussion is open.
4.1 KEYWORD SPOTTING
Keyword spotting aims at detecting predefined words in speech utterances (Szöke et al., 2005; Mamou et al., 2007; Wang, 2010; Mandal et al., 2014). In our experiments, we explore two KWS datasets: (1) the Google speech commands V2 dataset (Warden, 2018), and (2) the LibriWords (Vygon & Mikhaylovskiy, 2021). The Google speech commands V2 dataset (here, referred to as V2-35) consists of 105, 829 utterances of 35 words, recorded by 2,618 speakers. The training, validation and test sets contain 84, 843, 11, 005 and 9, 981 utterances respectively. The LibriWords dataset, larger and more complex, is derived from 1000-hours of English speech from the LibriSpeech dataset (Panayotov et al., 2015). Signal-to-word alignments were generated using the Montreal Forced Aligner (McAuliffe et al., 2017) and are available in (Lugosch et al., 2019). The averaged duration of the keywords are 0.28 seconds. The provider defined four benchmark tests, based on the number of target keywords: LW-10, LW-100, LW-1K and LW-10K, where the target keywords are 10, 100, 1k and 10k respectively. More details on this dataset are presented in Appendix.
4.1.1 SETTINGS
We used the same architecture in all the KWS tasks, except that the dimension of the output layer was adapted to the number of keywords, as shown in Table 1. Note that we set the window size w to be {3, 7, 9, 11}. This allows us to exploit multi-scale contexts. Additionally, we set the stride to be 1 and appropriately set the padding list p to ensure that all the expanded features are in the same length and equal to that of the input feature.
Prior to the feature extraction step, each speech recording is resampled to 16 kHz. Then, 40- dimensional Mel-Frequency Cepstral Coefficients (MFCC) are attained as the acoustic features. The MFCC features are then projected target dimensional feature vector by a linear layer and then forwarded to speech-MLP blocks. The output features are then passed through a max-pooling operation collects the information across time steps. Finally, two linear layers with a GELU activation function in the middle and a softmax activation are employed in order to attain the posterior probabilities that the input speech belongs to each keyword. For regularization we used SpecAugment (Park et al., 2019), dropout (Baldi & Sadowski, 2013), and label smoothing (Müller et al., 2019) were used to prevent overfitting.
Three model architectures have been verified in all the experiments: a 180k small model denoted by Speech-MLP-S, a 480k large model denoted by Speech-MLP-L, and a 2375K extra large model
denoted by Speech-MLP-XL. The three models are different in the number of channels of the hidden layer (i.e., after the pre-projection) and the channels within the Split & Glue block (i.e., channels after Linear A, and layers in Fig. 2), as shown in Table 1.
For the experiments on the Google speech commands dataset, we applied the following data augmentation techniques: time shifting, audio re-sampling and noise perturbation: as in (Berg et al., 2021; Vygon & Mikhaylovskiy, 2021). After augmentation, the data was increased to 10 times the size of V2-35. We set the batch size to be 256 and trained the model for 100 epochs on 4 cards V100 Nvidia GPU.
For the experiments on the LibriWords, the batch size was set to 1024, and we trained the model for 20 epochs on 2 cards V100 Nvidia GPU which showed to be enough for this dataset. The training schemes were set differently simply because Libriwords is huge and long-term training is not economic.
The performance of the proposed model is compared to three benchmarks. The first one referred to as Att-RNN, is a CNN-LSTM architecture with the attention mechanism introduced in (de Andrade et al., 2018). The model has approximately 202k trainable parameters and attains reasonable performance. Another recent solution, based on a transformer architecture is adopted as the second benchmark (Berg et al., 2021). We refer to this benchmark as KWT-K where K refers to different size of models. Res15 (Vygon & Mikhaylovskiy, 2021), another recent work based on ResNet reports high performance on both V2-35 and Libriwords. The authors reported results with two configurations, one trained by cross entropy (Res15-CE) and the other based on triple loss (Res15-TL). We use them as the third benchmark.
4.1.2 RESULTS
Table 2 presents the results of the benchmarks discussed in the previous section and the performance of the proposed Speech-MLP, the experimental results are presented by mean value and 95% confidence of 5 trials with different random seeds on V2-35. It can be observed that the Speech-MLP models outperform all the benchmarks with comparable model sizes. Note that the small version of speech-MLP, which contains less than half of the parameters of its large version, can still maintain reasonable performance, providing higher accuracy than most benchmarks. The performance of our solution on the Libriword dataset is even more significant. It outperforms Res15-CE and Res15-TL while being able to maintain performance across all LibriWord dataset sizes. Our conjecture is that by the knowledge-driven design, we can use the parameters more efficiently, which allows for the use of smaller models to handle large-scale tasks.
4.1.3 ABLATION STUDY
To investigate how each module impacts the performance of speech-MLP, we conducted an ablation study, in order to fair compare each model we use fixed random seed 123 in all ablation study experiments, we show that window list to {3} equivalent to use TDNN with kernel size to 3, and window list to {3, 3, 3, 3} equivalent the TDNN with 4 groups convolution operation with kernel size
to 3 in split & glue layer, and our proposed speech-MLP with a variance of window sizes outperform these existing solutions. We particularly focus on the chunk splitting, specially the number of chunks and the context window of each chunk. They are the only hyperparameters that we need to design in speech-MLP, by using domain knowledge.
The results are reported in Table 3. It can be observed that the setting for the number of chunks and the context window does matter. A longer context window is clearly beneficial, and setting different context windows for different chunks can further improve the performance. This confirms our conjecture that contextual information is important for representing speech signals, and exploiting multi-scale contextual information is especially important.
An interesting comparison is between the Speech-MLP-S model with window {3, 7, 9, 11} and the Speech-MLP-L model with window {1}. The parameters of the two models are comparable, but the latter model does not involve any chunk splitting and context expansion. The clear advantage of the Speech-MLP-S model demonstrated that the performance improvement with larger and multi-scale context windows (ref. performance of Speech-MLP-S or Speech-MLP-L with different windows) is due to the newly designed Split& Glue structure, rather than the increase in parameters. This in turn demonstrated the value of domain knowledge: if we can exploit it appropriately, it is possible to design very parsimonious models.
4.2 SPEECH ENHANCEMENT
Speech enhancement, which aims at inferring clean speech from its corrupted version (Benesty et al., 2006; Loizou, 2007; Das et al., 2020), is another fundamental task used to evaluate our model. We choose the Voicebank+Demand datasetValentini-Botinhao et al. (2016) to perform the SE test. It contains clean speech signals from the Voicebank dataset, includes 28 speakers for training and 2 speakers for testing. Noise signals of 40 types from the DEMAND atabase Thiemann et al. (2013) were selected and were mixed into the clean speech. After the mixing, the training set and testing set involve 11,572 and 824 clips respectively. We split the training utterances into segments of 3 seconds without overlap. This resulted into 17,989 training samples, each sampling consisting of a noise corrupted segment and the corresponding clean segment. The goal of SE is to learn a mapping function that converts a noisy segment to a clean segment.
4.2.1 SETTINGS
The architecture of our SE model is shown in Table 1. As input, the model receives a 257- dimensional log-magnitude spectrum. The extracted features are first projected by a linear layer and reduced to 256-dimensional feature vector, which are then forwarded to 10 stacked speechMLP blocks. The output from the last speech-MLP block is re-projected to 257-dimensional feature vector. After a hard-sigmoid function Courbariaux et al. (2015), the value of the output units correspond to the ratio masks on the 257-dimensional input log-magnitude spectrum. The clean speech signal is estimated by applying the ratio masks onto the noisy spectrum and reusing the noisy phase.
More details of the settings can be found in Appendix. The performance of the proposed model is compared to six benchmarks. Note that we focus on models trained without extra data, or extra models for knowledge distillation. The reader can find details on these enhancement methods in the references presented in Table 4. Following the convention on this test set, we report the results of four metrics: PESQ, BAK, SIG and OVL (Hu & Loizou, 2007).
4.2.2 RESULTS
The results are shown in Table 4, where we choose 6 baseline systems for comparison. Among these systems, T-GAS (Kim et al., 2020) is based on a transformer model. Similar to speech-MLP, the authors of T-GAS also noticed the importance of local context and designed an annealing approach to encourage attention on neighbour frames. However the attention is still global in nature, and the improvement with T-GAS was still attributed to the capacity of transformers in learning (not so) long-range dependency. Note that the size of the T-GAS model was not reported in the original paper, so we made an estimation according to the structure description.
The results shown in Table 4 demonstrated that our speech-MLP model outperformed all the six baselines. In particular, without modeling any long-range dependency, it outperformed T-GSA by almost 100 times smaller of model size. This comparative results challenge the assumption that the better performance of T-GSA over other baselines is due to its capacity of capturing long-range dependence in speech. Moreover, the model size of Speech-MLP is much smaller than T-GSA, and due to the concise architecture, the training is simple and fast. It provides a strong support for our argument that complex models are not necessarily the best, and a knowledge-based model may easily beat complex models with parsimonious parameters.
5 CONCLUSIONS
In this paper, we propose the speech-MLP model, a simple MLP architecture for speech processing tasks. Our main motivation was to find a compact solution that eliminates unnecessary complexity while being able to capture essential information from speech signals. By utilizing domain knowledge of speech, we designed a simple yet effective structure that involves only linear transform and normalization. The main ingredient is a split & glue structure, which splits input features into multiple chunks and makes them accounting for different contexts. This knowledge-based design reflects several properties of speech signals, including temporal variance, frequency symmetry, and short-term dependency. The experimental results on keyword spotting and speech enhancement demonstrated that speech-MLP is highly effective: with much less parameters and computation, it can beat larger and more elaborately designed models including transformers.
Much work remains, for example: how to design better chunking and contexts; how to make the model even smaller (e.g., by removing unnecessary residual connections); and how to trade off complexity in chunks against depth. The ultimate goal is to design a lightweight, sufficiently powerful and generalizable component for speech feature extraction. We believe such a knowledge-driven feature extractor benefits general speech processing tasks, such as speech recognition and understanding.
6 REPRODUCIBILITY STATEMENT
We made the following efforts to ensure that the results reported in the paper can be reproduced by other researchers.
• We will release the code on github, so that everyone can download it.
• The datasets used in this paper are all publicly available to researchers.
• We documented the required python environment and provided a step-by-step guide for the reproduction.
• We fixed the random seed in the code, so that others can reproduce our results exactly.
A APPENDIX A: DETAILS OF KWS EXPERIMENT
In this section, we present the details of the KWS experiment. We start with the system architecture, followed by the data preparation. We then present the training methods and the hyperparameters used in the experiments.
A.1 SYSTEM ARCHITECTURE
Prior to feature extraction, speech signals are resampled to 16 kHz if needed. Then, we use librosa4 to extract 40-dimensional MFCC features. The parameters used to extract these features are presented in Table 5. Global mean and variance normalization is also applied to the extracted features, with the statistics calculated on the respective training set of each task. After that, the features are fed into the model shown in Figure 3.
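As an illustration of this feature-extraction step, a sketch is shown below. The MFCC window and hop settings of Table 5 are not reproduced here, so the librosa defaults act as placeholders, and train_mean/train_std are assumed to be precomputed on the training set of the task at hand.

```python
import numpy as np
import librosa

def extract_features(path, train_mean, train_std, sr=16000, n_mfcc=40):
    y, _ = librosa.load(path, sr=sr)                        # resample to 16 kHz if needed
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (40, frames)
    # global mean/variance normalization with training-set statistics
    return (mfcc - train_mean[:, None]) / (train_std[:, None] + 1e-8)
```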
Specifically, a linear transformation (Linear 0) operates on the normalized MFCC features, projecting them to 128-dimensional embeddings. These embeddings are then forwarded to stacked Speech-MLP blocks (4 blocks in our KWS study) to extract multiscale contextual representations. For each speech utterance, the last Speech-MLP block outputs a sequence of context-rich representations, and a max pooling operation aggregates this sequence into a single utterance-level representation. This representation is then passed to a 128 × 128 linear transformation and a GELU nonlinear activation function. It is further processed by a 128 × M linear transformation and a softmax activation, where M is the number of keywords. The final output of the above process is a vector that represents the posterior probabilities that the original speech utterance belongs to each keyword.
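The classification head described above can be sketched as follows; it assumes the (batch, frames, 128) output of the last Speech-MLP block, and the class and variable names are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KWSHead(nn.Module):
    """Utterance-level head: max pooling + two linear layers."""
    def __init__(self, dim=128, n_keywords=35):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, n_keywords)

    def forward(self, x):                        # x: (batch, frames, dim)
        x, _ = x.max(dim=1)                      # max pooling over time
        x = F.gelu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=-1)    # keyword posteriors
```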
A.2 DATA PREPARATION
A.2.1 GOOGLE SPEECH COMMANDS
The Google speech commands V2-35 dataset contains 35 classes. The data can be obtained at the provider's website5. There are 84,843 training samples in total, with strictly no overlap between the training, validation and test sets.
4. https://librosa.org/doc/latest/index.html
5. http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz
Data augmentation techniques have been used to increase the training data by 9 times. Combined with the original data, we have 848,430 training samples in total. We fixed the random seed to 59185 when producing the augmented samples. The following augmentation strategies are adopted in this work:
• Noise perturbation: the noise perturbation script provided by the organizer of the DNS challenge is used to add background noise to clean speech6. The SNR factor is randomly sampled from [5, 10, 15] with equal probabilities;
• Time shifting: time shifting is applied in the time domain. It shifts the waveform by a time-shift factor t sampled from [−T, T]. In our experiments we set T = 100. When t < 0, the waveform is shifted left by |t| samples and |t| zeros are padded on the right side; when t > 0, the waveform is shifted right by t samples and t zeros are padded on the left side;
• Resampling: the resample function from scipy (scipy.signal.resample) is used to perform resampling augmentation, which changes the sampling rate slightly. Specifically, given a parameter R, a resampling factor r is drawn from [1 − R, 1 + R], and the augmented sample is obtained by changing the sampling rate to r × 16000. R is set to 0.15 in our experiments (see the sketch after this list).
6. We use the segmental snr mixer function from https://github.com/microsoft/DNS-Challenge/blob/master/audiolib.py
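The time-shifting and resampling strategies can be sketched as follows (noise perturbation is omitted because it relies on the DNS-challenge script referenced above); the function names and the uniform sampling of the augmentation factors are our assumptions.

```python
import numpy as np
from scipy.signal import resample

def time_shift(wav, T=100):
    t = np.random.randint(-T, T + 1)
    if t < 0:                                    # shift left, pad |t| zeros on the right
        return np.concatenate([wav[-t:], np.zeros(-t, dtype=wav.dtype)])
    if t > 0:                                    # shift right, pad t zeros on the left
        return np.concatenate([np.zeros(t, dtype=wav.dtype), wav[:-t]])
    return wav

def random_resample(wav, R=0.15):
    r = np.random.uniform(1 - R, 1 + R)          # resampling factor in [1-R, 1+R]
    return resample(wav, int(len(wav) * r))      # i.e., change the rate to r x 16000
```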
After the above augmentation, the original speech and the augmented speech are further corrupted by SpecAug (Park et al., 2019). The setting of SpecAug is shown in Table 5. Note that SpecAug does not enlarge the dataset.
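For reference, SpecAugment-style masking can be applied with torchaudio as in the sketch below; the mask widths are placeholders, since the actual values follow Table 5.

```python
import torchaudio

# placeholder widths; the values used in the paper are listed in Table 5
freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=7)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=20)

def spec_augment(features):                  # features: (batch, n_mfcc, frames) tensor
    return time_mask(freq_mask(features))
```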
A.2.2 LIBRIWORDS
The LibriWords dataset is a larger and more complex dataset. The samples are extracted from the json files provided by the dataset providers7. We follow the task definitions of the dataset provider, and the details are given below.
• LibriWords 10 (LW-10): this task contains 10 keywords, including “the”, “and”, “of”, “to”, “a”, “in”, “he”, “I”, “that”, and “was”. There are 1,750k samples in total, and they are split into a training set (1,400k), a validation set (262,512) and a test set (87,501).
• LibriWords 100 (LW-100): a more challenging task that contains 100 keywords. There are 1,512k training samples, 189,010 validation samples and 188,968 test samples, totalling 1,890k samples.
• LibriWords 1000 (LW-1K): with increased difficulty, this task contains 1000 keywords. The training set involves 2,178k samples, and the validation set and the test set contain 272,329 samples and 271,858 samples respectively.
• LibriWords 10000 (LW-10K): the most challenging task presents 9,998 keywords. There are 2,719k training samples, 339,849 validation samples and 335,046 test samples.
Given the large number of samples, data augmentation was not required for this task. We only performed SpecAug (Park et al., 2019) based on the settings presented in Table 5.
A.3 TRAINING PARAMETERS
The parameters used during training are specified in Table 5. Further details are presented below.
• The cross entropy between the model prediction and the ground truth is used as the loss function;
• The optimizer used in all the experiments is AdamW. The initial learning rate is set to 0.01, and cosine annealing is applied to adjust the learning rate from 0.01 to 0.0001 (see the sketch after this list);
• Dropout is applied onto the residual connections within the speech-MLP block, with the dropout rate set to 0.1;
• Label smoothing is employed to prevent the over-confidence problem. The smoothing factor is set to 0.1;
• In the V2-35 experiment, the models are trained for 100 epochs with 10 warmup epochs. In the LibriWords experiment, the models are trained for 20 epochs without warmup;
• In both experiments, the model is evaluated on the validation set after each epoch, and the checkpoint that performs best on the validation set is used to report the performance on the test set;
• We fix the random seed to be 123 in all the ablation study experiments, for the sake of reproducibility.
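A minimal sketch of the training configuration referenced in the optimizer item above is given here; the model and the total number of scheduler steps are passed in by the caller, and the warmup phase is not shown.

```python
import torch
import torch.nn as nn

def build_training(model, total_steps):
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)       # label smoothing factor 0.1
    optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=total_steps, eta_min=1e-4)             # anneal 0.01 -> 0.0001
    return criterion, optimizer, scheduler
```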
B APPENDIX B: DETAILS OF SE EXPERIMENT
B.1 SYSTEM ARCHITECTURE
The model architecture has been presented in Figure 4. The primary goal is to learn a mapping function that converts the noisy magnitude spectrum to the clean magnitude spectrum. The model output predicts soft ratio masks, which can be applied to the noisy magnitude spectrum to estimate the magnitude spectrum of the clean speech. Combining the denoised magnitude spectrum with the phase spectrum of the original noisy speech, one can obtain the denoised waveform by inverse STFT.8
7. https://github.com/roman-vygon/triplet_loss_kws
8. We used the STFT class implemented in the torch-mfcc toolkit (https://github.com/echocatzh/torch-mfcc).
More specifically, a 257-dimensional log-magnitude spectrum is first extracted from the noisy speech as the acoustic features, following the configuration shown in Table 6. A linear layer then transforms the input features into 256-dimensional vectors PreX. The transformed feature vectors are forwarded to 10 Speech-MLP blocks, and the output from the last block, denoted by PostX, involves multiscale contextual information. Afterwards, a residual connection adds PreX and PostX together, and instance normalization is applied to regulate temporal variance. Finally, another linear transform and a non-linear HardSigmoid activation project the normalized features to a masking space with the same dimensionality as the input feature, corresponding to the ratio mask M ∈ [0, 1] on the noisy magnitude spectrum.
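As a rough illustration of the reconstruction step, the snippet below applies the predicted mask and reuses the noisy phase; the STFT parameters are placeholders for the configuration in Table 6, and the mask is assumed to be arranged as (frequency, frames).

```python
import torch

def reconstruct(noisy_wav, mask, n_fft=512, hop=256):
    spec = torch.stft(noisy_wav, n_fft, hop, return_complex=True)  # (freq, frames)
    mag, phase = spec.abs(), torch.angle(spec)
    est_spec = torch.polar(mask * mag, phase)     # masked magnitude + noisy phase
    return torch.istft(est_spec, n_fft, hop)      # denoised waveform
```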
B.2 LOSS FUNCTIONS
The loss function of our model is computed based on the discrepancy between the denoised speech Xd and the clean speech Xc. The entire loss consists of two parts: (1) the distance on the power-compressed magnitude spectrum, denoted by Lmag, and (2) the distance on the power-compressed STFT, denoted by Lstft. We use a single frame to demonstrate this computation; the actual loss averages L over all frames.
D_{real}, D_{imag} = \mathrm{STFT}(X_d), \qquad C_{real}, C_{imag} = \mathrm{STFT}(X_c)
D_{mag} = \sqrt{D_{real}^{2} + D_{imag}^{2}}, \qquad C_{mag} = \sqrt{C_{real}^{2} + C_{imag}^{2}}
L_{mag} = \left(C_{mag}^{0.3} - D_{mag}^{0.3}\right)^{2}
D_{real}^{0.3} = \frac{D_{mag}^{0.3}}{D_{mag}} \times D_{real}, \qquad D_{imag}^{0.3} = \frac{D_{mag}^{0.3}}{D_{mag}} \times D_{imag}
C_{real}^{0.3} = \frac{C_{mag}^{0.3}}{C_{mag}} \times C_{real}, \qquad C_{imag}^{0.3} = \frac{C_{mag}^{0.3}}{C_{mag}} \times C_{imag}
L_{stft} = \left(C_{real}^{0.3} - D_{real}^{0.3}\right)^{2} + \left(C_{imag}^{0.3} - D_{imag}^{0.3}\right)^{2}
L = 10 \times L_{mag} + L_{stft}
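For clarity, a PyTorch sketch of this loss is given below; it averages over all time-frequency bins, and the STFT parameters are placeholders rather than the exact configuration of Table 6.

```python
import torch

def se_loss(x_denoised, x_clean, n_fft=512, hop=256, power=0.3):
    D = torch.stft(x_denoised, n_fft, hop, return_complex=True)
    C = torch.stft(x_clean, n_fft, hop, return_complex=True)
    D_mag = D.abs().clamp_min(1e-8)
    C_mag = C.abs().clamp_min(1e-8)
    l_mag = ((C_mag ** power - D_mag ** power) ** 2).mean()        # magnitude term
    D_c = D / D_mag * D_mag ** power                               # power-compressed STFT
    C_c = C / C_mag * C_mag ** power
    l_stft = ((C_c.real - D_c.real) ** 2 + (C_c.imag - D_c.imag) ** 2).mean()
    return 10 * l_mag + l_stft
```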
B.3 TRAINING PARAMETERS
The parameters for model training are summarized in Table 6. Specifically, the model was trained for 1000 epochs using the AdamW optimizer. The initial learning rate was set to 0.01, and a cosine annealing learning-rate scheduler was used to adjust the learning rate from 0.01 to 0.0001 in 3000 steps. Warmup was applied for 30 epochs. The model was evaluated on the evaluation set every epoch, and the best checkpoint (in terms of PESQ) was saved. The results are reported in terms of four metrics: PESQ, BAK, SIG and OVL (Hu & Loizou, 2007).9
C APPENDIX C: PSEUDO CODE FOR SPLIT & GLUE
9. The evaluation script is pysepm (https://github.com/schmiph2/pysepm).
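To complement the pseudo code below, here is a minimal PyTorch sketch of the same Split & Glue computation; the "same-length" padding scheme, the tensor layout and the class name are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitAndGlue(nn.Module):
    """Sketch of Split & Glue for inputs of shape (batch, H, T)."""
    def __init__(self, H, H_glue, windows=(3, 7, 9, 11)):
        super().__init__()
        assert H % len(windows) == 0
        self.K, self.windows = len(windows), windows
        chunk = H // self.K
        self.proj_a = nn.ModuleList(
            nn.Linear(w * chunk, H_glue) for w in windows)     # projection A per chunk
        self.proj_b = nn.Linear(self.K * H_glue, H)            # projection B (glue)

    def forward(self, x):                                      # x: (batch, H, T)
        outs = []
        for k, (xk, w) in enumerate(zip(x.chunk(self.K, dim=1), self.windows)):
            pad = (w - 1) // 2
            xk = F.pad(xk, (pad, w - 1 - pad))                 # keep T output frames
            xk = xk.unfold(2, w, 1)                            # (batch, H/K, T, w)
            xk = xk.permute(0, 2, 1, 3).flatten(2)             # (batch, T, w*H/K)
            outs.append(self.proj_a[k](xk))                    # (batch, T, H_glue)
        y = F.gelu(torch.cat(outs, dim=-1))                    # glued channels
        return self.proj_b(y).transpose(1, 2)                  # back to (batch, H, T)
```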
Algorithm 1 Pseudo code for Split & Glue
Input sequence: X ∈ R^{H×T}: sequence of acoustic features of T frames and H dimensions
Input parameter: w = {w_1, w_2, ..., w_K}: window sizes of the K chunks
Input parameter: p = {p_1, p_2, ..., p_K}: padding definition for the K chunks
Input parameter: s: stride in context expansion
Output: Y ∈ R^{H×T}: sequence of output features of T frames and H dimensions
Ensure: H % K = 0
{X^1, ..., X^K} = chunk(X, H, K)  ▷ Split X into K pieces along the channel dimension
for k in range(K) do
    X^k_w = unfold(X^k, w_k, p_k, s)  ▷ Context expansion by unfolding
    Y^k = W^k_A X^k_w + b^k_A  ▷ Linear projection A for each chunk, where W^k_A ∈ R^{Ĥ×(w_k·H/K)}
end for
Y^G = [Y^1; Y^2; ...; Y^K]  ▷ Concatenate Y^k along the channel dimension
Y^G = GELU(Y^G)
Y = W_B Y^G + b_B  ▷ Linear projection B to glue the chunks, where W_B ∈ R^{H×(K·Ĥ)}
| 1. What is the main contribution of the paper in terms of speech signal processing?
2. What are the strengths of the proposed architecture, particularly in capturing local temporal dependencies?
3. What are the weaknesses of the paper regarding the ablation study and the choice of parameters?
4. Do you have any suggestions for improving the architecture or exploring different approaches?
5. Are there any minor issues or typos in the paper that could be corrected? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a simple architecture based on the multi-layer perceptron for extracting information from speech signals. The architecture is based on a new layer called split and glue and can capture multi-scale local temporal dependencies. It is evaluated on two different problems, keyword spotting and speech enhancement, and achieves state-of-the-art performance.
Review
The paper is very well written, it is clearly motivated and presents impressive results with such a simple architecture. If the code becomes publicly available as the authors promised it will be very useful for the research community.
The ablation study is useful but it would be more complete if additional parameters were investigated, e.g., number of blocks. In Table 1 they are set to 4 and 10 somehow arbitrarily. In Table 3, more window sizes can be considered, e.g., 7 and 9. It is also not clear how the number of chunks impacts performance in Table 3.
Why not consider some intermediate architectures between small and large? It would be useful to report results for a M(edium) architecture as well.
Also, have the authors considered other ways to create the chunks, e.g., across the time dimension?
Some typos: Appendix B.3 “Specificaaly”; A.2.2: closing quotation marks are used as opening quotation marks. |
ICLR
1. What is the focus and contribution of the paper on MLP architecture for speech processing?
2. What are the strengths of the proposed approach, particularly in its simplicity and modularity?
3. What are the weaknesses of the paper, especially regarding its similarity to convolutional neural networks?
4. Do you have any concerns about the experimental setup and comparisons with other works?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The authors have proposed a new general-purpose MLP architecture for speech processing and learning speech representation for tasks such as keyword spotting and speech enhancement. The proposed model processes the spectral representation of speech in multiple frequency/channel bands ('split' operations). The unfolding operation expands the context of each banded signal and applies linear transformations to it. These transformations are not shared across the chunks/bands which allows the model to learn chunk-wise relevant representations. Further, the unfolding procedure (context expansion) provides the benefit of learning segmental and supra-segmental properties of speech. The transformed outputs are concatenated together which is referred to as the 'glue' operation. Finally, the learned representations are passed into task-specific layers for generating output predictions. There are two residual connections in each split and glue block which ensures that there is a proper flow of gradients during the backpropagation to train the model.
The authors demonstrate the performance of the proposed MLP architecture on two problems, namely keyword spotting and speech enhancement. The model shows improvement over the state-of-the-art baseline techniques on multiple datasets (Google V2 and LibriWords) for keyword spotting. The ablation studies show how the choice of multiple linear operator kernels in 'split and glue' performs better than the no-split model. The speech enhancement experiment has been carried out on the VoiceBank+Demand dataset, and the authors show improvement by the proposed technique across multiple metrics and baseline models. Therefore, the main contribution of this paper is to show that MLP architectures that are heuristically driven and derived from domain knowledge can outperform state-of-the-art models for various prediction tasks.
Review
Strengths: The proposed model is relatively simple and easy to implement. It leverages some of the important characteristics of human speech such as temporal invariance, frequency asymmetry and short-term stationarity. The model is modular and can be easily integrated into any existing pipeline to process the features in a desired manner. Finally, it shows improvements on multiple tasks and across multiple datasets.
Weaknesses: This model appears to be convolution in disguise. The splitting operation is akin to breaking down the signal into multiple frequency bands and learning a convolution kernel for each band separately, i.e, separable convolutions. The unfolding operation or context expansion is where the model resembles the convolutional structure the most. It is similar to creating a circulant matrix and applying linear transformation to the expanded signal. The linear transformation operator is analogous to the convolutional kernel. Therefore, the authors, unknowingly, are proposing a convolutional neural network with residual connections and labelling it as 'split and glue' model. Further, convolutions can actively take advantage of temporal invariance or short-term stationarity of speech signal which seems to be one of the main drivers of performance gains in this model. Finally, the authors have not clarified if the data augmentation was done for the baseline models or not. The augmentation procedure can have a huge impact on the generalization of neural networks.
The experiments on keyword spotting use cross entropy loss for backpropagation. My suggestion to the authors would be to use the triplet loss for training as it outperforms the entropy based model in baseline comparisons. Additionally, the baseline methods in the speech enhancement task are mostly generative models which are perhaps not trained in a supervised setting. Therefore, comparison with some recently proposed supervised methods for speech enhancement would provide a good sense of the performance improvements. In the ablation study, it would be valuable for the speech and machine learning community to see how the proposed split and glue model performs without the outer residual connection in each block. While the paper compares the proposed approach to a convolutional model (Res-15), the presence of residual connections (inner+outer) and normalization (layer/instance) adds a lot of confounders in the analysis. Finally, a statistical test or error bars on the speech enhancement metrics will help validate the claims made by the authors. |
ICLR | Title
Speech-MLP: a simple MLP architecture for speech processing
Abstract
Transformers have shown outstanding performance in recent years, achieving state-of-the-art results in speech processing tasks such as speech recognition, speech synthesis and speech enhancement. In this paper, we show that, despite their success, such complex models are not needed for some important speech related tasks, which can be solved with much simpler and compact models. Thus, we propose a multi-layer perceptron (MLP) architecture, namely speech-MLP, useful for extracting information from speech signals. The model splits feature channels into non-overlapped chunks and processes each chunk individually. These chunks are then merged together and further processed to consolidate the output. By setting different numbers of chunks and focusing on different contextual window sizes, speech-MLP learns multiscale local temporal dependency. The proposed model is successfully evaluated on two tasks: keyword spotting and speech enhancement. In our experiments, two benchmark datasets are adopted for keyword spotting (Google speech command V2-35 and LibriWords) and one dataset (VoiceBank) for speech enhancement. In all experiments, speech-MLP surpassed the transformer-based solutions, achieving better performance with fewer parameters and lower GFLOPS. Such results indicate that more complex models, such as transformers, are oftentimes not necessary for speech processing tasks. Hence, simpler and more compact models should always be considered as an alternative, especially in resource-constrained scenarios.
1 INTRODUCTION
As in many machine learning disciplines, speech processing is embracing more and more complex models, where transformer (Vaswani et al., 2017) is a particular example. It was first proposed to tackle machine translation, and afterwards was successfully applied to multiple research fields such as natural language processing (NLP) (Devlin et al., 2018) and computer vision (CV) (Dosovitskiy et al., 2020). The core of the transformer model is a self-attention mechanism, by which any two elements in a sequence can interact with each other, hence capturing long-range dependency. Considering that speech signals are naturally temporal-dependent, researchers in the speech community recently explored transformer-based models in multiple speech processing tasks, and remarkable performance was reported in speech recognition (Dong et al., 2018; Karita et al., 2019; Huang et al., 2020), speech enhancement (SE) (Kim et al., 2020; Fu et al., 2020), keyword spotting (KWS) (Berg et al., 2021; Vygon & Mikhaylovskiy, 2021) and speech synthesis (Li et al., 2019). Recently, the conformer architecture, which combines convolution and self-attention, achieved excellent success in speech processing tasks and attracted much attention Gulati et al. (2020).
In this paper, we ask the following question: Do we need complex models such as transformers for certain speech processing tasks?
This question is closely related to the principle of ‘parsimony of explanations’, a.k.a., Occam’s razor (Walsh, 1979). According to this principle, if there is any possibility, we should seek the models that can represent the data with the least complexity (Rasmussen & Ghahramani, 2001; Blumer et al., 1987). However, in the public benchmark tests, complex and elaborately designed models are often ranked higher, due to the better reported performance. For example, the KWS benchmark on Google
speech command1 and the SE benchmark on VoiceBank+DEMAND2 both rank transformer-based models among the top entries. Although the good performance is encouraging, the increased model complexity implies potential over-tuning and over-explanation, the risk that the Occam’s razor principle intends to avoid.
We, therefore, attempt to discover the simplest neural architecture that is powerful enough to achieve performance comparable to the best existing models, in particular transformers, while eliminating unnecessary complexity. Our design is based on domain knowledge, in particular, three properties of speech signals: (1) temporal invariance, (2) frequency asymmetry, and (3) short-term dependency (Huang et al., 2001; Benesty et al., 2008; Furui, 2018). Based on this knowledge, we build speech-MLP, a simple multi-layer perceptron (MLP) architecture, shown in Fig. 1. Besides the normalization components, the architecture involves simple linear transformations only. The core of the architecture is the Split & Glue layer, which splits the channel dimension into multiple chunks, processes each chunk separately, and finally merges the processed chunks in order to attain the output. Speech-MLP processes each time frame independently (compatible with temporal invariance), the splitting & gluing procedure allows different treatments for different frequency bands (compatible with frequency asymmetry), and it involves local context at multiple scales (compatible with short-term dependency).
We tested the model on two speech processing tasks: keyword spotting with the Google speech command V2-35 and Libriword benchmark datasets; and speech enhancement with the VoiceBank benchmark dataset. Results showed that on both tasks the proposed speech-MLP outperforms complex models, in particular models based on transformers. Such results demonstrate that by utilizing domain knowledge and employing appropriate normalization techniques, it is possible to design simple yet powerful models. In some cases, these simple models even beat complex models on open benchmarks, where complex models are more likely to obtain good performance by careful tuning.
In summary, we proposed Speech-MLP, a simple yet effective neural model for representing speech signals. On the KWS and SE tasks, we demonstrated that this simple model can achieve performance comparable to or even better than transformers, with fewer parameters and less inference time. Our work shows that by taking domain knowledge into account, it is possible to remove unnecessary complexity (e.g., modeling of long-range dependency in KWS and SE) in model design, as advocated by Occam’s razor.
2 RELATED WORK
Recent research has shown that a simple model can be as effective as complex and task specific models such as transformers in some important tasks. In (Tolstikhin et al., 2021), for example, the authors proposed a simple architecture for vision, namely MLP-Mixer. The model receives a sequence of image patches and performs channel-wise and patch-wise linear projection alternatively and iteratively. Without using convolutions or self-attention, the Mixer architecture separates the per-location (channel-mixing) and cross-location (token-mixing) operations (Tolstikhin et al., 2021). While the channel-mixing MLPs enable communication between different channels, the token-mixing MLPs allow communication between different spatial locations (tokens). Tested on image classification benchmarks, MLP-Mixer achieved performance comparable to SOTA models, in particular the vision transformer model (Tolstikhin et al., 2021).
In another recent work (Liu et al., 2021), the authors investigated the need of the self-attention mechanism in transformers, proposing an alternative MLP-based architecture, namely gMLP. The model, based on MLP layers with gating, consists of a stack of L identical blocks. Each block comprises a normalization layer, a channel projection, followed by an activation function and a spatial gating unit, followed by another channel projection (Liu et al., 2021). It achieves similar performance when compared to the vision transformer model (Touvron et al., 2021b), being 3 % more accurate than the aforementioned MLP-mixter model with 66 % fewer parameters. The model was also successful on language modeling in the BERT setup (Liu et al., 2021), minimizing perplexity as well as Transformers. The authors also found that perplexity reduction was more influenced by the model capacity than by the attention mechanism.
1https://paperswithcode.com/sota/keyword-spotting-on-google-speech-commands 2https://paperswithcode.com/sota/speech-enhancement-on-demand
Inspired by vision transformers (Touvron et al., 2021b; Dosovitskiy et al., 2020), in (Touvron et al., 2021a), the authors apply the skip connection technique from ResNets to MLP layers and propose the so-called Residual Multi-Layer Perceptrons (ResMLP). The model receives non-overlapping image patches, typically 16 × 16. These patches go through a linear transformation in order to attain d-dimensional embeddings. The embeddings are then fed to a sequence of ResMLP blocks to produce a set of d-dimensional output embeddings. An average pooling is applied on the d-dimensional output vectors to represent the image, and a linear classifier is then used to predict the label associated with the image (Touvron et al., 2021a).
Differently from Mixer-MLP, gMLP and ResMLP, CycleMLP can process inputs of arbitrary resolution with linear computational complexity as its receptive fields are enlarged for context aggregation (Chen et al., 2021). The model is based on Cycle Fully-Connected Layer (Cycle FC), serving as a generic, plug-and-play transformer-free architecture. Results show CycleMLP outperforming existing MLP-like models on ImageNet classification, achieving good performance on object detection, instance segmentation and semantic segmentation (Chen et al., 2021).
The aforementioned research highlights that, despite their success, convolution and self-attention mechanisms are not mandatory for some CV and NLP tasks, and can be replaced by simpler layers such as MLP with a customized design. Although typical convolution operations are not used by these MLP solutions (but rather 1 × 1 convolution as pointed out in (Chen et al., 2021) and (Tolstikhin et al., 2021)), these MLP approaches are inspired by CNN architectures for computer vision related tasks. Their building block, nonetheless, is similar and based on applying linear transformation on spatial locations and feature channels.
Although inspired by these new MLP architectures, speech-MLP focuses on speech signals rather than images. This implies processing different input resolutions given the nature of the input signal. The split & glue layer is very similar to a separable CNN (Chen et al., 2018), if we regard the frame-independent processing as a 1-D convolution in time. In particular, it is essentially a group-wise CNN (Romero et al., 2020) with different kernels for each group. However, from the perspective of feature learning, the entire split & glue is an MLP if our focus is a particular frame (within a context). That is why a 1-D convolution is often called a time-delay neural net (TDNN) (Waibel et al., 1989). We follow this convention and name our structure speech-MLP.
A key motivation of the speech-MLP structure is to respect the properties of speech signals. It should be emphasized that almost all successful techniques in speech processing take these properties into account, for instance the hidden Markov model (HMM) assumes short-term dependency (Rabiner & Juang, 1986), TDNN assumes temporal invariance (Waibel et al., 1989), and frequency asymmetry is explicitly implemented in the famous MFCC feature (Mermelstein, 1976). In this paper, the role of knowledge of speech signals is to help remove unnecessary complexity, i.e., seeking the minimum structure that reflects these basic properties.
Finally, MLP is not new in speech processing; in fact, the neural models used in the early days of speech processing are all general MLPs, e.g., (Bourlard & Morgan, 2012). Speech-MLP is a specially designed MLP that takes the properties of speech signals into account.
3 METHODOLOGY
Our model, referred to as speech-MLP, is presented in Figure 1. Note that for a given speech waveform, a sequence of acoustic features, denoted by X = {x1, x2, ..., xn}, are first extracted. These features are then fed into N stacked speech-MLP blocks and the output of the last speech-MLP block is a speech representation that needs to undergo task-specific layers in order to perform specific tasks, such as the ones addressed in this study: SE and KWS.
Inside of each speech-MLP block, there are three components: (1) a linear transformation for a pre-projection of the extracted acoustic features; (2) a Split & Glue layer for processing the projected acoustic features while addressing frequency asymmetry and temporal dependency, and (3) another linear transformation for post-projection of the final representation. Two residual connections are also adopted to encourage gradient propagation. The first one maps the input features onto the output of the last linear transformation (i.e., the output of the post-projection operation). The second residual connection maps the output of the first linear transformation (i.e., the output of the pre-projection operation) onto the output of the Split & Glue layer. Note that normalization tech-
niques are also applied to regulate the feature distribution (by layer norm) and temporal variance (by instance norm). In the next section, we give more details on the Split & Glue layer, followed by a discussion on the normalization methods adopted in this work.
3.1 SPLIT & GLUE
Figure 2 depicts how the Split & Glue layer operates. The sequence of acoustic features is denoted by X ∈ RH×T , with T and H being, respectively, the length and the number of channels of the input sequence. The first step is to split X into K non-overlapping chunks, as illustrated in both Figure 1 and Figure 2. The split referred to as X → {X1, .., Xk, .., XK}, is performed along the channel dimension. In our experiments, the channel dimension of each chunk is considered the same, leading to Xk ∈ RH/K×T . For each chunk, Xk, a context expansion is then performed through the so-called unfolding operations. This results in context-expanded chunks, denoted by Xkw ∈ Rw kH/K×T , where wk is the size of the context window induced by the unfolding operation.
Note that the number of chunks K and the window size wk can be arbitrarily selected for each chunk. This flexibility allows us to represent multi-scale contexts by adopting different window sizes for different chunks. In Figure 2, for instance, the input channels are split into two chunks, and the window sizes are set to 3 and 5, respectively. This leads to the model learning from small and large contexts simultaneously.
The unfolded chunk Xkw is projected by a linear transformation, leading to a new representation for the initial chunk, Y k ∈ RĤ×T , where Ĥ can be set arbitrarily and is called the number of Glue channels. We highlight that the linear transformation used in the above chunk-wise operation is shared across all the time steps for a single chunk, and each time frame is processed independently. This setting reduces the number of parameters and is compatible with the temporal invariance property of speech signals. Nevertheless, different weight parameters are adopted for different chunks, to provide sufficient flexibility.
Finally, all the learned speech representations, Y k, are concatenated along the channel dimension, forming a glued feature matrix Y G = {Y 1, Y 2, ..., Y K}. Then, another linear transformation
is applied in order to obtain the output feature Y ∈ RH×T . Again, the linear transformation is shared across all the time steps, to reflect temporal invariance.
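To make this concrete, the following PyTorch-style sketch shows one possible implementation of the Split & Glue layer (see also Algorithm 1 in Appendix C). It is a minimal illustration rather than the authors' reference code: the class and variable names are ours, the padding rule assumes odd window sizes with stride 1, and dropout and the surrounding projections of the speech-MLP block are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitAndGlue(nn.Module):
    # Sketch: split channels into K chunks, expand each chunk with its own
    # context window, project each chunk (W_A^k), then concatenate and glue (W_B).
    def __init__(self, channels, glue_channels, windows):
        super().__init__()
        self.windows = windows                  # e.g. [3, 7, 9, 11]
        k = len(windows)
        assert channels % k == 0
        chunk = channels // k
        self.proj = nn.ModuleList(
            [nn.Linear(w * chunk, glue_channels) for w in windows])
        self.act = nn.GELU()
        self.glue = nn.Linear(k * glue_channels, channels)

    def forward(self, x):                       # x: (batch, T, H)
        chunks = torch.chunk(x, len(self.windows), dim=-1)
        outs = []
        for xk, w, proj in zip(chunks, self.windows, self.proj):
            pad = (w - 1) // 2                  # keeps the output length equal to T
            xk = F.pad(xk, (0, 0, pad, pad))    # pad along the time dimension
            xk = xk.unfold(1, w, 1)             # (batch, T, H/K, w): context expansion
            outs.append(proj(xk.flatten(2)))    # chunk-wise linear projection A
        y = self.act(torch.cat(outs, dim=-1))   # glued feature of size K * glue_channels
        return self.glue(y)                     # linear projection B back to H channels

With H = 128 channels and windows {3, 7, 9, 11}, for instance, each chunk carries 32 channels; the per-chunk projections play the role of the matrices W kA and the final projection plays the role of WB in the notation above.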
3.2 NORMALIZATIONS
Normalization plays an important role in our speech-MLP model. We employed two normalization approaches: (1) layer normalization (LN) (Ba et al., 2016) and (2) instance normalization (IN) (Ulyanov et al., 2016).
Layer normalization is applied across the channel dimension at each time step. Thus, it computes statistics (mean and variance) on each column of X ∈ RH×T , and then uses these statistics to normalize the elements in the same column. With this normalization technique, the distribution of the feature vector at each time step is regularized.
Instance normalization is used to perform per-channel normalization. That is, the statistics are computed on each row of X ∈ RH×T and applied across the time steps to normalize the elements of each row. Thus, the temporal variation of each channel is normalized. Note that IN extends the conventional cepstral mean normalization (CMN) approach (Liu et al., 1993), by normalizing not only acoustic features, but also features produced by any hidden layer.
Empirically, we found that IN was only effective for the SE task while the LN was more important for the KWS task. Therefore, we apply LN only for KWS and IN for SE.
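As a rough illustration (not the exact implementation; the learnable scale and shift parameters are omitted), the two normalizations can be written as follows for a single feature map X of shape (H, T):

import torch

def layer_norm(x, eps=1e-5):
    # normalize each time step (column) over the H channels
    mean = x.mean(dim=0, keepdim=True)
    var = x.var(dim=0, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # normalize each channel (row) over the T time steps,
    # generalizing cepstral mean (and variance) normalization to hidden features
    mean = x.mean(dim=1, keepdim=True)
    var = x.var(dim=1, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)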
4 EXPERIMENTS
We evaluate the proposed speech-MLP model in two speech processing tasks: speech enhancement and keyword spotting. In this section, we introduce these tasks and their respective datasets, used in our experiments, followed by experimental settings, experimental results, and the ablation study.3
3The code will be available on github. To respect the double-blind review, the link will be sent to the reviewers when the discussion is open.
4.1 KEYWORD SPOTTING
Keyword spotting aims at detecting predefined words in speech utterances (Szöke et al., 2005; Mamou et al., 2007; Wang, 2010; Mandal et al., 2014). In our experiments, we explore two KWS datasets: (1) the Google speech commands V2 dataset (Warden, 2018), and (2) LibriWords (Vygon & Mikhaylovskiy, 2021). The Google speech commands V2 dataset (here referred to as V2-35) consists of 105,829 utterances of 35 words, recorded by 2,618 speakers. The training, validation and test sets contain 84,843, 11,005 and 9,981 utterances respectively. The LibriWords dataset, larger and more complex, is derived from 1000 hours of English speech from the LibriSpeech dataset (Panayotov et al., 2015). Signal-to-word alignments were generated using the Montreal Forced Aligner (McAuliffe et al., 2017) and are available in (Lugosch et al., 2019). The average duration of the keywords is 0.28 seconds. The provider defined four benchmark tests, based on the number of target keywords: LW-10, LW-100, LW-1K and LW-10K, where the target keywords are 10, 100, 1k and 10k respectively. More details on this dataset are presented in the Appendix.
4.1.1 SETTINGS
We used the same architecture in all the KWS tasks, except that the dimension of the output layer was adapted to the number of keywords, as shown in Table 1. Note that we set the window size w to be {3, 7, 9, 11}. This allows us to exploit multi-scale contexts. Additionally, we set the stride to 1 and set the padding list p appropriately to ensure that all the expanded features have the same length as the input feature.
Prior to the feature extraction step, each speech recording is resampled to 16 kHz. Then, 40-dimensional Mel-Frequency Cepstral Coefficients (MFCC) are attained as the acoustic features. The MFCC features are projected to the target feature dimension by a linear layer and then forwarded to the speech-MLP blocks. The output features are then passed through a max-pooling operation that collects the information across time steps. Finally, two linear layers with a GELU activation function in the middle and a softmax activation are employed in order to attain the posterior probabilities that the input speech belongs to each keyword. For regularization, SpecAugment (Park et al., 2019), dropout (Baldi & Sadowski, 2013), and label smoothing (Müller et al., 2019) were used to prevent overfitting.
Three model architectures have been verified in all the experiments: a 180k small model denoted by Speech-MLP-S, a 480k large model denoted by Speech-MLP-L, and a 2375K extra large model
denoted by Speech-MLP-XL. The three models are different in the number of channels of the hidden layer (i.e., after the pre-projection) and the channels within the Split & Glue block (i.e., channels after Linear A, and layers in Fig. 2), as shown in Table 1.
For the experiments on the Google speech commands dataset, we applied the following data augmentation techniques: time shifting, audio re-sampling and noise perturbation, as in (Berg et al., 2021; Vygon & Mikhaylovskiy, 2021). After augmentation, the data was increased to 10 times the size of V2-35. We set the batch size to 256 and trained the model for 100 epochs on 4 Nvidia V100 GPUs.
For the experiments on LibriWords, the batch size was set to 1024, and we trained the model for 20 epochs on 2 Nvidia V100 GPUs, which proved to be enough for this dataset. The training schemes were set differently simply because LibriWords is huge and long training is not economical.
The performance of the proposed model is compared to three benchmarks. The first one, referred to as Att-RNN, is a CNN-LSTM architecture with the attention mechanism introduced in (de Andrade et al., 2018). The model has approximately 202k trainable parameters and attains reasonable performance. Another recent solution, based on a transformer architecture, is adopted as the second benchmark (Berg et al., 2021). We refer to this benchmark as KWT-K, where K refers to the model size. Res15 (Vygon & Mikhaylovskiy, 2021), another recent work based on ResNet, reports high performance on both V2-35 and LibriWords. The authors reported results with two configurations, one trained with cross entropy (Res15-CE) and the other based on triplet loss (Res15-TL). We use them as the third benchmark.
4.1.2 RESULTS
Table 2 presents the results of the benchmarks discussed in the previous section and the performance of the proposed Speech-MLP; the experimental results on V2-35 are reported as the mean and 95% confidence interval over 5 trials with different random seeds. It can be observed that the Speech-MLP models outperform all the benchmarks with comparable model sizes. Note that the small version of speech-MLP, which contains less than half of the parameters of its large version, can still maintain reasonable performance, providing higher accuracy than most benchmarks. The performance of our solution on the LibriWords dataset is even more significant. It outperforms Res15-CE and Res15-TL while being able to maintain performance across all LibriWords dataset sizes. Our conjecture is that with the knowledge-driven design, we can use the parameters more efficiently, which allows smaller models to handle large-scale tasks.
4.1.3 ABLATION STUDY
To investigate how each module impacts the performance of speech-MLP, we conducted an ablation study. To compare the models fairly, we used a fixed random seed of 123 in all ablation experiments. Note that setting the window list to {3} is equivalent to a TDNN with kernel size 3, and setting the window list to {3, 3, 3, 3} is equivalent to a TDNN with a 4-group convolution operation with kernel size 3 in the split & glue layer; our proposed speech-MLP with varied window sizes outperforms these existing solutions. We particularly focus on the chunk splitting, especially the number of chunks and the context window of each chunk. They are the only hyperparameters that we need to design in speech-MLP, by using domain knowledge.
The results are reported in Table 3. It can be observed that the setting for the number of chunks and the context window does matter. A longer context window is clearly beneficial, and setting different context windows for different chunks can further improve the performance. This confirms our conjecture that contextual information is important for representing speech signals, and exploiting multi-scale contextual information is especially important.
An interesting comparison is between the Speech-MLP-S model with window {3, 7, 9, 11} and the Speech-MLP-L model with window {1}. The parameters of the two models are comparable, but the latter model does not involve any chunk splitting and context expansion. The clear advantage of the Speech-MLP-S model demonstrates that the performance improvement with larger and multi-scale context windows (cf. the performance of Speech-MLP-S or Speech-MLP-L with different windows) is due to the newly designed Split & Glue structure, rather than the increase in parameters. This in turn demonstrates the value of domain knowledge: if we can exploit it appropriately, it is possible to design very parsimonious models.
4.2 SPEECH ENHANCEMENT
Speech enhancement, which aims at inferring clean speech from its corrupted version (Benesty et al., 2006; Loizou, 2007; Das et al., 2020), is another fundamental task used to evaluate our model. We choose the VoiceBank+DEMAND dataset (Valentini-Botinhao et al., 2016) to perform the SE test. It contains clean speech signals from the VoiceBank dataset, with 28 speakers for training and 2 speakers for testing. Noise signals of 40 types from the DEMAND database (Thiemann et al., 2013) were selected and mixed into the clean speech. After the mixing, the training set and testing set involve 11,572 and 824 clips respectively. We split the training utterances into segments of 3 seconds without overlap. This resulted in 17,989 training samples, each consisting of a noise-corrupted segment and the corresponding clean segment. The goal of SE is to learn a mapping function that converts a noisy segment to a clean segment.
4.2.1 SETTINGS
The architecture of our SE model is shown in Table 1. As input, the model receives a 257-dimensional log-magnitude spectrum. The extracted features are first projected by a linear layer and reduced to 256-dimensional feature vectors, which are then forwarded to 10 stacked speech-MLP blocks. The output from the last speech-MLP block is re-projected to a 257-dimensional feature vector. After a hard-sigmoid function (Courbariaux et al., 2015), the values of the output units correspond to the ratio masks on the 257-dimensional input log-magnitude spectrum. The clean speech signal is estimated by applying the ratio masks onto the noisy spectrum and reusing the noisy phase.
More details of the settings can be found in Appendix. The performance of the proposed model is compared to six benchmarks. Note that we focus on models trained without extra data, or extra models for knowledge distillation. The reader can find details on these enhancement methods in the references presented in Table 4. Following the convention on this test set, we report the results of four metrics: PESQ, BAK, SIG and OVL (Hu & Loizou, 2007).
4.2.2 RESULTS
The results are shown in Table 4, where we choose 6 baseline systems for comparison. Among these systems, T-GSA (Kim et al., 2020) is based on a transformer model. Similar to speech-MLP, the authors of T-GSA also noticed the importance of local context and designed an annealing approach to encourage attention on neighbouring frames. However, the attention is still global in nature, and the improvement with T-GSA was still attributed to the capacity of transformers in learning (not so) long-range dependency. Note that the size of the T-GSA model was not reported in the original paper, so we made an estimation according to the structure description.
The results shown in Table 4 demonstrate that our speech-MLP model outperformed all six baselines. In particular, without modeling any long-range dependency, it outperformed T-GSA with a model almost 100 times smaller. These comparative results challenge the assumption that the better performance of T-GSA over other baselines is due to its capacity for capturing long-range dependence in speech. Moreover, the model size of Speech-MLP is much smaller than T-GSA, and due to the concise architecture, the training is simple and fast. This provides strong support for our argument that complex models are not necessarily the best, and a knowledge-based model with parsimonious parameters may easily beat complex models.
5 CONCLUSIONS
In this paper, we propose the speech-MLP model, a simple MLP architecture for speech processing tasks. Our main motivation was to find a compact solution that eliminates unnecessary complexity while being able to capture essential information from speech signals. By utilizing domain knowledge of speech, we designed a simple yet effective structure that involves only linear transforms and normalization. The main ingredient is a split & glue structure, which splits input features into multiple chunks and makes them account for different contexts. This knowledge-based design reflects several properties of speech signals, including temporal invariance, frequency asymmetry, and short-term dependency. The experimental results on keyword spotting and speech enhancement demonstrated that speech-MLP is highly effective: with much fewer parameters and less computation, it can beat larger and more elaborately designed models, including transformers.
Much work remains, for example, how to design better chunking and context settings; how to make the model even smaller (e.g., removing unnecessary residual connections); and how to trade off the complexity in chunks against that in depth. The ultimate goal is to design a lightweight, sufficiently powerful and generalizable component for speech feature extraction. We believe such a knowledge-driven feature extractor benefits general speech processing tasks, such as speech recognition and understanding.
6 REPRODUCIBILITY STATEMENT
We made the following efforts to ensure that the results reported in the paper can be reproduced by other researchers.
• We will release the code on github, so everyone can download it.
• The datasets used in this paper are all publicly available for researchers.
• We documented the required python environment and provided step-by-step guidance for the reproduction.
• We fixed the random seed in the code, so that others can reproduce our results exactly.
A APPENDIX A: DETAILS OF KWS EXPERIMENT
In this section, we present the details of the KWS experiment. We start with the system architecture, followed by the data preparation. We then present the training methods and the hyperparameters used in the experiments.
A.1 SYSTEM ARCHITECTURE
Prior to feature extraction, speech signals are resampled to 16 kHz if needed. Then, we use librosa4 to extract 40-dimensional MFCC features. The parameters used to extract these features are presented in Table 5. Global mean and variance normalization is also applied to the extracted features. These statistics are calculated using the respective training set of each task. After that, the features are fed into the model shown in Figure 3.
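A minimal sketch of this feature extraction step is given below; the 16 kHz rate and the 40 MFCC dimensions follow the text, the remaining MFCC parameters are those of Table 5 and are left at the library defaults here, and the mean and std arrays are assumed to be precomputed on the training set.

import librosa
import numpy as np

def extract_mfcc(wav_path, mean, std):
    # load and resample to 16 kHz, then compute 40-dimensional MFCC features
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)      # shape: (40, T)
    # global mean/variance normalization with training-set statistics
    return (mfcc - mean[:, None]) / (std[:, None] + 1e-8)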
Specifically, a linear transformation (Linear 0) operates on the normalized MFCC features, projecting them to 128-dimensional embeddings. These embeddings are then forwarded to stacked SpeechMLP blocks (4 blocks in our KWS study) to extract multiscale contextual representations. For each speech utterance, the last Speech-MLP block outputs a sequence of context-rich representations, and then a max pooling operation is adopted to aggregate this sequence to a single utterance-level representation. This representation is then passed to a 128× 128 linear transformation and a GELU nonlinear activation function. It is then further processed by a 128 ×M linear transformation and a softmax nonlinear activation, where M is the number of keywords. The final output of the above process is a vector that represents the posterior probabilities that the original speech utterance belongs to each keyword.
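This pipeline can be sketched as follows, assuming a SpeechMLPBlock module that implements the block of Section 3; the wiring and names are illustrative, not the reference implementation.

import torch
import torch.nn as nn

class KWSHead(nn.Module):
    # Sketch of the KWS model: pre-projection (Linear 0), stacked speech-MLP
    # blocks, max pooling over time, and a two-layer classifier.
    def __init__(self, blocks, n_mfcc=40, hidden=128, n_keywords=35):
        super().__init__()
        self.linear0 = nn.Linear(n_mfcc, hidden)
        self.blocks = nn.ModuleList(blocks)          # 4 speech-MLP blocks in this study
        self.classifier = nn.Sequential(
            nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, n_keywords))

    def forward(self, mfcc):                         # mfcc: (batch, T, 40)
        h = self.linear0(mfcc)
        for block in self.blocks:
            h = block(h)
        h = h.max(dim=1).values                      # utterance-level representation
        return self.classifier(h)                    # logits; softmax gives keyword posteriors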
A.2 DATA PREPARATION
A.2.1 GOOGLE SPEECH COMMANDS
The Google speech commands V2-35 contains 35 classes. The data can be obtained at the provider’s website5. There are 84,843 training samples in total, with strictly no overlap between training, validation and test sets.
4https://librosa.org/doc/latest/index.html 5http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz
Data augmentation techniques have been used to increase the training data by 9 times. Combined with the original data, we have 848,430 training samples in total. We fixed the random seed to 59185 when producing the augmented samples. The augmentation strategies adopted in this work are as follows:
• Noise perturbation: the noise perturbation script provided by the organizer of the DNS challenge is used to add background noise to clean speech6. The SNR factor is randomly sampled from [5, 10, 15] with equal probabilities;
• Time shifting: time shifting is applied in the time domain. It shifts the waveform by a time-shift factor t sampled from [−T, T ]. In our experiments we set T = 100. When t < 0, the waveform is shifted left by |t| samples and |t| zeros are padded to the right side. When t > 0, the waveform is shifted right by t samples and t zeros are padded to the left side (a code sketch of this and the resampling augmentation is given below);
• Resampling: the resample function from scipy (scipy.signal.resample) is used to perform resampling augmentation, which changes the sampling rate slightly. Specifically, given a parameter R, a resampling factor r is drawn from [1−R, 1+R], and the augmented sample is obtained by changing the sampling rate to r×16000. R is set to 0.15 in our experiments.
6We use the segmental snr mixer function from https://github.com/microsoft/ DNS-Challenge/blob/master/audiolib.py
After the above augmentation, the original speech and the augmented speech are further corrupted by SpecAug (Park et al., 2019). The setting of SpecAug is shown in Table 5. Note that SpecAug does not enlarge the dataset.
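As referenced above, the following is a minimal sketch of the time-shifting and resampling augmentations (the noise perturbation step relies on the DNS-Challenge mixing script and is omitted); apart from T = 100, R = 0.15 and the 16 kHz sampling rate taken from the text, the details are illustrative assumptions.

import numpy as np
from scipy.signal import resample

def time_shift(wave, max_shift=100):
    # shift the waveform by t samples, padding the vacated side with zeros
    t = np.random.randint(-max_shift, max_shift + 1)
    out = np.zeros_like(wave)
    if t < 0:
        out[:t] = wave[-t:]          # shift left, zeros on the right
    elif t > 0:
        out[t:] = wave[:-t]          # shift right, zeros on the left
    else:
        out = wave.copy()
    return out

def resample_augment(wave, R=0.15):
    # slightly change the sampling rate by a factor drawn from [1 - R, 1 + R]
    r = np.random.uniform(1.0 - R, 1.0 + R)
    return resample(wave, int(len(wave) * r))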
A.2.2 LIBRIWORDS
The LibriWords dataset is a larger and more complex dataset. The samples are extracted from the json files provided by the providers7. We follow the task definition of the dataset provider, and the details are given below.
• LibriWords 10 (LW-10): this task contains 10 keywords, including “the”, “and”, “of”, “to”, “a”, “in”, “he”, “I”, “that”, and “was”. There are 1,750k samples in total, and they are split into a training set (1,400k), a validation set (262,512) and a test set (87,501).
• LibriWords 100 (LW-100): a more challenging task that contains 100 keywords. There are 1,512k training samples, 189,010 validation samples and 188,968 test samples, totalling 1,890k samples.
• LibriWords 1000 (LW-1K): with increased difficulty, this task contains 1000 keywords. The training set involves 2,178k samples, and the validation set and the test set contain 272,329 samples and 271,858 samples respectively.
• LibriWords 10000 (LW-10K): the most challenging task, which presents 9998 keywords. There are 2,719k training samples, 339,849 validation samples and 335,046 test samples.
Given the large number of samples, data augmentation was not required for this task. We only performed SpecAug (Park et al., 2019) based on the settings presented in Table 5.
A.3 TRAINING PARAMETERS
The parameters used during training are specified in Table 5. Further details are presented below.
• The cross entropy between the model prediction and the ground truth is used as the loss function;
• The optimizer used in all the experiments is AdamW. The initial learning rate is set to 0.01, and cosine annealing is applied to adjust the learning rate from 0.01 to 0.0001;
• Dropout is applied onto the residual connections within the speech-MLP block, with the dropout rate set to 0.1;
• Label smoothing is employed to prevent the over-confidence problem. The smoothing factor is set to 0.1;
• In the V2-35 experiment, the models are trained for 100 epochs and a 10-epoch warmup is applied; in the LibriWords experiment, the models are trained for 20 epochs without warmup (a sketch of the optimizer and scheduler configuration is given after this list);
• In both experiments, the model is evaluated on the validation set after each epoch, and the checkpoint that performs best on the validation set is saved to report the performance on the test set;
• We fix the random seed to be 123 in all the ablation study experiments, for the sake of reproducibility.
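As referenced above, a minimal sketch of the loss, optimizer and scheduler setup could look as follows; the warmup schedule and any weight decay setting are omitted, and the helper name is ours.

import torch
import torch.nn as nn

def build_training(model, total_steps):
    # cross entropy with label smoothing (available in recent PyTorch versions),
    # AdamW, and cosine annealing of the learning rate from 0.01 to 0.0001
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=total_steps, eta_min=0.0001)
    return criterion, optimizer, scheduler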
B APPENDIX B: DETAILS OF SE EXPERIMENT
B.1 SYSTEM ARCHITECTURE
The model architecture is presented in Figure 4. The primary goal is to learn a mapping function that converts the noisy magnitude spectrum to the clean magnitude spectrum. The model output predicts soft ratio masks that can be applied to the noisy magnitude spectrum to estimate the magnitude spectrum of the clean speech. Combining the denoised magnitude spectrum with the phase spectrum of the original noisy speech, one can attain the denoised waveform by inverse STFT.8
7https://github.com/roman-vygon/triplet_loss_kws
8We used the STFT class implemented in the torch-mfcc toolkit(https://github.com/echocatzh/ torch-mfcc).
More specifically, a 257-dimensional log-magnitude spectrum is first extracted from the noisy speech as the acoustic features, following the configuration shown in Table 6. A linear layer then transforms the input features into 256-dimensional vectors PreX . The transformed feature vectors are forwarded to 10 Speech-MLP blocks, and the output from the last block, denoted by PostX , involves multiscale contextual information. Afterwards, a residual connection adds PreX and PostX together, and instance normalization is applied to regulate temporal variance. Finally, another linear transform and a non-linear HardSigmoid activation project the normalized feature to a masking space with the same dimensionality as the input feature, corresponding to the ratio mask M ∈ [0, 1] on the noisy magnitude spectrum.
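A simplified sketch of this masking pipeline is shown below. The STFT settings (a 512-point FFT giving 257 bins) and the model interface are assumptions made for illustration; the authors use the STFT class of the torch-mfcc toolkit, so this is not the reference implementation.

import torch

def enhance(noisy_wave, model, n_fft=512, hop=256):
    # noisy_wave: (batch, samples); model maps (batch, frames, 257) log-magnitude
    # features to ratio masks in [0, 1] of the same shape
    window = torch.hann_window(n_fft, device=noisy_wave.device)
    spec = torch.stft(noisy_wave, n_fft, hop_length=hop, window=window,
                      return_complex=True)               # (batch, 257, frames)
    mag, phase = spec.abs(), spec.angle()
    feats = torch.log(mag + 1e-8).transpose(1, 2)
    mask = model(feats).transpose(1, 2)                  # predicted ratio mask
    denoised_mag = mask * mag                            # mask the noisy magnitude
    est = torch.polar(denoised_mag, phase)               # reuse the noisy phase
    return torch.istft(est, n_fft, hop_length=hop, window=window)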
B.2 LOSS FUNCTIONS
The loss function of our model is computed based on the discrepancy between the denoised speech Xd and the clean speech Xc. The entire loss consists of two parts: (1) the distance on the power-compressed magnitude spectrum, denoted by Lmag, and (2) the distance on the power-compressed STFT, denoted by Lstft. We use a single frame to demonstrate this computation; the actual loss averages L over all frames.
D_real, D_imag = STFT(X_d)
C_real, C_imag = STFT(X_c)
D_mag = sqrt(D_real^2 + D_imag^2)
C_mag = sqrt(C_real^2 + C_imag^2)
L_mag = (C_mag^0.3 − D_mag^0.3)^2
D_real^0.3 = (D_mag^0.3 / D_mag) × D_real
D_imag^0.3 = (D_mag^0.3 / D_mag) × D_imag
C_real^0.3 = (C_mag^0.3 / C_mag) × C_real
C_imag^0.3 = (C_mag^0.3 / C_mag) × C_imag
L_stft = { (C_real^0.3 − D_real^0.3)^2 + (C_imag^0.3 − D_imag^0.3)^2 }^2
L = 10 × L_mag + L_stft
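Assuming complex STFT tensors for the clean and denoised signals, the loss above could be computed as in the following sketch; it simply mirrors the equations and averages over frames and frequency bins.

import torch

def se_loss(clean_spec, denoised_spec, eps=1e-8):
    # clean_spec, denoised_spec: complex STFT tensors of the clean and denoised speech
    c_mag = clean_spec.abs().clamp_min(eps)
    d_mag = denoised_spec.abs().clamp_min(eps)
    l_mag = (c_mag ** 0.3 - d_mag ** 0.3) ** 2                       # L_mag
    # power-compressed real and imaginary parts
    c_real = clean_spec.real * c_mag ** 0.3 / c_mag
    c_imag = clean_spec.imag * c_mag ** 0.3 / c_mag
    d_real = denoised_spec.real * d_mag ** 0.3 / d_mag
    d_imag = denoised_spec.imag * d_mag ** 0.3 / d_mag
    l_stft = ((c_real - d_real) ** 2 + (c_imag - d_imag) ** 2) ** 2  # L_stft as written above
    return (10.0 * l_mag + l_stft).mean()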
B.3 TRAINING PARAMETERS
The parameters for model training are summarized in Table 6. Specifically, the model was trained for 1000 epochs using the AdamW optimizer. The initial learning rate was set to 0.01, and a cosine annealing learning rate scheduler was used to adjust the learning rate from 0.01 to 0.0001 in 3000 steps. Warmup was applied and involved 30 epochs. The model was evaluated on the evaluation set every epoch, and the best checkpoint (in terms of PESQ) on the evaluation set was saved. The results are reported in terms of four metrics: PESQ, BAK, SIG and OVL (Hu & Loizou, 2007).9
C APPENDIX C: PSEUDO CODE FOR SPLIT & GLUE
9The evaluation script is pysepm (https://github.com/schmiph2/pysepm).
Algorithm 1 Pseudo code for Split & Glue
Input Sequence: X ∈ RH×T : sequence of acoustic features of T frames and H dimensions
Input Parameter: w = {w1, ..., wK}: window sizes of the K chunks
Input Parameter: p = {p1, ..., pK}: padding definition for the K chunks
Input Parameter: s: stride in context expansion
Output: Y ∈ RH×T : sequence of output features of T frames and H dimensions
Ensure: H % K = 0
{X1, ..., XK} = chunk(X, H, K) . Split X into K pieces along the channel dimension
for k in range(K) do
    Xkw = unfold(Xk, wk, pk, s) . Context expansion by unfolding
    Yk = WkA Xkw + bkA . Linear projection A for each chunk, where WkA has shape [Ĥ, wk × H/K]
end for
YG = [Y1; Y2; ...; YK] . Concatenate Yk along the channel dimension
YG = GELU(YG)
Y = WB YG + bB . Linear projection B to glue the chunks, where WB has shape [H, K × Ĥ] | 1. What is the focus and contribution of the paper on speech signals?
2. What are the strengths of the proposed Speech-MLP architecture, particularly in keyword spotting?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the appropriateness of the Speech-MLP name, the lack of description of Mixer MLP, and the minor modifications made to it?
6. Is there a need for more significant and relevant measures for speech enhancement, and a subjective test to confirm the preliminary results?
7. Does the paper adequately address the domain knowledge and uniqueness of the proposed architecture compared to other speech processing architectures like transformers and TDNN? | Summary Of The Paper
Review | Summary Of The Paper
The paper presents Speech-MLP, an architecture based on Mixer MLP but specific for speech signals. The Mixer MLP architecture is argued to be appropriate for the particular structure of speech. The architecture is compared to others on keyword spotting and speech enhancement. In each case, the architecture outperforms other competitive solutions, often with fewer parameters.
Review
I begin with the experiments because that is the positive part, at least for keyword spotting. The performance is good, and that is in itself interesting. This is especially true for the smaller architecture, where there are far fewer parameters. Although the evaluation on speech enhancement produces good objective results, the measures are hardly significant. A 0.02 improvement in PESQ is not regarded as significant, and the others are derivatives of PESQ. Modern measures such as STOI and frequency weighted segmental SNR are missing. The enhancement results can only be taken as preliminary, suggesting that a subjective test is worthwhile.
I find the method part less persuasive, beginning with the claim that the contribution is threefold. The combination of propose, test and demonstrate is one contribution. Thereafter I have multiple difficulties: The authors claim early in the manuscript that they do not expect Speech-MLP to work for speech recognition. Given the speech recognition is the main application of machine learning in speech processing, Speech-MLP is not really a good name. Pragmatically, the longer range dependencies required for speech recognition are likely to come from other architectures. The authors introduce their architecture simply by describing it. Lacking is a description of Mixer MLP and how the new architecture differs from it. My reading of this section is that Speech-MLP is Mixer-MLP with some minor modifications (it is not clear what they are). Further, where in the introductory material, the authors claim that their solution is based on domain knowledge, I do not see how the proposed architecture addresses this where Mixer MLP would not. The descriptive comparison with transformers is selective; it is true that transformers discard this type of domain knowledge, but every other speech processing architecture does take it into account. A case in point is the TDNN used in many solutions. In general, The method section should be rewritten to explain how and why the proposal differs from Mixer-MLP, and to place it in the context of other common signal processing techniques that also take such structure into account. All results should include significance tests. |
ICLR | Title
Speech-MLP: a simple MLP architecture for speech processing
Abstract
Transformers have shown outstanding performance in recent years, achieving state-of-the-art results in speech processing tasks such as speech recognition, speech synthesis and speech enhancement. In this paper, we show that, despite their success, such complex models are not needed for some important speech related tasks, which can be solved with much simpler and compact models. Thus, we propose a multi-layer perceptron (MLP) architecture, namely speech-MLP, useful for extracting information from speech signals. The model splits feature channels into non-overlapped chunks and processes each chunk individually. These chunks are then merged together and further processed to consolidate the output. By setting different numbers of chunks and focusing on different contextual window sizes, speech-MLP learns multiscale local temporal dependency. The proposed model is successfully evaluated on two tasks: keyword spotting and speech enhancement. In our experiments, two benchmark datasets are adopted for keyword spotting (Google speech command V2-35 and LibriWords) and one dataset (VoiceBank) for speech enhancement. In all experiments, speech-MLP surpassed the transformer-based solutions, achieving better performance with fewer parameters and lower GFLOPS. Such results indicate that more complex models, such as transformers, are oftentimes not necessary for speech processing tasks. Hence, simpler and more compact models should always be considered as an alternative, especially in resource-constrained scenarios.
1 INTRODUCTION
As in many machine learning disciplines, speech processing is embracing more and more complex models, where transformer (Vaswani et al., 2017) is a particular example. It was first proposed to tackle machine translation, and afterwards was successfully applied to multiple research fields such as natural language processing (NLP) (Devlin et al., 2018) and computer vision (CV) (Dosovitskiy et al., 2020). The core of the transformer model is a self-attention mechanism, by which any two elements in a sequence can interact with each other, hence capturing long-range dependency. Considering that speech signals are naturally temporal-dependent, researchers in the speech community recently explored transformer-based models in multiple speech processing tasks, and remarkable performance was reported in speech recognition (Dong et al., 2018; Karita et al., 2019; Huang et al., 2020), speech enhancement (SE) (Kim et al., 2020; Fu et al., 2020), keyword spotting (KWS) (Berg et al., 2021; Vygon & Mikhaylovskiy, 2021) and speech synthesis (Li et al., 2019). Recently, the conformer architecture, which combines convolution and self-attention, achieved excellent success in speech processing tasks and attracted much attention Gulati et al. (2020).
In this paper, we ask the following question: Do we need complex models such as transformers for certain speech processing tasks?
This question is closely related to the principle of ‘parsimony of explanations’, a.k.a., Occam’s razor (Walsh, 1979). According to this principle, if there is any possibility, we should seek the models that can represent the data with the least complexity (Rasmussen & Ghahramani, 2001; Blumer et al., 1987). However, in the public benchmark tests, complex and elaborately designed models are often ranked higher, due to the better reported performance. For example, the KWS benchmark on Google
speech command1 and the SE benchmark on VoiceBank+DEMAND2 both rank transformer-based models among the top entries. Although the good performance is encouraging, the increased model complexity implies potential over-tuning and over-explanation, the risk that the Occam’s razor principle intends to avoid.
We, therefore, attempt to discover the simplest neural architecture that is powerful enough to achieve performance comparable to the best existing models, in particular transformers, while eliminating unnecessary complexity. Our design is based on domain knowledge, in particular, three properties of speech signals: (1) temporal invariance, (2) frequency asymmetry, and (3) short-term dependency (Huang et al., 2001; Benesty et al., 2008; Furui, 2018). Based on this knowledge, we build speech-MLP, a simple multi-layer perceptron (MLP) architecture, shown in Fig. 1. Besides the normalization components, the architecture involves simple linear transformations only. The core of the architecture is the Split & Glue layer, which splits the channel dimension into multiple chunks, processes each chunk separately, and finally merges the processed chunks in order to attain the output. Speech-MLP processes each time frame independently (compatible with temporal invariance), the splitting & gluing procedure allows different treatments for different frequency bands (compatible with frequency asymmetry), and it involves local context at multiple scales (compatible with short-term dependency).
We tested the model on two speech processing tasks: keyword spotting with the Google speech command V2-35 and Libriword benchmark datasets; and speech enhancement with the VoiceBank benchmark dataset. Results showed that on both tasks the proposed speech-MLP outperforms complex models, in particular models based on transformers. Such results demonstrate that by utilizing domain knowledge and employing appropriate normalization techniques, it is possible to design simple yet powerful models. In some cases, these simple models even beat complex models on open benchmarks, where complex models are more likely to obtain good performance by careful tuning.
In summary, we proposed Speech-MLP, a simple yet effective neural model for representing speech signals. On the KWS and SE tasks, we demonstrated that this simple model can achieve performance comparable to or even better than transformers, with fewer parameters and less inference time. Our work shows that by taking domain knowledge into account, it is possible to remove unnecessary complexity (e.g., modeling of long-range dependency in KWS and SE) in model design, as advocated by Occam’s razor.
2 RELATED WORK
Recent research has shown that a simple model can be as effective as complex and task specific models such as transformers in some important tasks. In (Tolstikhin et al., 2021), for example, the authors proposed a simple architecture for vision, namely MLP-Mixer. The model receives a sequence of image patches and performs channel-wise and patch-wise linear projection alternatively and iteratively. Without using convolutions or self-attention, the Mixer architecture separates the per-location (channel-mixing) and cross-location (token-mixing) operations (Tolstikhin et al., 2021). While the channel-mixing MLPs enable communication between different channels, the token-mixing MLPs allow communication between different spatial locations (tokens). Tested on image classification benchmarks, MLP-Mixer achieved performance comparable to SOTA models, in particular the vision transformer model (Tolstikhin et al., 2021).
In another recent work (Liu et al., 2021), the authors investigated the need of the self-attention mechanism in transformers, proposing an alternative MLP-based architecture, namely gMLP. The model, based on MLP layers with gating, consists of a stack of L identical blocks. Each block comprises a normalization layer, a channel projection, followed by an activation function and a spatial gating unit, followed by another channel projection (Liu et al., 2021). It achieves similar performance when compared to the vision transformer model (Touvron et al., 2021b), being 3 % more accurate than the aforementioned MLP-mixter model with 66 % fewer parameters. The model was also successful on language modeling in the BERT setup (Liu et al., 2021), minimizing perplexity as well as Transformers. The authors also found that perplexity reduction was more influenced by the model capacity than by the attention mechanism.
1https://paperswithcode.com/sota/keyword-spotting-on-google-speech-commands 2https://paperswithcode.com/sota/speech-enhancement-on-demand
Inspired by vision transformers (Touvron et al., 2021b; Dosovitskiy et al., 2020), in (Touvron et al., 2021a), the authors apply the skip connection technique from ResNets to MLP layers and propose the so-called Residual Multi-Layer Perceptrons (ResMLP). The model receives non-overlapping image patches, typically 16 × 16. These patches go through a linear transformation in order to attain d-dimensional embeddings. The embeddings are then fed to a sequence of ResMLP blocks to produce a set of d-dimensional output embeddings. An average pooling is applied on the d-dimensional output vectors to represent the image, and a linear classifier is then used to predict the label associated with the image (Touvron et al., 2021a).
Differently from Mixer-MLP, gMLP and ResMLP, CycleMLP can process inputs of arbitrary resolution with linear computational complexity as its receptive fields are enlarged for context aggregation (Chen et al., 2021). The model is based on Cycle Fully-Connected Layer (Cycle FC), serving as a generic, plug-and-play transformer-free architecture. Results show CycleMLP outperforming existing MLP-like models on ImageNet classification, achieving good performance on object detection, instance segmentation and semantic segmentation (Chen et al., 2021).
The aforementioned research highlights that, despite their success, convolution and self-attention mechanisms are not mandatory for some CV and NLP tasks, and can be replaced by simpler layers such as MLP with a customized design. Although typical convolution operations are not used by these MLP solutions (but rather 1 × 1 convolution as pointed out in (Chen et al., 2021) and (Tolstikhin et al., 2021)), these MLP approaches are inspired by CNN architectures for computer vision related tasks. Their building block, nonetheless, is similar and based on applying linear transformation on spatial locations and feature channels.
Although inspired by these new MLP architectures, speech-MLP focuses on speech signals rather than images. This implies processing different input resolutions given the nature of the input signal. The split & glue layer is very similar to a separable CNN (Chen et al., 2018), if we regard the frame-independent processing as 1-D convolution in time. In particular, it is essentially a group-wise CNN (Romero et al., 2020) with different kernels for each group. However, from the perspective of feature learning, the entire split & glue is an MLP if our focus is a particular frame (within a context). That is why a 1-D convolution is often called a time-delay neural net (TDNN) (Waibel et al., 1989). We follow this convention and name our structure speech-MLP.
A key motivation of the speech-MLP structure is to respect the properties of speech signals. It should be emphasized that almost all successful techniques in speech processing take these properties into account, for instance the hidden Markov model (HMM) assumes short-term dependency (Rabiner & Juang, 1986), TDNN assumes temporal invariance (Waibel et al., 1989), and frequency asymmetry is explicitly implemented in the famous MFCC feature (Mermelstein, 1976). In this paper, the role of knowledge of speech signals is to help remove unnecessary complexity, i.e., to seek the minimum structure that reflects these basic properties.
Finally, MLP is not new in speech processing; in fact the neural models used in the early days of speech processing were all general MLPs, e.g., (Bourlard & Morgan, 2012). Speech-MLP is a specially designed MLP that takes the properties of speech signals into account.
3 METHODOLOGY
Our model, referred to as speech-MLP, is presented in Figure 1. Note that for a given speech waveform, a sequence of acoustic features, denoted by X = {x1, x2, ..., xn}, are first extracted. These features are then fed into N stacked speech-MLP blocks and the output of the last speech-MLP block is a speech representation that needs to undergo task-specific layers in order to perform specific tasks, such as the ones addressed in this study: SE and KWS.
Inside of each speech-MLP block, there are three components: (1) a linear transformation for a pre-projection of the extracted acoustic features; (2) a Split & Glue layer for processing the projected acoustic features while addressing frequency asymmetry and temporal dependency, and (3) another linear transformation for post-projection of the final representation. Two residual connections are also adopted to encourage gradient propagation. The first one maps the input features onto the output of the last linear transformation (i.e., the output of the post-projection operation). The second residual connection maps the output of the first linear transformation (i.e., the output of the pre-projection operation) onto the output of the Split & Glue layer. Note that normalization techniques are also applied to regulate the feature distribution (by layer norm) and temporal variance (by instance norm). In the next section, we give more details on the Split & Glue layer, followed by a discussion on the normalization methods adopted in this work.
3.1 SPLIT & GLUE
Figure 2 depicts how the Split & Glue layer operates. The sequence of acoustic features is denoted by X ∈ R^{H×T}, with T and H being, respectively, the length and the number of channels of the input sequence. The first step is to split X into K non-overlapping chunks, as illustrated in both Figure 1 and Figure 2. The split, referred to as X → {X^1, .., X^k, .., X^K}, is performed along the channel dimension. In our experiments, the channel dimension of each chunk is considered the same, leading to X^k ∈ R^{H/K×T}. For each chunk X^k, a context expansion is then performed through the so-called unfolding operation. This results in context-expanded chunks, denoted by X^k_w ∈ R^{w^k H/K×T}, where w^k is the size of the context window induced by the unfolding operation.
Note that the number of chunks K and the window size w^k can be arbitrarily selected for each chunk. This flexibility allows us to represent multi-scale contexts by adopting different window sizes for different chunks. In Figure 2, for instance, the input channels are split into two chunks, and the window sizes are set to 3 and 5, respectively. This leads to the model learning from small and large contexts simultaneously.
The unfolded chunk X^k_w is projected by a linear transformation, leading to a new representation for the initial chunk, Y^k ∈ R^{Ĥ×T}, where Ĥ can be set arbitrarily and is called the number of Glue channels. We highlight that the linear transformation used in the above chunk-wise operation is shared across all the time steps for a single chunk, and each time frame is processed independently. This setting reduces the number of parameters and is compatible with the temporal invariance property of speech signals. Nevertheless, different weight parameters are adopted for different chunks, to provide sufficient flexibility.
Finally, all the learned chunk representations Y^k are concatenated along the channel dimension, forming a glued feature matrix Y^G = {Y^1, Y^2, ..., Y^K}. Then, another linear transformation is applied in order to obtain the output feature Y ∈ R^{H×T}. Again, the linear transformation is shared across all the time steps, to reflect temporal invariance.
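To make the above concrete, the following PyTorch-style sketch shows one way to implement the Split & Glue computation. It is our own illustration rather than the released code: it assumes equal-sized chunks, stride 1, and symmetric padding so that the output length stays T, and the names SplitGlue and glue_channels are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitGlue(nn.Module):
    """Split channels into K chunks, expand each chunk with its own context
    window, project each chunk per frame (Linear A), then glue the chunks
    back with a shared linear map (Linear B)."""
    def __init__(self, channels, glue_channels, windows):
        super().__init__()
        assert channels % len(windows) == 0
        self.windows = windows
        chunk_dim = channels // len(windows)
        # one projection (Linear A) per chunk, shared over all time steps
        self.proj_a = nn.ModuleList(
            [nn.Linear(w * chunk_dim, glue_channels) for w in windows])
        # Linear B glues the concatenated chunks back to `channels`
        self.proj_b = nn.Linear(len(windows) * glue_channels, channels)

    def forward(self, x):                                # x: [B, channels, T]
        chunks = x.chunk(len(self.windows), dim=1)
        outs = []
        for chunk, w, proj in zip(chunks, self.windows, self.proj_a):
            # pad so that unfolding keeps the sequence length equal to T
            padded = F.pad(chunk, (w // 2, w - 1 - w // 2))
            expanded = padded.unfold(2, w, 1)            # [B, C/K, T, w]
            expanded = expanded.permute(0, 2, 1, 3).flatten(2)  # [B, T, w*C/K]
            outs.append(proj(expanded))                  # [B, T, glue_channels]
        glued = F.gelu(torch.cat(outs, dim=-1))
        return self.proj_b(glued).transpose(1, 2)        # back to [B, channels, T]
```

For instance, `SplitGlue(128, 64, [3, 7, 9, 11])` would mirror the four-chunk, multi-window setting used in the KWS experiments, up to the exact channel counts.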
3.2 NORMALIZATIONS
Normalization plays an important role in our speech-MLP model. We employed two normalization approaches: (1) layer normalization (LN) (Ba et al., 2016) and (2) instance normalization (IN) (Ulyanov et al., 2016).
Layer normalization is applied across the channel dimension at each time step. Thus, it computes statistics (mean and variance) on each column of X ∈ RH×T , and then uses these statistics to normalize the elements in the same column. With this normalization technique, the distribution of the feature vector at each time step is regularized.
Instance normalization is used to perform per-channel normalization. That is, the statistics are computed on each row of X ∈ RH×T and applied across the time steps to normalize the elements of each row. Thus, the temporal variation of each channel is normalized. Note that IN extends the conventional cepstral mean normalization (CMN) approach (Liu et al., 1993), by normalizing not only acoustic features, but also features produced by any hidden layer.
Empirically, we found that IN was only effective for the SE task while the LN was more important for the KWS task. Therefore, we apply LN only for KWS and IN for SE.
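The two operations can be written directly on a feature matrix X ∈ R^{H×T}, following the per-column / per-row description above. The sketch below is our illustration, not the exact training code.

```python
import torch

def layer_norm(x, eps=1e-5):
    # x: [H, T]; normalize each time step (column) over the channel dimension
    mean = x.mean(dim=0, keepdim=True)                   # [1, T]
    var = x.var(dim=0, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # x: [H, T]; normalize each channel (row) over the time dimension,
    # generalizing cepstral mean (and variance) normalization
    mean = x.mean(dim=1, keepdim=True)                   # [H, 1]
    var = x.var(dim=1, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps)
```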
4 EXPERIMENTS
We evaluate the proposed speech-MLP model in two speech processing tasks: speech enhancement and keyword spotting. In this section, we introduce these tasks and their respective datasets, used in our experiments, followed by experimental settings, experimental results, and the ablation study.3
3The code will be available on github. To respect the double-blind review, the link will be sent to the reviewers when the discussion is open.
4.1 KEYWORD SPOTTING
Keyword spotting aims at detecting predefined words in speech utterances (Szöke et al., 2005; Mamou et al., 2007; Wang, 2010; Mandal et al., 2014). In our experiments, we explore two KWS datasets: (1) the Google speech commands V2 dataset (Warden, 2018), and (2) LibriWords (Vygon & Mikhaylovskiy, 2021). The Google speech commands V2 dataset (here, referred to as V2-35) consists of 105,829 utterances of 35 words, recorded by 2,618 speakers. The training, validation and test sets contain 84,843, 11,005 and 9,981 utterances respectively. The LibriWords dataset, larger and more complex, is derived from 1000 hours of English speech from the LibriSpeech dataset (Panayotov et al., 2015). Signal-to-word alignments were generated using the Montreal Forced Aligner (McAuliffe et al., 2017) and are available in (Lugosch et al., 2019). The average duration of the keywords is 0.28 seconds. The provider defined four benchmark tests, based on the number of target keywords: LW-10, LW-100, LW-1K and LW-10K, where the target keywords are 10, 100, 1k and 10k respectively. More details on this dataset are presented in the Appendix.
4.1.1 SETTINGS
We used the same architecture in all the KWS tasks, except that the dimension of the output layer was adapted to the number of keywords, as shown in Table 1. Note that we set the window size list w to {3, 7, 9, 11}. This allows us to exploit multi-scale contexts. Additionally, we set the stride to 1 and appropriately set the padding list p to ensure that all the expanded features are of the same length, equal to that of the input feature.
Prior to the feature extraction step, each speech recording is resampled to 16 kHz. Then, 40-dimensional Mel-Frequency Cepstral Coefficients (MFCC) are extracted as the acoustic features. The MFCC features are projected to the target feature dimension by a linear layer and then forwarded to the speech-MLP blocks. The output features are passed through a max-pooling operation that collects the information across time steps. Finally, two linear layers, with a GELU activation function in the middle and a softmax activation at the end, are employed to obtain the posterior probabilities that the input speech belongs to each keyword. For regularization, SpecAugment (Park et al., 2019), dropout (Baldi & Sadowski, 2013), and label smoothing (Müller et al., 2019) were used to prevent overfitting.
Three model architectures have been verified in all the experiments: a 180k small model denoted by Speech-MLP-S, a 480k large model denoted by Speech-MLP-L, and a 2,375k extra-large model denoted by Speech-MLP-XL. The three models differ in the number of channels of the hidden layer (i.e., after the pre-projection) and the channels within the Split & Glue block (i.e., channels after Linear A, and layers in Fig. 2), as shown in Table 1.
For the experiments on the Google speech commands dataset, we applied the following data augmentation techniques: time shifting, audio re-sampling, and noise perturbation, as in (Berg et al., 2021; Vygon & Mikhaylovskiy, 2021). After augmentation, the data was increased to 10 times the size of V2-35. We set the batch size to 256 and trained the model for 100 epochs on 4 Nvidia V100 GPUs.
For the experiments on LibriWords, the batch size was set to 1024, and we trained the model for 20 epochs on 2 Nvidia V100 GPUs, which proved to be enough for this dataset. The training schemes were set differently simply because LibriWords is huge and long-term training is not economical.
The performance of the proposed model is compared to three benchmarks. The first one, referred to as Att-RNN, is a CNN-LSTM architecture with the attention mechanism introduced in (de Andrade et al., 2018). The model has approximately 202k trainable parameters and attains reasonable performance. Another recent solution, based on a transformer architecture, is adopted as the second benchmark (Berg et al., 2021). We refer to this benchmark as KWT-K, where K denotes the model size. Res15 (Vygon & Mikhaylovskiy, 2021), another recent work based on ResNet, reports high performance on both V2-35 and LibriWords. The authors reported results with two configurations, one trained with cross entropy (Res15-CE) and the other based on triplet loss (Res15-TL). We use them as the third benchmark.
4.1.2 RESULTS
Table 2 presents the results of the benchmarks discussed in the previous section and the performance of the proposed Speech-MLP. The experimental results on V2-35 are reported as the mean and 95% confidence interval over 5 trials with different random seeds. It can be observed that the Speech-MLP models outperform all the benchmarks with comparable model sizes. Note that the small version of speech-MLP, which contains less than half of the parameters of its large version, can still maintain reasonable performance, providing higher accuracy than most benchmarks. The performance of our solution on the LibriWords dataset is even more significant. It outperforms Res15-CE and Res15-TL while being able to maintain performance across all LibriWords dataset sizes. Our conjecture is that with the knowledge-driven design, we can use the parameters more efficiently, which allows for the use of smaller models to handle large-scale tasks.
4.1.3 ABLATION STUDY
To investigate how each module impacts the performance of speech-MLP, we conducted an ablation study. To make the comparison between models fair, we use a fixed random seed of 123 in all ablation experiments. Note that setting the window list to {3} is equivalent to using a TDNN with kernel size 3 in the split & glue layer, and setting the window list to {3, 3, 3, 3} is equivalent to a TDNN with a 4-group convolution operation with kernel size 3; our proposed speech-MLP with a variety of window sizes outperforms these existing solutions. We particularly focus on the chunk splitting, especially the number of chunks and the context window of each chunk. They are the only hyperparameters that we need to design in speech-MLP, by using domain knowledge.
The results are reported in Table 3. It can be observed that the setting for the number of chunks and the context window does matter. A longer context window is clearly beneficial, and setting different context windows for different chunks can further improve the performance. This confirms our conjecture that contextual information is important for representing speech signals, and exploiting multi-scale contextual information is especially important.
An interesting comparison is between the Speech-MLP-S model with window {3, 7, 9, 11} and the Speech-MLP-L model with window {1}. The parameters of the two models are comparable, but the latter model does not involve any chunk splitting and context expansion. The clear advantage of the Speech-MLP-S model demonstrates that the performance improvement with larger and multi-scale context windows (ref. performance of Speech-MLP-S or Speech-MLP-L with different windows) is due to the newly designed Split & Glue structure, rather than the increase in parameters. This in turn demonstrates the value of domain knowledge: if we can exploit it appropriately, it is possible to design very parsimonious models.
4.2 SPEECH ENHANCEMENT
Speech enhancement, which aims at inferring clean speech from its corrupted version (Benesty et al., 2006; Loizou, 2007; Das et al., 2020), is another fundamental task used to evaluate our model. We choose the VoiceBank+DEMAND dataset (Valentini-Botinhao et al., 2016) to perform the SE test. It contains clean speech signals from the VoiceBank dataset, with 28 speakers for training and 2 speakers for testing. Noise signals of 40 types from the DEMAND database (Thiemann et al., 2013) were selected and mixed into the clean speech. After mixing, the training set and testing set contain 11,572 and 824 clips respectively. We split the training utterances into segments of 3 seconds without overlap. This resulted in 17,989 training samples, each sample consisting of a noise-corrupted segment and the corresponding clean segment. The goal of SE is to learn a mapping function that converts a noisy segment to a clean segment.
4.2.1 SETTINGS
The architecture of our SE model is shown in Table 1. As input, the model receives a 257-dimensional log-magnitude spectrum. The extracted features are first projected by a linear layer and reduced to a 256-dimensional feature vector, which is then forwarded to 10 stacked speech-MLP blocks. The output from the last speech-MLP block is re-projected to a 257-dimensional feature vector. After a hard-sigmoid function (Courbariaux et al., 2015), the values of the output units correspond to the ratio masks on the 257-dimensional input log-magnitude spectrum. The clean speech signal is estimated by applying the ratio masks onto the noisy spectrum and reusing the noisy phase.
More details of the settings can be found in Appendix. The performance of the proposed model is compared to six benchmarks. Note that we focus on models trained without extra data, or extra models for knowledge distillation. The reader can find details on these enhancement methods in the references presented in Table 4. Following the convention on this test set, we report the results of four metrics: PESQ, BAK, SIG and OVL (Hu & Loizou, 2007).
4.2.2 RESULTS
The results are shown in Table 4, where we choose 6 baseline systems for comparison. Among these systems, T-GSA (Kim et al., 2020) is based on a transformer model. Similar to speech-MLP, the authors of T-GSA also noticed the importance of local context and designed an annealing approach to encourage attention on neighbouring frames. However, the attention is still global in nature, and the improvement with T-GSA was still attributed to the capacity of transformers in learning (not so) long-range dependency. Note that the size of the T-GSA model was not reported in the original paper, so we made an estimation according to the structure description.
The results shown in Table 4 demonstrate that our speech-MLP model outperformed all the six baselines. In particular, without modeling any long-range dependency, it outperformed T-GSA with a model that is almost 100 times smaller. These comparative results challenge the assumption that the better performance of T-GSA over other baselines is due to its capacity of capturing long-range dependence in speech. Moreover, the model size of Speech-MLP is much smaller than that of T-GSA, and due to the concise architecture, the training is simple and fast. This provides strong support for our argument that complex models are not necessarily the best, and that a knowledge-based model with parsimonious parameters may easily beat complex models.
5 CONCLUSIONS
In this paper, we propose the speech-MLP model, a simple MLP architecture for speech processing tasks. Our main motivation was to find a compact solution that eliminates unnecessary complexity while being able to capture essential information from speech signals. By utilizing domain knowledge of speech, we designed a simple yet effective structure that involves only linear transforms and normalization. The main ingredient is a split & glue structure, which splits input features into multiple chunks and makes them account for different contexts. This knowledge-based design reflects several properties of speech signals, including temporal invariance, frequency asymmetry, and short-term dependency. The experimental results on keyword spotting and speech enhancement demonstrate that speech-MLP is highly effective: with many fewer parameters and less computation, it can beat larger and more elaborately designed models including transformers.
Much work remains: for example, how to design better chunking and contexts; how to make the model even smaller (e.g., by removing unnecessary residual connections); and how to trade off the complexity in chunks against depth. The ultimate goal is to design a lightweight, sufficiently powerful and generalizable component for speech feature extraction. We believe such a knowledge-driven feature extractor will benefit general speech processing tasks, such as speech recognition and understanding.
6 REPRODUCIBILITY STATEMENT
We made the following efforts to ensure that the results reported in the paper can be reproduced by other researchers.
• We will release the code on GitHub, so everyone can download it.
• The datasets used in this paper are all publicly available to researchers.
• We documented the required Python environment and provided step-by-step guidance for the reproduction.
• We fixed the random seed in the code, so that others can reproduce our results exactly.
A APPENDIX A: DETAILS OF KWS EXPERIMENT
In this section, we present the details of the KWS experiment. We start with the system architecture, followed by the data preparation. We then present the training methods and the hyperparameters used in the experiments.
A.1 SYSTEM ARCHITECTURE
Prior to feature extraction, speech signals are resampled to 16 kHz if needed. Then, we use librosa4 to extract 40-dimensional MFCC features. The parameters used to extract these features are presented in Table 5. Global mean and variance normalization is also applied to the extracted features. These statistics are calculated using the respective training set of each task. After that, the features are fed into the model shown in Figure 3.
Specifically, a linear transformation (Linear 0) operates on the normalized MFCC features, projecting them to 128-dimensional embeddings. These embeddings are then forwarded to stacked Speech-MLP blocks (4 blocks in our KWS study) to extract multiscale contextual representations. For each speech utterance, the last Speech-MLP block outputs a sequence of context-rich representations, and then a max pooling operation is adopted to aggregate this sequence into a single utterance-level representation. This representation is then passed to a 128 × 128 linear transformation and a GELU nonlinear activation function. It is then further processed by a 128 × M linear transformation and a softmax nonlinear activation, where M is the number of keywords. The final output of the above process is a vector that represents the posterior probabilities that the original speech utterance belongs to each keyword.
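A small sketch of this task-specific head is given below. The class name and the use of a plain max over time are our reading of the description, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KWSHead(nn.Module):
    """Utterance-level keyword classifier on top of the Speech-MLP blocks."""
    def __init__(self, channels=128, num_keywords=35):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels)
        self.fc2 = nn.Linear(channels, num_keywords)

    def forward(self, h):                    # h: [batch, channels, T]
        pooled, _ = h.max(dim=-1)            # max pooling over time -> [batch, channels]
        z = F.gelu(self.fc1(pooled))
        return torch.softmax(self.fc2(z), dim=-1)   # keyword posteriors
```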
A.2 DATA PREPARATION
A.2.1 GOOGLE SPEECH COMMANDS
The Google speech commands V2-35 dataset contains 35 classes. The data can be obtained at the provider's website5. There are 84,843 training samples in total, with strictly no overlap between training, validation and test sets.
4https://librosa.org/doc/latest/index.html 5http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz
Data augmentation techniques have been used to increase the training data by 9 times. Combined with the original data, we have 848,430 training samples in total. We fixed the random seed to 59185 when producing the augmented samples. The following are the augmentation strategies adopted in this work:
• Noise perturbation: the noise perturbation script provided by the organizer of the DNS challenge is used to add background noise to clean speech6. The SNR factor is randomly sampled from [5, 10, 15] with equal probabilities;
• Time shifting: time shifting is applied in the time domain. It shifts the waveform by a time-shift factor t sampled from [−T, T]. In our experiments we set T = 100. When t < 0, the waveform is shifted left by |t| samples and |t| zeros are padded to the right side. When t > 0, the waveform is shifted right by t samples and t zeros are padded to the left side;
• Resampling: the resample function from scipy (scipy.signal.resample) is used to perform resampling augmentation, which changes the sampling rate slightly. Specifically, given a parameter R, a resampling factor r is drawn from [1−R, 1+R], and the augmented sample is obtained by changing the sampling rate to r×16000. R is set to 0.15 in our experiments.
6We use the segmental snr mixer function from https://github.com/microsoft/ DNS-Challenge/blob/master/audiolib.py
After the above augmentation, the original speech and the augmented speech are further corrupted by SpecAug (Park et al., 2019). The setting of SpecAug is shown in Table 5. Note that SpecAug does not enlarge the dataset.
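For illustration, the time-shifting and resampling augmentations can be sketched as follows; the SNR-controlled noise mixing relies on the DNS-challenge script and is not reproduced here, and the function names are ours.

```python
import numpy as np
from scipy.signal import resample

def time_shift(wav, max_shift=100):
    # shift by a random number of samples and zero-pad the exposed side
    t = np.random.randint(-max_shift, max_shift + 1)
    shifted = np.roll(wav, t)
    if t > 0:            # shifted right: zero out the wrapped-around head
        shifted[:t] = 0
    elif t < 0:          # shifted left: zero out the wrapped-around tail
        shifted[t:] = 0
    return shifted

def random_resample(wav, R=0.15):
    # changing the number of samples is equivalent to slightly changing
    # the sampling rate by a factor r drawn from [1-R, 1+R]
    r = np.random.uniform(1 - R, 1 + R)
    return resample(wav, int(len(wav) * r))
```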
A.2.2 LIBRIWORDS
The LibriWords dataset is larger and more complex. The samples are extracted from the json files provided by the providers7. We follow the task definitions of the dataset provider, and the details are given below.
• LibriWords 10 (LW-10): this task contains 10 keywords, including “the”, “and”, “of”, “to”, “a”, “in”, “he”, “I”, “that”, and “was”. There are 1,750k samples in total, and they are split into a training set (1,400k), a validation set (262,512) and a test set (87,501).
• LibriWords 100 (LW-100): a more challenging task that contains 100 keywords. There are 1,512k training samples, 189,010 validation samples and 188,968 test samples, totalling 1,890k samples.
• LibriWords 1000 (LW-1K): with increased difficulty, this task contains 1000 keywords. The training set involves 2,178k samples, and the validation set and the test set contain 272,329 and 271,858 samples respectively.
• LibriWords 10000 (LW-10K): the most challenging task, containing 9,998 keywords. There are 2,719k training samples, 339,849 validation samples and 335,046 test samples.
Given the large number of samples, data augmentation was not required for this task. We only performed SpecAug (Park et al., 2019) based on the settings presented in Table 5.
A.3 TRAINING PARAMETERS
The parameters used during training are specified in Table 5. Further details are presented below.
• the cross entropy between the model prediction and the ground truth is used as loss function;
• The optimizer used in all the experiments is AdamW. The initial learning rate is set to 0.01, and cosine annealing is applied to adjust the learning rate from 0.01 to 0.0001;
• Dropout is applied onto the residual connections within the speech-MLP block, with the dropout rate set to 0.1;
• Label smoothing is employed to prevent the over-confidence problem. The smoothing factor is set to 0.1;
• In the V2-35 experiment, the models are trained for 100 epochs with a 10-epoch warmup. In the LibriWords experiment, the models are trained for 20 epochs without warmup;
• In both experiments, the model is evaluated on the validation set after each epoch, and the checkpoint that performs best on the validation set is used to report the performance on the test set;
• We fix the random seed to be 123 in all the ablation study experiments, for the sake of reproducibility.
B APPENDIX B: DETAILS OF SE EXPERIMENT
B.1 SYSTEM ARCHITECTURE
The model architecture has been presented in Figure 4. The primary goal is to learn a mapping function that converts noisy magnitude spectrum to clean magnitude spectrum. The model output
7https://github.com/roman-vygon/triplet_loss_kws
predicts soft ratio masks, which can be applied to the noisy magnitude spectrum to estimate the magnitude spectrum of the clean speech. Combining the denoised magnitude spectrum and the phase spectrum of the original noisy speech, one can attain the denoised waveform by inverse STFT.8
8We used the STFT class implemented in the torch-mfcc toolkit(https://github.com/echocatzh/ torch-mfcc).
More specifically, a 257-dimensional log-magnitude spectrum is first extracted from the noisy speech as the acoustic features, following the configuration shown in Table 6. Then a linear layer projects the input features to 256-dimensional vectors PreX. The transformed feature vectors are then forwarded to 10 Speech-MLP blocks, and the output from the last block, denoted by PostX, involves multiscale contextual information. Afterwards, a residual connection adds PreX and PostX together, and instance normalization is applied to regulate temporal variance. Finally, another linear transform and a non-linear HardSigmoid activation project the normalized features to a masking space with the same dimensionality as the input feature, corresponding to the ratio mask M ∈ [0, 1] on the noisy magnitude spectrum.
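A sketch of the resulting enhancement step is given below. It assumes a torch.stft front-end rather than the torch-mfcc STFT class used in the actual code, and the FFT size, hop length and the model input shape are illustrative assumptions (the real values are those of Table 6).

```python
import torch

def enhance(noisy_wav, model, n_fft=512, hop=256):
    """Apply the predicted ratio mask to the noisy magnitude and reuse the noisy phase."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(noisy_wav, n_fft, hop_length=hop, window=window,
                      return_complex=True)               # [257, frames]
    log_mag = torch.log(spec.abs() + 1e-8)               # model input features
    mask = model(log_mag.unsqueeze(0)).squeeze(0)        # ratio mask in [0, 1]
    denoised = torch.polar(mask * spec.abs(), spec.angle())  # keep the noisy phase
    return torch.istft(denoised, n_fft, hop_length=hop, window=window)
```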
B.2 LOSS FUNCTIONS
The loss function of our model is computed based on the discrepancy between the denoised speech Xd and the clean speech Xc. The entire loss consists of two parts: (1) the distance on the power-compressed magnitude spectrum, denoted by Lmag, and (2) the distance on the power-compressed STFT, denoted by Lstft. We use a single frame to demonstrate the computation; the actual loss is the average of L over all frames.
$D_{real}, D_{imag} = \mathrm{STFT}(X_d)$
$C_{real}, C_{imag} = \mathrm{STFT}(X_c)$
$D_{mag} = \sqrt{D_{real}^2 + D_{imag}^2}$
$C_{mag} = \sqrt{C_{real}^2 + C_{imag}^2}$
$L_{mag} = (C_{mag}^{0.3} - D_{mag}^{0.3})^2$
$D_{real}^{0.3} = \frac{D_{mag}^{0.3}}{D_{mag}} \times D_{real}$
$D_{imag}^{0.3} = \frac{D_{mag}^{0.3}}{D_{mag}} \times D_{imag}$
$C_{real}^{0.3} = \frac{C_{mag}^{0.3}}{C_{mag}} \times C_{real}$
$C_{imag}^{0.3} = \frac{C_{mag}^{0.3}}{C_{mag}} \times C_{imag}$
$L_{stft} = \big\{ (C_{real}^{0.3} - D_{real}^{0.3})^2 + (C_{imag}^{0.3} - D_{imag}^{0.3})^2 \big\}^2$
$L = 10 \times L_{mag} + L_{stft}$
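A compact re-implementation of these equations is sketched below. It is our own reading of the loss: the small constant for numerical stability and the averaging over all bins and frames with `.mean()` are assumptions, and the squared braces of L_stft are followed literally.

```python
import torch

def se_loss(stft_denoised, stft_clean, p=0.3, eps=1e-8):
    """Two-part SE loss on complex STFT tensors of the denoised and clean signals."""
    d_mag = stft_denoised.abs() + eps        # eps for numerical stability (assumption)
    c_mag = stft_clean.abs() + eps
    loss_mag = ((c_mag ** p - d_mag ** p) ** 2).mean()
    # power-compressed complex spectra: scale real/imag parts by mag^p / mag
    d_cmp = (d_mag ** p / d_mag) * stft_denoised
    c_cmp = (c_mag ** p / c_mag) * stft_clean
    per_bin = (c_cmp.real - d_cmp.real) ** 2 + (c_cmp.imag - d_cmp.imag) ** 2
    loss_stft = (per_bin ** 2).mean()        # squared braces as written above
    return 10.0 * loss_mag + loss_stft
```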
B.3 TRAINING PARAMETERS
The parameters for model training are summarized in Table 6. Specifically, the model was trained for 1000 epochs using the adamw optimizer. The initial learning was set to 0.01, and a cosine annealling learning scheduler was used to adjust the learning rate from 0.01 to 0.0001 in 3000 steps. Warmup was applied and involved 30 epochs . The model was evaluated on the evaluation set every epoch, and the best checkpoint (in terms of PESQ) on the evaluation set was saved. The results are reported in terms of four metrics: PESQ, BAK, SIG and OVL (Hu & Loizou, 2007).9
C APPENDIX C: PSEUDO CODE FOR SPLIT & GLUE
9The evaluation script is pysepm (https://github.com/schmiph2/pysepm).
Algorithm 1 Pseudo code for Split & Glue
Input Sequence: X ∈ R^{H×T}: sequence of acoustic features of T frames and H dimensions
Input Parameter: w = {w^1, w^2, ..., w^K}: window sizes of the K chunks
Input Parameter: p = {p^1, p^2, ..., p^K}: padding definition for the K chunks
Input Parameter: s: stride in context expansion
Output: Y ∈ R^{H×T}: sequence of output features of T frames and H dimensions
Ensure: H % K == 0
  {X^1, ..., X^K} = chunk(X, H, K)        ▷ Split X into K pieces on the channel dimension
  for k in range(K) do
    X^k_w = unfold(X^k, w^k, p^k, s)      ▷ Context expansion by unfolding
    Y^k = W^k_A X^k_w + b^k_A             ▷ Linear projection A for chunk k, where W^k_A has shape [Ĥ, w^k × H/K]
  end for
  Y^G = [Y^1; Y^2; ...; Y^K]              ▷ Concatenate the Y^k along the channel dimension
  Y^G = GELU(Y^G)
  Y = W_B Y^G + b_B                       ▷ Linear projection B to glue the chunks, where W_B has shape [H, K × Ĥ]
| 1. What is the novelty of the proposed "split-and-glue" layer in the speech-MLP architecture?
2. How does the performance of speech-MLP compare to other state-of-the-art models on keyword spotting tasks, particularly in terms of parameter efficiency and inference speed?
3. How does the paper's claim of "SOTA" performance on KWS tasks hold up against recent literature, including Matchboxnet, Keyword Transformer, Audiomer, and Audio Spectrogram Transformer?
4. How might the subjective judgments of model complexity be refined or reframed to focus more objectively on benefits such as parameter efficiency and inference speed?
5. Would comparing the performance of speech-MLP to more recent VoiceBank-DEMAND papers strengthen the paper's claims?
6. Are there any potential issues with cherry-picking results that support the claim of "speech-MLP is SOTA," while omitting others that do not support this claim? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a new architecture for speech processing, dubbed "speech-MLP". Besides input layers and output task-specific layers, speech-MLP consists of linear layers, residual connections, instance or layer normalization, GELU activations, and a layer called the "split-and-glue" layer. The split-and-glue layer splits a [batch, time, channels] tensor into N chunks of shape [batch, time, channels / N] along the channels dimension, then applies an unfold operation to provide temporal context, then a linear layer (with a different matrix for each chunk), then concatenates the results back together. Speech-MLP is tested on keyword spotting (with 2 datasets: Google Speech Commands and LibriWords) as well as speech enhancement (voicebank + demand).
Review
Strengths:
The paper is well written and clearly describes the model used and the experiments performed. The descriptions are accompanied by high-quality diagrams and specifications for hyperparameters. The motivations behind model choices are also explained.
Weaknesses:
One main contribution of the paper, the "split and glue" layer, seems like a groupwise convolution layer. A groupwise convolution layer can be implemented by unfold (im2col) followed by a linear layer and concatenation. An unfold followed by a linear layer is a convolution. This significantly reduces the novelty and undermines the claim that the "architecture involves simple linear transformations only". Prior art on keyword spotting with convolutional networks is not hard to find (e.g. [0]). If my analysis is incorrect, and "split-and-glue" cannot be reduced to a groupwise convolution or at least something very similar to it, the paper can be much improved by a comparison to groupwise convolution.
The paper compares KWS results on Google Speech Commands and LibriWords. Although Google Speech Commands is a well known dataset for KWS, LibriWords is not commonly used and, as far as I can tell, has only ever been evaluated on once, in the single paper that proposed this dataset. I would focus on Google Speech Commands results, as that dataset has been widely studied and benchmarked against. Although extra datasets do not reduce the quality of the paper, evaluating on LibriWords does not add much to the paper.
A key part of the claim is that speech-MLP is "SOTA" on KWS. "It can be observed that the speech-MLP-S and speech-MLP-L outperform all the benchmarks for all task". However, upon cursory examination of recent literature, this claim seems dubious. Matchboxnet ([1]) achieves 97.37% on V2-35 with 140k parameters. Keyword Transformer (KWT) [2] achieves 97.51%. Note that KWT is even cited and compared against, but the result from KWT that the authors choose to cite is KWT1 (which speech-MLP outperforms) and not KWT2 or KWT3 (which outperform speech-MLP). (Arguably, this is because KWT2 and KWT3 have way more parameters, but this is not clear at all in the paper, and it's unclear if this is important). More recent works (Audiomer and Audio Spectrogram Transformer [4]) significantly outperform speech-MLP with 99.74 and 98.1%. The paper would be much improved by citing all relevant results rather than cherry-picking the results that support the claim of "speech-MLP is SOTA" and omitting others.
Much of the introduction and motivation is based around model aesthetic cleanliness, that is, around claims that one model is 'more complex' than another and that simpler models are preferable. However, model complexity is subjective and not well defined. While the authors of the paper believe that transformers are a complex model and that the proposed speech-MLP model is "simple", this may not match the expectations of others, for whom a single architecture commonly known and used across industry and research is simpler than a custom architecture with a variety of dataset-specific tweaks. It may be best to leave subjective judgments of complexity out and focus on other benefits of the proposed model, e.g. parameter efficiency, inference speed, etc.
One reason that speech-MLP may make a good architecture is parameter size and efficiency. In comparison to KWT2, which achieves similar / better results, it is much smaller. Other KWS papers will make comparisons of inference time as the model scales; this may help make the case for this architecture as well.
Unlike many of the cited papers, none of the values have standard deviations or confidence intervals. It would help understand the results to have these.
May be good to compare to more recent VoiceBank-DEMAND papers as well.
[0] https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43969.pdf [1] https://arxiv.org/pdf/2004.08531.pdf [2] https://arxiv.org/pdf/2104.00769.pdf [4] https://arxiv.org/pdf/2104.01778v3.pdf |
ICLR | Title
Adversarial Exploration Strategy for Self-Supervised Imitation Learning
Abstract
We present an adversarial exploration strategy, a simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration. Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model contesting with each other. The former collects training samples for the latter, and its objective is to maximize the error of the latter. The latter is trained with samples collected by the former, and generates rewards for the former when it fails to predict the actual action taken by the former. In such a competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, and the inverse dynamics model learns to adapt to the challenging samples. We further propose a reward structure that ensures the DRL agent collects only moderately hard samples and not overly hard ones that prevent the inverse model from imitating effectively. We evaluate the effectiveness of our method on several OpenAI gym robotic arm and hand manipulation tasks against a number of baseline models. Experimental results show that our method is comparable to that directly trained with expert demonstrations, and superior to the other baselines even without any human priors.
1 INTRODUCTION
Over the past decade, imitation learning (IL) has been successfully applied to a wide range of domains, including robot learning (Englert et al., 2013; Schulman et al., 2013), autonomous navigation (Choudhury et al., 2017; Ross et al., 2013), manipulation tasks (Nair et al., 2017; Prieur et al., 2012), and self-driving cars (Codevilla et al., 2018). Traditionally, IL aims to train an imitator to learn a control policy π only from expert demonstrations. The imitator is typically presented with multiple demonstrations during the training phase, with an aim to distill them into π. To learn π effectively and efficiently, a large set of high-quality demonstrations are necessary. This is especially prevalent in current state-of-the-art IL algorithms, such as dataset aggregation (DAgger) (Ross et al., 2011) and generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016). Although these approaches have been the dominant algorithms in IL, a major bottleneck for them is their reliance on high-quality demonstrations, which often require extensive supervision from human experts. In addition, a serious flaw in the learned policy π is its tendency to overfit to demonstration data, preventing it from generalizing to new ones. To overcome the aforementioned challenges in IL, a number of methods have been investigated to enhance the generalizability and data efficiency, or reduce the degree of human supervision. Initial efforts in this direction were based on the idea of meta learning (Duan et al., 2017; Finn et al., 2017; Yu et al., 2018), in which the imitator is trained from a meta learner that is able to quickly learn a new task with only a few set of demonstrations. However, such schemes still require training the meta-learner with tremendous amount of time and demonstration data, leaving much room for improvement. Thus, a rapidly-growing body of literature based on the concept of using forward/inverse dynamics models to learn π within an environment in a self-supervised fashion (Agrawal et al., 2016; Nair et al., 2017; Pathak et al., 2018) has emerged in the past few years. One key advantage of the concept is that it provides an autonomous way for preparing training data, removing the need of human intervention. In this paper, we call it self-supervised IL.
Self-supervised IL allows an imitator to collect training data by itself instead of using predefined extrinsic reward functions or expert supervision during training. It only needs demonstration during inference, drastically decreasing the time and effort required from human experts. Although the core principles of self-supervised IL are straightforward and have been exploited in many fields (Agrawal et al., 2016; Nair et al., 2017; Pathak et al., 2017; 2018), recent research efforts have been dedicated
to addressing the challenges of multi-modality and multi-step planning. For example, the use of forward consistency loss and forward regularizer have been extensively investigated to enhance the task performance of the imitator (Agrawal et al., 2016; Pathak et al., 2018). This becomes especially essential when the lengths of trajectories grow and demonstration samples are sparse, as multiple paths may co-exist to lead the imitator from its initial observation to the goal observation. The issue of multi-step planning has also drawn a lot of attention from researchers, and is usually tackled by recurrent neural networks (RNNs) and step-by-step demonstrations (Nair et al., 2017; Pathak et al., 2018). The above self-supervised IL approaches report promising results, however, most of them are limited in applicability due to several drawbacks. First, traditional methods of data collection are usually inefficient and time-consuming. Inefficient data collection results in poor exploration, giving rise to a degradation in robustness to varying environmental conditions (e.g., noise in motor control) and generalizability to difficult tasks. Second, human bias in data sampling range tailored to specific interesting configurations is often employed (Agrawal et al., 2016; Nair et al., 2017). Although a more general exploration strategy called curiosity-driven exploration was later proposed in Pathak et al. (2017), it focuses only on exploration in states novel to the forward dynamics model, rather than those directly influential to the inverse dynamics model. Furthermore, it does not discuss the applicability to continuous control domains, and fails in high dimensional action spaces according to our experiments in Section 4. Unlike the approaches discussed above, we do not propose to deal with multi-modality or multi-step planning. Instead, we focus our attention on improving the overall quality of the collected samples in the context of self-supervised IL. This motivates us to equip the model with the necessary knowledge to explore the environment in an efficient and effective fashion.
In this paper, we propose a straightforward and efficient self-supervised IL scheme, called adversarial exploration strategy, which motivates exploration of an environment in a self-supervised manner (i.e., without any extrinsic reward or human demonstration). Inspired by Pinto et al. (2017); Shioya et al. (2018); Sukhbaatar et al. (2018), we implement the proposed strategy by jointly training a deep reinforcement learning (DRL) agent and an inverse dynamics model competing with each other. The former explores the environment to collect training data for the latter, and receives rewards from the latter if the data samples are considered difficult. The latter is trained with the training data collected by the former, and only generates rewards when it fails to predict the true actions performed by the former. In such an adversarial setting, the DRL agent is rewarded only for the failure of the inverse dynamics model. Therefore, the DRL agent learns to sample hard examples to maximize the chances to fail the inverse dynamics model. On the other hand, the inverse dynamics model learns to be robust to the hard examples collected by the DRL agent by minimizing the probability of failures. As a result, as the inverse dynamics model becomes stronger, the DRL agent is also incentivized to search for harder examples to obtain rewards. Overly hard examples, however, may lead to biased exploration and cause instability of the learning process. In order to stabilize the learning curve of the inverse dynamics model, we further propose a reward structure such that the DRL agent is encouraged to explore moderately hard examples for the inverse dynamics model, but refraining from too difficult ones for the latter to learn. The self-regulating feedback structure between the DRL agent and the inverse dynamics model enables them to automatically construct a curriculum for exploration.
We perform extensive experiments to validate adversarial exploration strategy on multiple OpenAI gym (Brockman et al., 2016) robotic arm and hand manipulation task environments simulated by the MuJoCo physics engine (Todorov et al., 2012), including FetchReach, FetchPush, FetchPickAndPlace, FetchSlide, and HandReach. These environments are intentionally selected by us for evaluating the performance of inverse dynamics model, as each of them allows only a very limited set of chained actions to transition the robotic arms and hands to target observations. We examine the effectiveness of our method by comparing it against a number of self-supervised IL schemes. The experimental results show that our method is more effective and data-efficient than the other self-supervised IL schemes for both low- and high-dimensional observation spaces, as well as in environments with high-dimensional action spaces. We also demonstrate that in most of the cases the performance of the inverse dynamics model trained by our method is comparable to that directly trained with expert demonstrations. The above observations suggest that our method is superior to the other self-supervised IL schemes even in the absence of human priors. We further evaluate our method on environments with action space perturbations, and show that our method is able to achieve satisfactory success rates. To justify each of our design decisions, we provide a comprehensive set of ablative analysis and discuss their implications. The contributions of this work are summarized as follows:
• We introduce an adversarial exploration strategy for self-supervised IL. It consists of a DRL agent and an inverse dynamics model developed for efficient exploration and data collection.
• We employ a competitive scheme for the DRL agent and the inverse dynamics model, enabling them to automatically construct a curriculum for exploration of observation space.
• We introduce a reward structure for the proposed scheme to stabilize the training process.
• We demonstrate the proposed method and compare it with a number of baselines for multiple robotic arm and hand manipulation tasks in both low- and high-dimensional state spaces.
• We validate that our method is generalizable to tasks with high-dimensional action spaces.
The remainder of this paper is organized as follows. Section 2 introduces background material. Section 3 describes the proposed adversarial exploration strategy in detail. Section 4 reports the experimental results, and provides an in-depth ablative analysis of our method. Section 5 concludes.
2 BACKGROUND
In this section, we briefly review DRL, policy gradient methods, as well as inverse dynamics model.
2.1 DEEP REINFORCEMENT LEARNING AND POLICY GRADIENT METHODS
DRL trains an agent to interact with an environment E . At each timestep t, the agent receives an observation xt ∈ X , where X is the observation space of E . It then takes an action at from the action space A based on its current policy π, receives a reward r, and transitions to the next observation x′. The policy π is represented by a deep neural network with parameters θ, and is expressed as π(a|x, θ). The goal of the agent is to learn a policy to maximize the discounted sum of rewards Gt:
$G_t = \sum_{\tau=t}^{T} \gamma^{\tau-t} r(x_\tau, a_\tau)$, (1)
where t is the current timestep, γ ∈ (0, 1] the discount factor, and T the horizon. Policy gradient methods (Mnih et al., 2016; Sutton et al., 2000; Williams, 1992) are a class of RL techniques that directly optimize the parameters of a stochastic policy approximator using policy gradients. Although these methods have achieved remarkable success in a variety of domains, the high variance of gradient estimates has been a major challenge. Trust region policy optimization (TRPO) (Schulman et al., 2015) circumvented this problem by applying a trust-region constraint to the scale of policy updates. However, TRPO is a second-order algorithm, which is relatively complicated and not compatible with architectures that embrace noise or parameter sharing (Schulman et al., 2017). In this paper, we employ a more recent family of policy gradient methods, called proximal policy optimization (PPO) (Schulman et al., 2017). PPO is an approximation to TRPO, which similarly prevents large changes to the policy between updates, but requires only first-order optimization. PPO is superior in its generalizability and sample complexity while retaining the stability and reliability of TRPO 1.
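As a small worked example of Eq. (1), the discounted return G_t for every step of a finite trajectory can be computed in one backward pass; the sketch below is illustrative and not tied to any particular library.

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = sum_{tau=t}^{T} gamma^(tau-t) * r_tau for every t."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns
```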
2.2 INVERSE DYNAMICS MODEL
An inverse dynamics model I takes as input a pair of observations (x, x′), and predicts the action â required to reach the next observation x′ from the current observation x. It is formally expressed as:
â = I(x, x′|θI), (2)
where (x, x′) are sampled from the collected data, and θI represents the trainable parameters of I . During the training phase, θI is iteratively updated to minimize the loss function LI , expressed as:
LI(a, â|θI) = d(a, â), (3)
where d is a distance metric, and a the ground truth action. During the testing phase, a sequence of observations {x̂0, x̂1, · · · , x̂T } is first captured from an expert demonstration. A pair of observations (x̂t, x̂t+1) is then fed into I at each timestep t. Starting from x̂0, the objective of I is to predict a sequence of actions {â0, â1, · · · , âT−1} and transition the final observation x̂T as close as possible.
3 METHODOLOGY
In this section, we first describe the proposed adversarial exploration strategy. We then explain the training methodology in detail. Finally, we discuss a technique for stabilizing the training process.
3.1 ADVERSARIAL EXPLORATION STRATEGY
Fig. 1 shows a framework that illustrates the proposed adversarial exploration strategy, which includes a DRL agent P and an inverse dynamics model I . Assume that Φπ : {x0, a0, x1, a1 · · · , xT } is the
1For more details on PPO, please refer to supplementary material S.2.
sequence of observations and actions generated by P as it explores E using a policy π. At each timestep t, P collects a 3-tuple training sample (xt, at, xt+1) for I , while I predicts an action ât and generates a reward rt for P . In this work, I is modified from Eq. (2) to include an additional hidden vector ht, which recurrently encodes the information of the past observations. I is thus expressed as:
$\hat{a}_t = I(x_t, x_{t+1} | h_t, \theta_I), \quad h_t = f(h_{t-1}, x_t)$, (4)
where f(·) denotes the recurrent function. θI is iteratively updated to minimize LI , formulated as:
$\min_{\theta_I} L_I(a_t, \hat{a}_t|\theta_I) = \min_{\theta_I} \beta \|a_t - \hat{a}_t\|^2$, (5)
where β is a scaling constant. We employ mean squared error β||at − ât||2 as the distance metric d(at, ât), since we only consider continuous control domains in this paper. It can be replaced with a cross-entropy loss for discrete control tasks. We directly use LI as the reward rt for P , expressed as:
rt(xt, at, xt+1) = LI(at, ât|θI) = β||at − I(xt, xt+1|ht, θI)||2. (6)
Our method aims at improving both the quality and efficiency of the data collection process performed by P, as well as the performance of I. Therefore, the goal of the proposed framework is twofold. First, P has to learn an adversarial policy π_adv(a_t|x_t) such that its cumulative discounted reward G_t|π_adv = Σ_{τ=t}^{T} γ^{τ−t} r_t(x_τ, a_τ, x_{τ+1}) is maximized. Second, I has to learn an optimal θ_I such that Eq. (6) is minimized. Minimizing L_I (i.e., r_t) leads to a decreased G_t|π_adv, forcing P to enhance π_adv to explore more difficult samples to increase G_t|π_adv. This implies that P is motivated to focus on I's weak points, instead of randomly collecting ineffective training samples. Training I with hard samples not only accelerates its learning progress, but also helps to boost its performance.
3.2 TRAINING METHODOLOGY
We describe the training methodology of our adversarial exploration strategy by a pseudocode presented in Algorithm 1. Assume that P ’s policy πadv is parameterized by a set of trainable parameters θP , and is represented as πadv(at|xt, θP ). We create two buffers ZP and ZI for storing the training samples of P and I , respectively. In the beginning, ZP , ZI , E , θP , θI , πadv , as well as a timestep cumulative counter c are initialized. A number of hyperparameters are set to appropriate values, including the number of iterations Niter, the number of episodes Nepisode, the horizon T , as well as the update period TP of θP . At each timestep t, P perceives the current observation xt from E , takes an action at according to πadv(at|xt, θP ), and receives the next observation xt+1 and a termination indicator ξ (lines 9-11). ξ is set to 1 only when t equals T , otherwise it is set to 0. We then store (xt, at, xt+1, ξ) and (xt, at, xt+1) in ZP and ZI , respectively. We update θP every TP timesteps using the samples stored in ZP , as shown in (lines 13-21). At the end of each episode, we update θI with samples drawn from ZI according to the loss function LI defined in Eq. (5) (line 23).
3.3 STABILIZATION TECHNIQUE
Although the adversarial exploration strategy is effective in collecting hard samples, it requires additional adjustments if P becomes so strong that the collected samples are too difficult for I to learn. Overly difficult samples lead to a large variance in the gradients derived from L_I, which in turn causes a performance drop in I and instability in its learning process. We analyze this phenomenon in greater detail in Section 4.5. To tackle the issue, we propose a training technique that reshapes r_t as follows:
rt := −|rt − δ|, (7)
Algorithm 1 Adversarial exploration strategy
1: Initialize Z_P, Z_I, E, and model parameters θ_P & θ_I
2: Initialize π_adv(a_t|x_t, θ_P)
3: Initialize the timestep cumulative counter c = 0
4: Set N_iter, N_episode, T, and T_P
5: for iteration i = 1 to N_iter do
6:   for episode e = 1 to N_episode do
7:     for timestep t = 0 to T do
8:       P perceives x_t from E, and predicts an action a_t according to π_adv(a_t|x_t, θ_P)
9:       x_{t+1} = E(x_t, a_t)
10:      ξ = 1[t == T]
11:      Store (x_t, a_t, x_{t+1}, ξ) in Z_P
12:      Store (x_t, a_t, x_{t+1}) in Z_I
13:      if (c % T_P) == 0 then
14:        Initialize an empty batch B
15:        Initialize a recurrent state h_t
16:        for (x_t, a_t, x_{t+1}, ξ) in Z_P do
17:          Evaluate â_t = I(x_t, x_{t+1}|h_t, θ_I) (calculated from Eq. (4))
18:          Evaluate r_t(x_t, a_t, x_{t+1}) = L_I(a_t, â_t|θ_I) (calculated from Eq. (6))
19:          Store (x_t, a_t, x_{t+1}, r_t) in B
20:        Update θ_P with the gradient calculated from the samples of B
21:        Reset Z_P
22:      c = c + 1
23:    Update θ_I with the gradient calculated from the samples of Z_I (according to Eq. (5))
24: end
where δ is a pre-defined threshold value. This technique poses a restriction on the range of rt, driving P to gather moderate samples instead of overly hard ones. Note that the value of δ affects the learning speed and the final performance. We plot the impact of δ on the learning curve of I in Section 4.5. We further provide an example in our supplementary material to visualize the effect of this technique.
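The reward of Eq. (6) and the reshaping of Eq. (7) can be sketched together as follows. The recurrent inverse-model signature and the values of β and δ are placeholders for illustration, not the settings used in the experiments.

```python
import torch

def adversarial_reward(inv_model, x_t, x_next, a_t, h_t, beta=1.0, delta=1.5):
    """Reward the DRL agent with the inverse model's prediction error,
    pulled toward a target difficulty delta as in Eq. (7)."""
    with torch.no_grad():
        a_hat, h_next = inv_model(x_t, x_next, h_t)   # Eq. (4): recurrent inverse model (assumed signature)
        r = beta * ((a_t - a_hat) ** 2).sum()         # Eq. (6): prediction error as reward
        r = -(r - delta).abs()                        # Eq. (7): prefer moderately hard samples
    return r.item(), h_next
```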
4 EXPERIMENTAL RESULTS
In this section, we present experimental results for a series of robotic tasks, and validate that (i) our method is effective in both low- and high-dimensional observation spaces; (ii) our method is effective in environments with high-dimensional action spaces; (iii) our method is more data efficient than the baseline methods; and (iv) our method is robust against action space perturbations. We first introduce our experimental setup. Then, we report experimental results of robotic arm and hand manipulation tasks. Finally, we present a comprehensive set of ablative analysis to validate our design decisions.
4.1 EXPERIMENTAL SETUP
We first describe the environments and tasks. Next, we explain the evaluation procedure and the method for collecting expert demonstrations. We then walk through the baselines used for comparison.
4.1.1 ENVIRONMENTS AND TASKS
We evaluate our method on a number of robotic arm and hand manipulation tasks via OpenAI gym (Brockman et al., 2016) environments simulated by the MuJoCo (Todorov et al., 2012) physics engine. We use the Fetch and Shadow Dexterous Hand (Plappert et al., 2018b) for the arm and hand manipulation tasks, respectively. For the arm manipulation tasks, which include FetchReach, FetchPush, FetchPickAndPlace, and FetchSlide, the imitator (i.e., the inverse dynamic model I) takes as inputs the positions and velocities of a gripper and a target object. It then infers the gripper’s action in 3-dimensional space to manipulate it. For the hand manipulation task HandReach, the imitator takes as inputs the positions and velocities of the fingers of a robotic hand, and determines the velocities of the joints to achieve the goal. In addition to low-dimensional observations (i.e., position, velocity, and gripper state), we further perform experiments for the above tasks using visual observations (i.e., high-dimensional observations) in the form of camera images taken from a third-person perspective. The detailed description of the above tasks is specified in Plappert et al. (2018b). For the detailed configurations of these tasks, please refer to our supplementary material.
4.1.2 EVALUATION PROCEDURE
The primary objective of our experiments is to demonstrate the efficiency of the proposed adversarial exploration strategy in collecting training data (in a self-supervised manner) for the imitator. We compare our strategy against a number of self-supervised data collection methods (referred to as ”baselines” or ”baseline methods”) described in Section 4.1.4. As different baseline methods employ different data collection strategies, the learning curve of the imitator also varies for different cases. For a fair comparison, the model architecture of the imitator and the amount of training data are fixed
for all cases. All of the experimental results are evaluated and averaged over 20 trials, corresponding to 20 different random initial seeds. In each trial, we train an imitator by the training data collected by a single self-supervised data collection method. At the beginning of each episode, the imitator receives a sequence of observations {x̂0, x̂1, · · · , x̂T } from a successful expert demonstration. At each timestep t, the imitator infers an action ât from an expert observation x̂t+1 and its current observation xt by Eq. (4). We periodically evaluate the imitator every 10K timesteps. The evaluation is performed by averaging the success rates of reaching x̂T over 500 episodes. The configuration of the imitator and the hyperparameters of the baselines are summarized in the supplementary material.
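A simplified sketch of this evaluation protocol is given below; `rollout_fn` is a hypothetical helper that replays one expert observation sequence with the imitator and reports whether the final observation x̂T was reached.

```python
def evaluate_success_rate(imitator, demos, rollout_fn, n_episodes=500):
    successes = 0
    for i in range(n_episodes):
        demo = demos[i % len(demos)]       # expert observations {x_0, ..., x_T}
        successes += int(rollout_fn(imitator, demo))
    return successes / n_episodes          # averaged success rate of reaching x_T
```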
4.1.3 COLLECTION OF EXPERT DEMONSTRATIONS
For each task mentioned in Section 4.1.1, we first randomly configure task-relevant settings (e.g., goal position, initial state, etc.). We then collect demonstrations from non-trivial and successful episodes performed by a pre-trained expert agent (Andrychowicz et al., 2017). Please note that the collected demonstrations only contain sequences of observations. The implementation details of the expert agent and the method for filtering out trivial episodes are presented in our supplementary material.
4.1.4 BASELINE METHODS
We compare our proposed methodology with the following four baseline methods in our experiments.
• Random: This method collects training samples by random exploration. We consider it to be an important baseline because of its simplicity and prevalence in a number of research works on self-supervised IL (Agrawal et al., 2016; Nair et al., 2017; Pathak et al., 2018).
• Demo: This method trains the imitator directly with expert demonstrations. It serves as the performance upper bound, as the training data is the same as the testing data for this method.
• Curiosity: This method trains a DRL agent via curiosity (Pathak et al., 2017; 2018) to collect training samples. Unlike the original implementation, we replace its DRL algorithm with PPO, as training should be done on a single thread for a fair comparison with the other baselines. This is also an important baseline due to its effectiveness in Pathak et al. (2018).
• Noise (Plappert et al., 2018a): In this method, noise is injected into the parameter space of a DRL agent to encourage exploration (Plappert et al., 2018a). Please note that its exploratory behavior relies entirely on parameter space noise, instead of using any extrinsic reward. We include this method due to its superior performance and data efficiency in many DRL tasks.
4.2 PERFORMANCE COMPARISON IN ROBOTIC ARM MANIPULATION TASKS
We compare the performance of the proposed method and the baselines on the robotic arm manipulation tasks described in Section 4.1.1. As opposed to discrete control domains, these tasks are especially challenging, as the sample complexity grows in continuous control domains. Furthermore, the imitator may not have the complete picture of the environment dynamics, increasing its difficulty to learn an inverse dynamics model. In FetchSlide, for instance, the movement of the object on the slippery surface is affected by both friction and the force exerted by the gripper. It thus motivates us to investigate whether the proposed method can help overcome the challenge. In the subsequent paragraphs, we discuss the experimental results in both low- and high-dimensional observation spaces, and plot them in Figs. 2 and 3, respectively. All of the results are obtained by following the procedure described in Section 4.1.2. The shaded regions in Figs. 2 and 3 represent the confidence intervals.
Low-dimensional observation spaces. Fig. 2 plots the learning curves for all of the methods in low-dimensional observation spaces. In all of the tasks, our method yields superior or comparable performance to the baselines except for Demo, which is trained directly with expert demonstrations. In FetchReach, it can be seen that every method achieves a success rate of 1.0. This implies that it does not require a sophisticated exploration strategy to learn an inverse dynamics model in an environment where the dynamics is relatively simple. It should be noted that although all methods reach the same final success rate, ours learns significantly faster than Demo. In contrast, in FetchPush, our method is comparable to Demo, and demonstrates superior performance to the other baselines. Our method also learns drastically faster than all the other baselines, which confirms that the proposed strategy does improve the performance and efficiency of self-supervised IL. Our method is particularly effective in tasks that require an accurate inverse dynamics model. In FetchPickAndPlace, for example, our method surpasses all the other baselines. However, all methods including Demo fail to learn a successful inverse dynamics model in FetchSlide, which suggests that it is difficult to train an imitator when the outcome of an action is not completely dependent on the action itself. It is worth noting that Curiosity loses to Random in FetchPush and FetchSlide, and Noise performs even worse than these
two methods in all of the tasks. We therefore conclude that Curiosity is not suitable for continuous control tasks, and the parameter space noise strategy cannot be directly applied to self-supervised IL. In addition to the quantitative results presented above, we further discuss the empirical results qualitatively. Please refer to our supplementary material for a description of the qualitative results.
High-dimensional observation spaces. Fig. 3 plots the learning curves of all methods in high-dimensional observation spaces. It can be seen that our method performs significantly better than the other baseline methods in most of the tasks, and is comparable to Demo. In FetchPickAndPlace, our method is the only one that learns a successful inverse dynamics model. Similar to the results in Fig. 2, Curiosity is no better than Random in high-dimensional observation spaces. Please note that we do not include Noise in Fig. 3, as it already performs poorly in the low-dimensional settings.
4.3 PERFORMANCE COMPARISON IN ROBOTIC HAND MANIPULATION TASK
Fig. 2 plots the learning curves for each of the methods considered. Please note that Curiosity, Noise and our method are pre-trained with 30K samples collected by random exploration, as we observe that these methods on their own suffer from large errors in an early stage during training, which prevents them from learning at all. After the first 30K samples, they are trained with data collected by their exploration strategy instead. From the results in Fig. 2, it can be seen that Demo easily stands out from the other methods as the best-performing model, surpassing them all by a considerable extent. Although our method is not as impressive as Demo, it significantly outperforms all of the other baseline methods, achieving a success rate of 0.4 while the others are still stuck at around 0.2.
The reason that the inverse dynamics models trained by the self-supervised data-collection strategies discussed in this paper (including ours and the other baselines) are not comparable to the Demo baseline in the HandReach task is primarily due to the high-dimensional action space. It is observed that the data collected by the self-supervised data-collection strategies only cover a very limited range of the state space in the HandReach environment. Therefore, the inverse dynamics models trained with these data only learn to imitate trivial poses, leading to the poor success rates presented in Fig. 2.
4.4 ROBUSTNESS TO ACTION SPACE PERTURBATION
We evaluate the performance of the imitator trained in an environment with action space perturbations to validate the robustness of our adversarial exploration strategy. In such an environment, every action taken by the DRL agent is perturbed by Gaussian random noise, such that the training samples collected by the DRL agent are not in line with its actual intentions. Please note that we only inject noise during the training phase, as we aim to validate the robustness of the proposed data collection strategy. The scale of the injected noise is specified in the supplementary material. We report the performance change rates of various methods for different tasks in Table 1. The performance change rate is defined as (Prperturb − Prorig) / Prorig, where Prperturb and Prorig represent the highest success rates with and without action space perturbations, respectively. From Table 1, it can be seen that our method retains the performance for most of the tasks, indicating that our method is robust to action space perturbations during the training phase. Please note that although Curiosity and Noise also achieve a change rate of 0% in HandReach and FetchSlide, they are not considered robust due to their poor performance in the original environment (Fig. 2). Another interesting observation is that our
method even gains some performance from action space perturbations in FetchPush and HandReach, which we leave as one of our future directions. We thus conclude that our method is robust to action space perturbations during the training phase, making it a practical option in real-world settings.
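The two quantities used in this experiment can be written down directly; the sketch below shows a Gaussian action perturbation and the change-rate formula from Table 1, where the noise scale sigma is a placeholder rather than the value used in the paper.

```python
import numpy as np

def perturb_action(action, sigma=0.1, rng=np.random):
    """Add Gaussian noise to an action during training (sigma is a placeholder)."""
    return action + rng.normal(0.0, sigma, size=np.shape(action))

def performance_change_rate(pr_perturb, pr_orig):
    """(Pr_perturb - Pr_orig) / Pr_orig, as reported in Table 1."""
    return (pr_perturb - pr_orig) / pr_orig

print(performance_change_rate(0.95, 1.0))   # ≈ -0.05, i.e. a 5% drop under perturbation
```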
4.5 ABLATIVE ANALYSIS
In this section, we provide a set of ablative analyses. We examine the effectiveness of our method by investigating the training loss distribution, the stabilization technique, and the influence of δ. Please note that the value of δ is set to 1.5 by default, as described in our supplementary material.
Training loss distribution. Fig. 4 plots the probability density function (PDF) of LI (derived from Eq. (5)) by kernel density estimation (KDE) for the first 2K training batches during the training phase. The vertical axis corresponds to the probability density, while the horizontal axis represents the scale of LI . The curves Ours (w stab) and Ours (w/o stab) represent the cases where the stabilization technique described in Section 3.3 is employed or not, respectively. We additionally plot the curve Random in Fig. 4 to highlight the effectiveness of our method. It can be observed that both Ours (w stab) and Ours (w/o stab) concentrate on notably higher loss values than Random. This observation implies that adversarial exploration strategy does explore hard samples for inverse dynamics model.
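A loss PDF of this kind can be reproduced with standard kernel density estimation; the snippet below is only a sketch, and the loss values are synthetic placeholders rather than the recorded LI values.

```python
import numpy as np
from scipy.stats import gaussian_kde

losses = np.random.gamma(shape=2.0, scale=0.8, size=2000)   # stand-in for batch losses
kde = gaussian_kde(losses)                                   # estimate the PDF of L_I
grid = np.linspace(0.0, losses.max(), 200)
density = kde(grid)           # probability density along the loss axis, as in Fig. 4
```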
Validation of the stabilization technique. We validate the proposed stabilization technique in terms of the PDF of LI and the learning curve of the imitator, and plot the results in Figs. 4 and 5, respectively. From Fig. 4, it can be observed that the modes of Ours (w stab) are lower than those of Ours (w/o stab) in most cases, implying that the stabilization technique indeed motivates the DRL agents to favor those moderately hard samples. We also observe that for each of the five cases, the mode of Ours (w stab) is close to the value of δ (plotted in a dotted line), indicating that our reward structure presented in Eq. (7) does help to regulate LI (and thus rt) to be around δ. To further demonstrate the effectiveness of the stabilization technique, we compare the learning curves of Ours (w stab) and Ours (w/o stab) in Fig. 5. It is observed that for the initial 10K samples of the five cases, the success rates of Ours (w/o stab) are comparable to those of Ours (w stab). However, their performance degrade drastically during the rest of the training phase. This observation confirms that the stabilization technique does contribute significantly to our adversarial exploration strategy.
Although most of the DRL works suggest that the rewards should be re-scaled or clipped within a range (e.g., from -1 to 1), the unbounded rewards do not introduce any issues during the training process of our experiments. The empirical rationale is that the rewards received by the DRL agent are regulated by Eq. (7) to be around δ, as described in Section 4.5 and depicted in Fig. 4. Without the stabilization technique, however, the learning curves of the inverse dynamics model degrade drastically (as illustrated in Fig. 2), even if the reward clipping technique is applied.
Influence of δ. Fig. 6 compares the learning curves of the imitator for different values of δ. For instance, Ours(0.1) corresponds to δ = 0.1. It is observed that for most of the tasks, the success rates drop when δ is set to an overly high or low value (e.g., 100.0 or 0.0), suggesting that a moderate value of δ is necessary for the stabilization technique. The value of δ can be adjusted dynamically by the adaptive scaling technique presented in Plappert et al. (2018a), which is left as our future direction.
From the analysis presented above, we conclude that the proposed adversarial exploration strategy is effective in collecting difficult training data for the imitator. The analysis also validates that our
stabilization technique indeed leads to superior performance, and is capable of guiding the DRL agent to collect moderately hard samples. This enables the imitator to pursue a stable learning curve.
5 CONCLUSION
In this paper, we presented an adversarial exploration strategy, which consists of a DRL agent and an inverse dynamics model competing with each other for self-supervised IL. The former is encouraged to adversarially collect difficult training data for the latter, such that the training efficiency of the latter is significantly enhanced. Experimental results demonstrated that our method substantially improved the data collection efficiency in multiple robotic arm and hand manipulation tasks, and boosted the performance of the inverse dynamics model in both low- and high-dimensional observation spaces. In addition, we validated that our method is generalizable to environments with high-dimensional action spaces. Moreover, we showed that our method is robust to action space perturbations. Finally, we provided a set of ablative analyses to validate the effectiveness of each of our design decisions.
1. What is the main contribution of the paper on self-supervised imitation learning?
2. What are the strengths of the proposed method, particularly in its effectiveness in block manipulation tasks?
3. What are the limitations and potential failures of the method in other types of environments?
4. How do the environments selected for evaluation impact the performance of the inverse dynamics model? Are there any specific environment types that may not be well-suited for this method?
5. Why do the success rates of the self-supervised methods and the random baseline differ in Figure 2, even when pre-trained using 30k random samples?
6. What is the significance of the stabilizer value delta in the method's performance, and how does it affect the reward system?
7. How does the choice of delta impact the method's performance, and what is the optimal value for delta?
8. Can you provide more information or explanations regarding the differences in performance between the "no stabilizer" case and the case with delta=3, despite having similar peak PDF values? | Review | Review
This paper presents a system for self-supervised imitation learning using an RL agent that is rewarded for finding actions that the system does not yet predict well given the current state. More precisely, an imitation learner I is trained to predict an action A given a desired observation state transition xt->xt+1; the training samples for I are generated using an RL policy that yields an action A to train given xt (a physics engine evaluates xt+1 from xt and A). The RL policy is rewarded using the loss incurred by I's prediction of A, so that moderately high loss values produce the highest reward. In this way, the RL agent learns to produce effective training samples that are not too easy or hard for the learner. The method is evaluated on five block manipulation tasks, comparing to training samples generated by other recent self-supervised methods, as well as those found using a pretrained expert model for each task.
Overall, this method exploration seems quite effective on the tasks evaluated. I'd be curious to know more about the limits and failures of the method, e.g. in other types of environments.
Additional questions:
- p.2 mentions that the environments "are intentionally selected by us for evaluating the performance of inverse dynamics model, as each of them allows only a very limited set of chained actions". What sort of environments would be less well fit? Are there any failure cases of this method where other baselines perform better?
- sec 4.3 notes that the self-supervised methods are pre-trained using 30k random samples before switching to the exploration policy, but in Fig 2, the success rates do not coincide between the systems and the random baseline, at either samples=0 or samples=30k --- should they? if not, what differences caused this?
- figs. 4, 5 and 6 all relate to the stabilizer value delta, and I have a couple questions here: (i) for what delta does performance start to degrade? At delta=inf, I think it should be the same as no stabilizer, while at delta=0 is the exact opposite reward (i.e. negative loss, easy samples). (ii) delta=3 is evaluated, and performance looks decent for this in fig 6 --- but fig 4 shows that the peak PDF of "no stabilizer" is around 3 as well, yet "no stabilizer" performs poorly in Fig 5. Why is this, if it tends to produce actions with loss around 3 in both cases? |
ICLR | Title
Adversarial Exploration Strategy for Self-Supervised Imitation Learning
Abstract
We present an adversarial exploration strategy, a simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration. Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model contesting with each other. The former collects training samples for the latter, and its objective is to maximize the error of the latter. The latter is trained with samples collected by the former, and generates rewards for the former when it fails to predict the actual action taken by the former. In such a competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, and the inverse dynamics model learns to adapt to the challenging samples. We further propose a reward structure that ensures the DRL agent collects only moderately hard samples and not overly hard ones that prevent the inverse model from imitating effectively. We evaluate the effectiveness of our method on several OpenAI gym robotic arm and hand manipulation tasks against a number of baseline models. Experimental results show that our method is comparable to that directly trained with expert demonstrations, and superior to the other baselines even without any human priors.
1 INTRODUCTION
Over the past decade, imitation learning (IL) has been successfully applied to a wide range of domains, including robot learning (Englert et al., 2013; Schulman et al., 2013), autonomous navigation (Choudhury et al., 2017; Ross et al., 2013), manipulation tasks (Nair et al., 2017; Prieur et al., 2012), and self-driving cars (Codevilla et al., 2018). Traditionally, IL aims to train an imitator to learn a control policy π only from expert demonstrations. The imitator is typically presented with multiple demonstrations during the training phase, with an aim to distill them into π. To learn π effectively and efficiently, a large set of high-quality demonstrations are necessary. This is especially prevalent in current state-of-the-art IL algorithms, such as dataset aggregation (DAgger) (Ross et al., 2011) and generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016). Although these approaches have been the dominant algorithms in IL, a major bottleneck for them is their reliance on high-quality demonstrations, which often require extensive supervision from human experts. In addition, a serious flaw in the learned policy π is its tendency to overfit to demonstration data, preventing it from generalizing to new ones. To overcome the aforementioned challenges in IL, a number of methods have been investigated to enhance the generalizability and data efficiency, or reduce the degree of human supervision. Initial efforts in this direction were based on the idea of meta learning (Duan et al., 2017; Finn et al., 2017; Yu et al., 2018), in which the imitator is trained from a meta learner that is able to quickly learn a new task with only a few set of demonstrations. However, such schemes still require training the meta-learner with tremendous amount of time and demonstration data, leaving much room for improvement. Thus, a rapidly-growing body of literature based on the concept of using forward/inverse dynamics models to learn π within an environment in a self-supervised fashion (Agrawal et al., 2016; Nair et al., 2017; Pathak et al., 2018) has emerged in the past few years. One key advantage of the concept is that it provides an autonomous way for preparing training data, removing the need of human intervention. In this paper, we call it self-supervised IL.
Self-supervised IL allows an imitator to collect training data by itself instead of using predefined extrinsic reward functions or expert supervision during training. It only needs demonstration during inference, drastically decreasing the time and effort required from human experts. Although the core principles of self-supervised IL are straightforward and have been exploited in many fields (Agrawal et al., 2016; Nair et al., 2017; Pathak et al., 2017; 2018), recent research efforts have been dedicated
to addressing the challenges of multi-modality and multi-step planning. For example, the use of forward consistency loss and forward regularizer have been extensively investigated to enhance the task performance of the imitator (Agrawal et al., 2016; Pathak et al., 2018). This becomes especially essential when the lengths of trajectories grow and demonstration samples are sparse, as multiple paths may co-exist to lead the imitator from its initial observation to the goal observation. The issue of multi-step planning has also drawn a lot of attention from researchers, and is usually tackled by recurrent neural networks (RNNs) and step-by-step demonstrations (Nair et al., 2017; Pathak et al., 2018). The above self-supervised IL approaches report promising results, however, most of them are limited in applicability due to several drawbacks. First, traditional methods of data collection are usually inefficient and time-consuming. Inefficient data collection results in poor exploration, giving rise to a degradation in robustness to varying environmental conditions (e.g., noise in motor control) and generalizability to difficult tasks. Second, human bias in data sampling range tailored to specific interesting configurations is often employed (Agrawal et al., 2016; Nair et al., 2017). Although a more general exploration strategy called curiosity-driven exploration was later proposed in Pathak et al. (2017), it focuses only on exploration in states novel to the forward dynamics model, rather than those directly influential to the inverse dynamics model. Furthermore, it does not discuss the applicability to continuous control domains, and fails in high dimensional action spaces according to our experiments in Section 4. Unlike the approaches discussed above, we do not propose to deal with multi-modality or multi-step planning. Instead, we focus our attention on improving the overall quality of the collected samples in the context of self-supervised IL. This motivates us to equip the model with the necessary knowledge to explore the environment in an efficient and effective fashion.
In this paper, we propose a straightforward and efficient self-supervised IL scheme, called adversarial exploration strategy, which motivates exploration of an environment in a self-supervised manner (i.e., without any extrinsic reward or human demonstration). Inspired by Pinto et al. (2017); Shioya et al. (2018); Sukhbaatar et al. (2018), we implement the proposed strategy by jointly training a deep reinforcement learning (DRL) agent and an inverse dynamics model competing with each other. The former explores the environment to collect training data for the latter, and receives rewards from the latter if the data samples are considered difficult. The latter is trained with the training data collected by the former, and only generates rewards when it fails to predict the true actions performed by the former. In such an adversarial setting, the DRL agent is rewarded only for the failure of the inverse dynamics model. Therefore, the DRL agent learns to sample hard examples to maximize the chances to fail the inverse dynamics model. On the other hand, the inverse dynamics model learns to be robust to the hard examples collected by the DRL agent by minimizing the probability of failures. As a result, as the inverse dynamics model becomes stronger, the DRL agent is also incentivized to search for harder examples to obtain rewards. Overly hard examples, however, may lead to biased exploration and cause instability of the learning process. In order to stabilize the learning curve of the inverse dynamics model, we further propose a reward structure such that the DRL agent is encouraged to explore moderately hard examples for the inverse dynamics model, but refraining from too difficult ones for the latter to learn. The self-regulating feedback structure between the DRL agent and the inverse dynamics model enables them to automatically construct a curriculum for exploration.
We perform extensive experiments to validate adversarial exploration strategy on multiple OpenAI gym (Brockman et al., 2016) robotic arm and hand manipulation task environments simulated by the MuJoCo physics engine (Todorov et al., 2012), including FetchReach, FetchPush, FetchPickAndPlace, FetchSlide, and HandReach. These environments are intentionally selected by us for evaluating the performance of inverse dynamics model, as each of them allows only a very limited set of chained actions to transition the robotic arms and hands to target observations. We examine the effectiveness of our method by comparing it against a number of self-supervised IL schemes. The experimental results show that our method is more effective and data-efficient than the other self-supervised IL schemes for both low- and high-dimensional observation spaces, as well as in environments with high-dimensional action spaces. We also demonstrate that in most of the cases the performance of the inverse dynamics model trained by our method is comparable to that directly trained with expert demonstrations. The above observations suggest that our method is superior to the other self-supervised IL schemes even in the absence of human priors. We further evaluate our method on environments with action space perturbations, and show that our method is able to achieve satisfactory success rates. To justify each of our design decisions, we provide a comprehensive set of ablative analysis and discuss their implications. The contributions of this work are summarized as follows:
• We introduce an adversarial exploration strategy for self-supervised IL. It consists of a DRL agent and an inverse dynamics model developed for efficient exploration and data collection.
• We employ a competitive scheme for the DRL agent and the inverse dynamics model, enabling them to automatically construct a curriculum for exploration of observation space.
• We introduce a reward structure for the proposed scheme to stabilize the training process.
• We demonstrate the proposed method and compare it with a number of baselines for multiple robotic arm and hand manipulation tasks in both low- and high-dimensional state spaces.
• We validate that our method is generalizable to tasks with high-dimensional action spaces.
The remainder of this paper is organized as follows. Section 2 introduces background material. Section 3 describes the proposed adversarial exploration strategy in detail. Section 4 reports the experimental results, and provides an in-depth ablative analysis of our method. Section 5 concludes.
2 BACKGROUND
In this section, we briefly review DRL, policy gradient methods, as well as inverse dynamics model.
2.1 DEEP REINFORCEMENT LEARNING AND POLICY GRADIENT METHODS
DRL trains an agent to interact with an environment E . At each timestep t, the agent receives an observation xt ∈ X , where X is the observation space of E . It then takes an action at from the action space A based on its current policy π, receives a reward r, and transitions to the next observation x′. The policy π is represented by a deep neural network with parameters θ, and is expressed as π(a|x, θ). The goal of the agent is to learn a policy to maximize the discounted sum of rewards Gt:
Gt = ∑_{τ=t}^{T} γ^{τ−t} r(xτ, aτ), (1)
where t is the current timestep, γ ∈ (0, 1] the discount factor, and T the horizon. Policy gradient methods (Mnih et al., 2016; Sutton et al., 2000; Williams, 1992) are a class of RL techniques that directly optimize the parameters of a stochastic policy approximator using policy gradients. Although these methods have achieved remarkable success in a variety of domains, the high variance of gradient estimates has been a major challenge. Trust region policy optimization (TRPO) (Schulman et al., 2015) circumvented this problem by applying a trust-region constraint to the scale of policy updates. However, TRPO is a second-order algorithm, which is relatively complicated and not compatible with architectures that embrace noise or parameter sharing (Schulman et al., 2017). In this paper, we employ a more recent family of policy gradient methods, called proximal policy optimization (PPO) (Schulman et al., 2017). PPO is an approximation to TRPO, which similarly prevents large changes to the policy between updates, but requires only first-order optimization. PPO is superior in its generalizability and sample complexity while retaining the stability and reliability of TRPO 1.
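For illustration, Eq. (1) can be computed recursively from the end of an episode; the reward sequence and discount factor below are arbitrary example values.

```python
def discounted_return(rewards, gamma=0.99):
    g = 0.0
    for r in reversed(rewards):      # G_t = r_t + gamma * G_{t+1}
        g = r + gamma * g
    return g

print(discounted_return([0.0, 0.5, 1.0]))   # 0.0 + 0.99*0.5 + 0.99**2*1.0 ≈ 1.475
```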
2.2 INVERSE DYNAMICS MODEL
An inverse dynamics model I takes as input a pair of observations (x, x′), and predicts the action â required to reach the next observation x′ from the current observation x. It is formally expressed as:
â = I(x, x′|θI), (2)
where (x, x′) are sampled from the collected data, and θI represents the trainable parameters of I . During the training phase, θI is iteratively updated to minimize the loss function LI , expressed as:
LI(a, â|θI) = d(a, â), (3)
where d is a distance metric, and a the ground truth action. During the testing phase, a sequence of observations {x̂0, x̂1, · · · , x̂T } is first captured from an expert demonstration. A pair of observations (x̂t, x̂t+1) is then fed into I at each timestep t. Starting from x̂0, the objective of I is to predict a sequence of actions {â0, â1, · · · , âT−1} that brings the imitator as close as possible to the final observation x̂T.
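The following is a sketch of such an inverse dynamics model written in PyTorch; the layer sizes and input dimensions are arbitrary choices, and the recurrent hidden state introduced later in Eq. (4) is omitted here for brevity.

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, x, x_next):
        return self.net(torch.cat([x, x_next], dim=-1))    # a_hat = I(x, x'), Eq. (2)

model = InverseDynamicsModel(obs_dim=10, act_dim=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, x_next = torch.randn(32, 10), torch.randn(32, 10)       # dummy transition batch
a = torch.randn(32, 4)                                      # ground-truth actions
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x, x_next), a)          # d(a, a_hat) in Eq. (3)
loss.backward()
optimizer.step()
```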
3 METHODOLOGY
In this section, we first describe the proposed adversarial exploration strategy. We then explain the training methodology in detail. Finally, we discuss a technique for stabilizing the training process.
3.1 ADVERSARIAL EXPLORATION STRATEGY
Fig. 1 shows a framework that illustrates the proposed adversarial exploration strategy, which includes a DRL agent P and an inverse dynamics model I . Assume that Φπ : {x0, a0, x1, a1 · · · , xT } is the
1 For more details on PPO, please refer to supplementary material S.2.
sequence of observations and actions generated by P as it explores E using a policy π. At each timestep t, P collects a 3-tuple training sample (xt, at, xt+1) for I , while I predicts an action ât and generates a reward rt for P . In this work, I is modified from Eq. (2) to include an additional hidden vector ht, which recurrently encodes the information of the past observations. I is thus expressed as:
ât = I(xt, xt+1|ht, θI), ht = f(ht−1, xt), (4)
where f(·) denotes the recurrent function. θI is iteratively updated to minimize LI , formulated as:
min_{θI} LI(at, ât|θI) = min_{θI} β||at − ât||², (5)
where β is a scaling constant. We employ mean squared error β||at − ât||² as the distance metric d(at, ât), since we only consider continuous control domains in this paper. It can be replaced with a cross-entropy loss for discrete control tasks. We directly use LI as the reward rt for P, expressed as:
rt(xt, at, xt+1) = LI(at, ât|θI) = β||at − I(xt, xt+1|ht, θI)||². (6)
Our method aims at improving both the quality and efficiency of the data collection process performed by P, as well as the performance of I. Therefore, the goal of the proposed framework is twofold. First, P has to learn an adversarial policy πadv(at|xt) such that its cumulative discounted reward Gt|πadv = ∑_{τ=t}^{T} γ^{τ−t} rt(xτ, aτ, xτ+1) is maximized. Second, I needs to learn an optimal θI such that Eq. (6) is minimized. Minimizing LI (i.e., rt) leads to a decreased Gt|πadv, forcing P to enhance πadv and explore more difficult samples to increase Gt|πadv. This implies that P is motivated to focus on I’s weak points, instead of randomly collecting ineffective training samples. Training I with hard samples not only accelerates its learning progress, but also helps to boost its performance.
3.2 TRAINING METHODOLOGY
We describe the training methodology of our adversarial exploration strategy by a pseudocode presented in Algorithm 1. Assume that P ’s policy πadv is parameterized by a set of trainable parameters θP , and is represented as πadv(at|xt, θP ). We create two buffers ZP and ZI for storing the training samples of P and I , respectively. In the beginning, ZP , ZI , E , θP , θI , πadv , as well as a timestep cumulative counter c are initialized. A number of hyperparameters are set to appropriate values, including the number of iterations Niter, the number of episodes Nepisode, the horizon T , as well as the update period TP of θP . At each timestep t, P perceives the current observation xt from E , takes an action at according to πadv(at|xt, θP ), and receives the next observation xt+1 and a termination indicator ξ (lines 9-11). ξ is set to 1 only when t equals T , otherwise it is set to 0. We then store (xt, at, xt+1, ξ) and (xt, at, xt+1) in ZP and ZI , respectively. We update θP every TP timesteps using the samples stored in ZP , as shown in (lines 13-21). At the end of each episode, we update θI with samples drawn from ZI according to the loss function LI defined in Eq. (5) (line 23).
3.3 STABILIZATION TECHNIQUE
Although the adversarial exploration strategy is effective in collecting hard samples, it requires additional adjustments if P becomes so strong that the collected samples are too difficult for I to learn. Overly difficult samples lead to a large variance in the gradients derived from LI, which in turn causes a performance drop in I and instability in its learning process. We analyze this phenomenon in greater detail in Section 4.5. To tackle this issue, we propose a training technique that reshapes rt as follows:
rt := −|rt − δ|, (7)
Algorithm 1 Adversarial exploration strategy
 1: Initialize ZP, ZI, E, and model parameters θP & θI
 2: Initialize πadv(at|xt, θP)
 3: Initialize the timestep cumulative counter c = 0
 4: Set Niter, Nepisode, T, and TP
 5: for iteration i = 1 to Niter do
 6:   for episode e = 1 to Nepisode do
 7:     for timestep t = 0 to T do
 8:       P perceives xt from E, and predicts an action at according to πadv(at|xt, θP)
 9:       xt+1 = E(xt, at)
10:       ξ = 1[t == T]
11:       Store (xt, at, xt+1, ξ) in ZP
12:       Store (xt, at, xt+1) in ZI
13:       if (c % TP) == 0 then
14:         Initialize an empty batch B
15:         Initialize a recurrent state ht
16:         for (xt, at, xt+1, ξ) in ZP do
17:           Evaluate ât = I(xt, xt+1|ht, θI) (calculated from Eq. (4))
18:           Evaluate rt(xt, at, xt+1) = LI(at, ât|θI) (calculated from Eq. (6))
19:           Store (xt, at, xt+1, rt) in B
20:         Update θP with the gradient calculated from the samples of B
21:         Reset ZP
22:       c = c + 1
23:     Update θI with the gradient calculated from the samples of ZI (according to Eq. (5))
24: end
where δ is a pre-defined threshold value. This technique poses a restriction on the range of rt, driving P to gather moderate samples instead of overly hard ones. Note that the value of δ affects the learning speed and the final performance. We plot the impact of δ on the learning curve of I in Section 4.5. We further provide an example in our supplementary material to visualize the effect of this technique.
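As a quick numeric illustration of Eq. (7) with δ = 1.5 (the default reported in Section 4.5), losses close to δ earn the highest reward, while both trivial and overly hard samples are penalized:

```python
delta = 1.5
for loss in (0.1, 1.5, 3.0, 10.0):
    print(loss, -abs(loss - delta))    # rewards: -1.4, 0.0, -1.5, -8.5
```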
4 EXPERIMENTAL RESULTS
In this section, we present experimental results for a series of robotic tasks, and validate that (i) our method is effective in both low- and high-dimensional observation spaces; (ii) our method is effective in environments with high-dimensional action spaces; (iii) our method is more data efficient than the baseline methods; and (iv) our method is robust against action space perturbations. We first introduce our experimental setup. Then, we report experimental results of robotic arm and hand manipulation tasks. Finally, we present a comprehensive set of ablative analysis to validate our design decisions.
4.1 EXPERIMENTAL SETUP
We first describe the environments and tasks. Next, we explain the evaluation procedure and the method for collecting expert demonstrations. We then walk through the baselines used for comparison.
4.1.1 ENVIRONMENTS AND TASKS
We evaluate our method on a number of robotic arm and hand manipulation tasks via OpenAI gym (Brockman et al., 2016) environments simulated by the MuJoCo (Todorov et al., 2012) physics engine. We use the Fetch and Shadow Dexterous Hand (Plappert et al., 2018b) for the arm and hand manipulation tasks, respectively. For the arm manipulation tasks, which include FetchReach, FetchPush, FetchPickAndPlace, and FetchSlide, the imitator (i.e., the inverse dynamic model I) takes as inputs the positions and velocities of a gripper and a target object. It then infers the gripper’s action in 3-dimensional space to manipulate it. For the hand manipulation task HandReach, the imitator takes as inputs the positions and velocities of the fingers of a robotic hand, and determines the velocities of the joints to achieve the goal. In addition to low-dimensional observations (i.e., position, velocity, and gripper state), we further perform experiments for the above tasks using visual observations (i.e., high-dimensional observations) in the form of camera images taken from a third-person perspective. The detailed description of the above tasks is specified in Plappert et al. (2018b). For the detailed configurations of these tasks, please refer to our supplementary material.
4.1.2 EVALUATION PROCEDURE
The primary objective of our experiments is to demonstrate the efficiency of the proposed adversarial exploration strategy in collecting training data (in a self-supervised manner) for the imitator. We compare our strategy against a number of self-supervised data collection methods (referred to as ”baselines” or ”baseline methods”) described in Section 4.1.4. As different baseline methods employ different data collection strategies, the learning curve of the imitator also varies for different cases. For a fair comparison, the model architecture of the imitator and the amount of training data are fixed
for all cases. All of the experimental results are evaluated and averaged over 20 trials, corresponding to 20 different random initial seeds. In each trial, we train an imitator by the training data collected by a single self-supervised data collection method. At the beginning of each episode, the imitator receives a sequence of observations {x̂0, x̂1, · · · , x̂T } from a successful expert demonstration. At each timestep t, the imitator infers an action ât from an expert observation x̂t+1 and its current observation xt by Eq. (4). We periodically evaluate the imitator every 10K timesteps. The evaluation is performed by averaging the success rates of reaching x̂T over 500 episodes. The configuration of the imitator and the hyperparameters of the baselines are summarized in the supplementary material.
4.1.3 COLLECTION OF EXPERT DEMONSTRATIONS
For each task mentioned in Section 4.1.1, we first randomly configure task-relevant settings (e.g., goal position, initial state, etc.). We then collect demonstrations from non-trivial and successful episodes performed by a pre-trained expert agent (Andrychowicz et al., 2017). Please note that the collected demonstrations only contain sequences of observations. The implementation details of the expert agent and the method for filtering out trivial episodes are presented in our supplementary material.
4.1.4 BASELINE METHODS
We compare our proposed methodology with the following four baseline methods in our experiments.
• Random: This method collects training samples by random exploration. We consider it to be an important baseline because of its simplicity and prevalence in a number of research works on self-supervised IL (Agrawal et al., 2016; Nair et al., 2017; Pathak et al., 2018).
• Demo: This method trains the imitator directly with expert demonstrations. It serves as the performance upper bound, as the training data is the same as the testing data for this method.
• Curiosity: This method trains a DRL agent via curiosity (Pathak et al., 2017; 2018) to collect training samples. Unlike the original implementation, we replace its DRL algorithm with PPO, as training should be done on a single thread for a fair comparison with the other baselines. This is also an important baseline due to its effectiveness in Pathak et al. (2018).
• Noise (Plappert et al., 2018a): In this method, noise is injected into the parameter space of a DRL agent to encourage exploration (Plappert et al., 2018a). Please note that its exploratory behavior relies entirely on parameter space noise, instead of using any extrinsic reward. We include this method due to its superior performance and data efficiency in many DRL tasks.
4.2 PERFORMANCE COMPARISON IN ROBOTIC ARM MANIPULATION TASKS
We compare the performance of the proposed method and the baselines on the robotic arm manipulation tasks described in Section 4.1.1. As opposed to discrete control domains, these tasks are especially challenging, as the sample complexity grows in continuous control domains. Furthermore, the imitator may not have the complete picture of the environment dynamics, increasing its difficulty to learn an inverse dynamics model. In FetchSlide, for instance, the movement of the object on the slippery surface is affected by both friction and the force exerted by the gripper. It thus motivates us to investigate whether the proposed method can help overcome the challenge. In the subsequent paragraphs, we discuss the experimental results in both low- and high-dimensional observation spaces, and plot them in Figs. 2 and 3, respectively. All of the results are obtained by following the procedure described in Section 4.1.2. The shaded regions in Figs. 2 and 3 represent the confidence intervals.
Low-dimensional observation spaces. Fig. 2 plots the learning curves for all of the methods in low-dimensional observation spaces. In all of the tasks, our method yields superior or comparable performance to the baselines except for Demo, which is trained directly with expert demonstrations. In FetchReach, it can be seen that every method achieves a success rate of 1.0. This implies that it does not require a sophisticated exploration strategy to learn an inverse dynamics model in an environment where the dynamics is relatively simple. It should be noted that although all methods reach the same final success rate, ours learns significantly faster than Demo. In contrast, in FetchPush, our method is comparable to Demo, and demonstrates superior performance to the other baselines. Our method also learns drastically faster than all the other baselines, which confirms that the proposed strategy does improve the performance and efficiency of self-supervised IL. Our method is particularly effective in tasks that require an accurate inverse dynamics model. In FetchPickAndPlace, for example, our method surpasses all the other baselines. However, all methods including Demo fail to learn a successful inverse dynamics model in FetchSlide, which suggests that it is difficult to train an imitator when the outcome of an action is not completely dependent on the action itself. It is worth noting that Curiosity loses to Random in FetchPush and FetchSlide, and Noise performs even worse than these
two methods in all of the tasks. We therefore conclude that Curiosity is not suitable for continuous control tasks, and the parameter space noise strategy cannot be directly applied to self-supervised IL. In addition to the quantitative results presented above, we further discuss the empirical results qualitatively. Please refer to our supplementary material for a description of the qualitative results.
High-dimensional observation spaces. Fig. 3 plots the learning curves of all methods in high-dimensional observation spaces. It can be seen that our method performs significantly better than the other baseline methods in most of the tasks, and is comparable to Demo. In FetchPickAndPlace, our method is the only one that learns a successful inverse dynamics model. Similar to the results in Fig. 2, Curiosity is no better than Random in high-dimensional observation spaces. Please note that we do not include Noise in Fig. 3, as it already performs poorly in the low-dimensional settings.
4.3 PERFORMANCE COMPARISON IN ROBOTIC HAND MANIPULATION TASK
Fig. 2 plots the learning curves for each of the methods considered. Please note that Curiosity, Noise and our method are pre-trained with 30K samples collected by random exploration, as we observe that these methods on their own suffer from large errors in an early stage during training, which prevents them from learning at all. After the first 30K samples, they are trained with data collected by their exploration strategy instead. From the results in Fig. 2, it can be seen that Demo easily stands out from the other methods as the best-performing model, surpassing them all by a considerable extent. Although our method is not as impressive as Demo, it significantly outperforms all of the other baseline methods, achieving a success rate of 0.4 while the others are still stuck at around 0.2.
The reason that the inverse dynamics models trained by the self-supervised data-collection strategies discussed in this paper (including ours and the other baselines) are not comparable to the Demo baseline in the HandReach task is primarily due to the high-dimensional action space. It is observed that the data collected by the self-supervised data-collection strategies only cover a very limited range of the state space in the HandReach environment. Therefore, the inverse dynamics models trained with these data only learn to imitate trivial poses, leading to the poor success rates presented in Fig. 2.
4.4 ROBUSTNESS TO ACTION SPACE PERTURBATION
We evaluate the performance of the imitator trained in an environment with action space perturbations to validate the robustness of our adversarial exploration strategy. In such an environment, every action taken by the DRL agent is perturbed by Gaussian random noise, such that the training samples collected by the DRL agent are not in line with its actual intentions. Please note that we only inject noise during the training phase, as we aim to validate the robustness of the proposed data collection strategy. The scale of the injected noise is specified in the supplementary material. We report the performance change rates of various methods for different tasks in Table 1. The performance change rate is defined as (Prperturb − Prorig) / Prorig, where Prperturb and Prorig represent the highest success rates with and without action space perturbations, respectively. From Table 1, it can be seen that our method retains the performance for most of the tasks, indicating that our method is robust to action space perturbations during the training phase. Please note that although Curiosity and Noise also achieve a change rate of 0% in HandReach and FetchSlide, they are not considered robust due to their poor performance in the original environment (Fig. 2). Another interesting observation is that our
method even gains some performance from action space perturbations in FetchPush and HandReach, which we leave as one of our future directions. We thus conclude that our method is robust to action space perturbations during the training phase, making it a practical option in real-world settings.
4.5 ABLATIVE ANALYSIS
In this section, we provide a set of ablative analyses. We examine the effectiveness of our method by investigating the training loss distribution, the stabilization technique, and the influence of δ. Please note that the value of δ is set to 1.5 by default, as described in our supplementary material.
Training loss distribution. Fig. 4 plots the probability density function (PDF) of LI (derived from Eq. (5)) by kernel density estimation (KDE) for the first 2K training batches during the training phase. The vertical axis corresponds to the probability density, while the horizontal axis represents the scale of LI . The curves Ours (w stab) and Ours (w/o stab) represent the cases where the stabilization technique described in Section 3.3 is employed or not, respectively. We additionally plot the curve Random in Fig. 4 to highlight the effectiveness of our method. It can be observed that both Ours (w stab) and Ours (w/o stab) concentrate on notably higher loss values than Random. This observation implies that adversarial exploration strategy does explore hard samples for inverse dynamics model.
Validation of the stabilization technique. We validate the proposed stabilization technique in terms of the PDF of LI and the learning curve of the imitator, and plot the results in Figs. 4 and 5, respectively. From Fig. 4, it can be observed that the modes of Ours (w stab) are lower than those of Ours (w/o stab) in most cases, implying that the stabilization technique indeed motivates the DRL agents to favor those moderately hard samples. We also observe that for each of the five cases, the mode of Ours (w stab) is close to the value of δ (plotted in a dotted line), indicating that our reward structure presented in Eq. (7) does help to regulate LI (and thus rt) to be around δ. To further demonstrate the effectiveness of the stabilization technique, we compare the learning curves of Ours (w stab) and Ours (w/o stab) in Fig. 5. It is observed that for the initial 10K samples of the five cases, the success rates of Ours (w/o stab) are comparable to those of Ours (w stab). However, their performance degrade drastically during the rest of the training phase. This observation confirms that the stabilization technique does contribute significantly to our adversarial exploration strategy.
Although most of the DRL works suggest that the rewards should be re-scaled or clipped within a range (e.g., from -1 to 1), the unbounded rewards do not introduce any issues during the training process of our experiments. The empirical rationale is that the rewards received by the DRL agent are regulated by Eq. (7) to be around δ, as described in Section 4.5 and depicted in Fig. 4. Without the stabilization technique, however, the learning curves of the inverse dynamics model degrade drastically (as illustrated in Fig. 2), even if the reward clipping technique is applied.
Influence of δ. Fig. 6 compares the learning curves of the imitator for different values of δ. For instance, Ours(0.1) corresponds to δ = 0.1. It is observed that for most of the tasks, the success rates drop when δ is set to an overly high or low value (e.g., 100.0 or 0.0), suggesting that a moderate value of δ is necessary for the stabilization technique. The value of δ can be adjusted dynamically by the adaptive scaling technique presented in Plappert et al. (2018a), which is left as our future direction.
From the analysis presented above, we conclude that the proposed adversarial exploration strategy is effective in collecting difficult training data for the imitator. The analysis also validates that our
stabilization technique indeed leads to superior performance, and is capable of guiding the DRL agent to collect moderately hard samples. This enables the imitator to pursue a stable learning curve.
5 CONCLUSION
In this paper, we presented an adversarial exploration strategy, which consists of a DRL agent and an inverse dynamics model competing with each other for self-supervised IL. The former is encouraged to adversarially collect difficult training data for the latter, such that the training efficiency of the latter is significantly enhanced. Experimental results demonstrated that our method substantially improved the data collection efficiency in multiple robotic arm and hand manipulation tasks, and boosted the performance of the inverse dynamics model in both low- and high-dimensional observation spaces. In addition, we validated that our method is generalizable to environments with high-dimensional action spaces. Moreover, we showed that our method is robust to action space perturbations. Finally, we provided a set of ablative analyses to validate the effectiveness of each of our design decisions.
1. What is the main contribution of the paper in deep reinforcement learning?
2. What are the strengths and weaknesses of the proposed exploration strategy?
3. How does the reviewer assess the scalability of the method in large state spaces?
4. Does the method account for the understanding of the environment beyond the controllable aspects?
5. Are there any concerns regarding the boundedness of the exploration bonus? | Review | Review
The paper proposes an exploration strategy for a deep reinforcement learning agent in continuous action spaces. The core of the method is to train an inverse local model (a model that predicts the action that was taken from a pair of states) and use its errors as an exploration bonus for a policy gradient agent. The intuition is that it's a good self-regulating strategy, similar to curiosity, that leads the agent towards states that are less known by the inverse model. Seeing these states improves the inverse model. There are experiments run on the OpenAI gym comparing to other models of curiosity. The paper is well written and clear for the most part.
pros:
- the paper seems novel and results are promising
- easy to implement
cons:
- seems unstable and not clear how it would scale in a large state space where most states are going to be very difficult to learn about in the beginning like a humanoid body.
- only accounts for the immediately controllable aspects of the environment which doesn't seem to be the hard part. Understanding the rest of the environment and its relationship to the controllable part of the state seems beyond the scope of this model. Nonetheless I can imagine it helping with initial random motions.
- from (6) the bonus seems to be unbounded and (7) doesn't seem to fix that. Is that not an issue in general? Any intuition about that?
ICLR | Title
Adversarial Exploration Strategy for Self-Supervised Imitation Learning
Abstract
We present an adversarial exploration strategy, a simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration. Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model contesting with each other. The former collects training samples for the latter, and its objective is to maximize the error of the latter. The latter is trained with samples collected by the former, and generates rewards for the former when it fails to predict the actual action taken by the former. In such a competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, and the inverse dynamics model learns to adapt to the challenging samples. We further propose a reward structure that ensures the DRL agent collects only moderately hard samples and not overly hard ones that prevent the inverse model from imitating effectively. We evaluate the effectiveness of our method on several OpenAI gym robotic arm and hand manipulation tasks against a number of baseline models. Experimental results show that our method is comparable to that directly trained with expert demonstrations, and superior to the other baselines even without any human priors.
1 INTRODUCTION
Over the past decade, imitation learning (IL) has been successfully applied to a wide range of domains, including robot learning (Englert et al., 2013; Schulman et al., 2013), autonomous navigation (Choudhury et al., 2017; Ross et al., 2013), manipulation tasks (Nair et al., 2017; Prieur et al., 2012), and self-driving cars (Codevilla et al., 2018). Traditionally, IL aims to train an imitator to learn a control policy π only from expert demonstrations. The imitator is typically presented with multiple demonstrations during the training phase, with an aim to distill them into π. To learn π effectively and efficiently, a large set of high-quality demonstrations are necessary. This is especially prevalent in current state-of-the-art IL algorithms, such as dataset aggregation (DAgger) (Ross et al., 2011) and generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016). Although these approaches have been the dominant algorithms in IL, a major bottleneck for them is their reliance on high-quality demonstrations, which often require extensive supervision from human experts. In addition, a serious flaw in the learned policy π is its tendency to overfit to demonstration data, preventing it from generalizing to new ones. To overcome the aforementioned challenges in IL, a number of methods have been investigated to enhance the generalizability and data efficiency, or reduce the degree of human supervision. Initial efforts in this direction were based on the idea of meta learning (Duan et al., 2017; Finn et al., 2017; Yu et al., 2018), in which the imitator is trained from a meta learner that is able to quickly learn a new task with only a few set of demonstrations. However, such schemes still require training the meta-learner with tremendous amount of time and demonstration data, leaving much room for improvement. Thus, a rapidly-growing body of literature based on the concept of using forward/inverse dynamics models to learn π within an environment in a self-supervised fashion (Agrawal et al., 2016; Nair et al., 2017; Pathak et al., 2018) has emerged in the past few years. One key advantage of the concept is that it provides an autonomous way for preparing training data, removing the need of human intervention. In this paper, we call it self-supervised IL.
Self-supervised IL allows an imitator to collect training data by itself instead of using predefined extrinsic reward functions or expert supervision during training. It only needs demonstrations during inference, drastically decreasing the time and effort required from human experts. Although the core principles of self-supervised IL are straightforward and have been exploited in many fields (Agrawal et al., 2016; Nair et al., 2017; Pathak et al., 2017; 2018), recent research efforts have been dedicated
to addressing the challenges of multi-modality and multi-step planning. For example, the use of a forward consistency loss and a forward regularizer has been extensively investigated to enhance the task performance of the imitator (Agrawal et al., 2016; Pathak et al., 2018). This becomes especially essential when the lengths of trajectories grow and demonstration samples are sparse, as multiple paths may co-exist to lead the imitator from its initial observation to the goal observation. The issue of multi-step planning has also drawn a lot of attention from researchers, and is usually tackled by recurrent neural networks (RNNs) and step-by-step demonstrations (Nair et al., 2017; Pathak et al., 2018). The above self-supervised IL approaches report promising results; however, most of them are limited in applicability due to several drawbacks. First, traditional methods of data collection are usually inefficient and time-consuming. Inefficient data collection results in poor exploration, giving rise to a degradation in robustness to varying environmental conditions (e.g., noise in motor control) and generalizability to difficult tasks. Second, a human-biased data sampling range tailored to specific configurations of interest is often employed (Agrawal et al., 2016; Nair et al., 2017). Although a more general exploration strategy called curiosity-driven exploration was later proposed in Pathak et al. (2017), it focuses only on exploration in states novel to the forward dynamics model, rather than those directly influential to the inverse dynamics model. Furthermore, it does not discuss the applicability to continuous control domains, and fails in high-dimensional action spaces according to our experiments in Section 4. Unlike the approaches discussed above, we do not propose to deal with multi-modality or multi-step planning. Instead, we focus our attention on improving the overall quality of the collected samples in the context of self-supervised IL. This motivates us to equip the model with the necessary knowledge to explore the environment in an efficient and effective fashion.
In this paper, we propose a straightforward and efficient self-supervised IL scheme, called adversarial exploration strategy, which motivates exploration of an environment in a self-supervised manner (i.e., without any extrinsic reward or human demonstration). Inspired by Pinto et al. (2017); Shioya et al. (2018); Sukhbaatar et al. (2018), we implement the proposed strategy by jointly training a deep reinforcement learning (DRL) agent and an inverse dynamics model competing with each other. The former explores the environment to collect training data for the latter, and receives rewards from the latter if the data samples are considered difficult. The latter is trained with the training data collected by the former, and only generates rewards when it fails to predict the true actions performed by the former. In such an adversarial setting, the DRL agent is rewarded only for the failure of the inverse dynamics model. Therefore, the DRL agent learns to sample hard examples to maximize the chances to fail the inverse dynamics model. On the other hand, the inverse dynamics model learns to be robust to the hard examples collected by the DRL agent by minimizing the probability of failures. As a result, as the inverse dynamics model becomes stronger, the DRL agent is also incentivized to search for harder examples to obtain rewards. Overly hard examples, however, may lead to biased exploration and cause instability of the learning process. In order to stabilize the learning curve of the inverse dynamics model, we further propose a reward structure such that the DRL agent is encouraged to explore moderately hard examples for the inverse dynamics model, but refraining from too difficult ones for the latter to learn. The self-regulating feedback structure between the DRL agent and the inverse dynamics model enables them to automatically construct a curriculum for exploration.
We perform extensive experiments to validate adversarial exploration strategy on multiple OpenAI gym (Brockman et al., 2016) robotic arm and hand manipulation task environments simulated by the MuJoCo physics engine (Todorov et al., 2012), including FetchReach, FetchPush, FetchPickAndPlace, FetchSlide, and HandReach. These environments are intentionally selected by us for evaluating the performance of inverse dynamics model, as each of them allows only a very limited set of chained actions to transition the robotic arms and hands to target observations. We examine the effectiveness of our method by comparing it against a number of self-supervised IL schemes. The experimental results show that our method is more effective and data-efficient than the other self-supervised IL schemes for both low- and high-dimensional observation spaces, as well as in environments with high-dimensional action spaces. We also demonstrate that in most of the cases the performance of the inverse dynamics model trained by our method is comparable to that directly trained with expert demonstrations. The above observations suggest that our method is superior to the other self-supervised IL schemes even in the absence of human priors. We further evaluate our method on environments with action space perturbations, and show that our method is able to achieve satisfactory success rates. To justify each of our design decisions, we provide a comprehensive set of ablative analysis and discuss their implications. The contributions of this work are summarized as follows:
• We introduce an adversarial exploration strategy for self-supervised IL. It consists of a DRL agent and an inverse dynamics model developed for efficient exploration and data collection.
• We employ a competitive scheme for the DRL agent and the inverse dynamics model, enabling them to automatically construct a curriculum for exploration of observation space.
• We introduce a reward structure for the proposed scheme to stabilize the training process.
• We demonstrate the proposed method and compare it with a number of baselines for multiple robotic arm and hand manipulation tasks in both low- and high-dimensional state spaces.
• We validate that our method is generalizable to tasks with high-dimensional action spaces.
The remainder of this paper is organized as follows. Section 2 introduces background material. Section 3 describes the proposed adversarial exploration strategy in detail. Section 4 reports the experimental results, and provides an in-depth ablative analysis of our method. Section 5 concludes.
2 BACKGROUND
In this section, we briefly review DRL, policy gradient methods, as well as inverse dynamics model.
2.1 DEEP REINFORCEMENT LEARNING AND POLICY GRADIENT METHODS
DRL trains an agent to interact with an environment E . At each timestep t, the agent receives an observation xt ∈ X , where X is the observation space of E . It then takes an action at from the action space A based on its current policy π, receives a reward r, and transitions to the next observation x′. The policy π is represented by a deep neural network with parameters θ, and is expressed as π(a|x, θ). The goal of the agent is to learn a policy to maximize the discounted sum of rewards Gt:
G_t = \sum_{\tau=t}^{T} \gamma^{\tau - t}\, r(x_\tau, a_\tau),   (1)
where t is the current timestep, γ ∈ (0, 1] the discount factor, and T the horizon. Policy gradient methods (Mnih et al., 2016; Sutton et al., 2000; Williams, 1992) are a class of RL techniques that directly optimize the parameters of a stochastic policy approximator using policy gradients. Although these methods have achieved remarkable success in a variety of domains, the high variance of gradient estimates has been a major challenge. Trust region policy optimization (TRPO) (Schulman et al., 2015) circumvented this problem by applying a trust-region constraint to the scale of policy updates. However, TRPO is a second-order algorithm, which is relatively complicated and not compatible with architectures that embrace noise or parameter sharing (Schulman et al., 2017). In this paper, we employ a more recent family of policy gradient methods, called proximal policy optimization (PPO) (Schulman et al., 2017). PPO is an approximation to TRPO, which similarly prevents large changes to the policy between updates, but requires only first-order optimization. PPO is superior in its generalizability and sample complexity while retaining the stability and reliability of TRPO 1.
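To make Eq. (1) concrete, the following minimal NumPy sketch (ours, not part of the original implementation) computes the discounted return for every timestep of a finite-horizon reward sequence:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = sum_{tau=t}^{T} gamma^(tau-t) * r_tau for every timestep t."""
    returns = np.zeros_like(rewards, dtype=np.float64)
    running = 0.0
    # Iterate backwards so each G_t reuses the already-computed G_{t+1}.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: rewards [1, 0, 2] with gamma = 0.9 yield G_0 = 1 + 0.9*0 + 0.81*2 = 2.62
print(discounted_returns(np.array([1.0, 0.0, 2.0]), gamma=0.9))
```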
2.2 INVERSE DYNAMICS MODEL
An inverse dynamics model I takes as input a pair of observations (x, x′), and predicts the action â required to reach the next observation x′ from the current observation x. It is formally expressed as:
â = I(x, x′|θI), (2)
where (x, x′) are sampled from the collected data, and θI represents the trainable parameters of I . During the training phase, θI is iteratively updated to minimize the loss function LI , expressed as:
LI(a, â|θI) = d(a, â), (3)
where d is a distance metric, and a the ground truth action. During the testing phase, a sequence of observations {x̂0, x̂1, · · · , x̂T } is first captured from an expert demonstration. A pair of observations (x̂t, x̂t+1) is then fed into I at each timestep t. Starting from x̂0, the objective of I is to predict a sequence of actions {â0, â1, · · · , âT−1} and transition to the final observation x̂T as closely as possible.
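As an illustration of Eqs. (2) and (3), a minimal sketch of an inverse dynamics model with a mean-squared-error distance metric is given below. The framework (PyTorch) and the layer sizes are our own assumptions; the paper does not prescribe a specific architecture at this point:

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Predicts the action a that transitions observation x to x' (Eq. (2))."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, x, x_next):
        return self.net(torch.cat([x, x_next], dim=-1))

# One training step (Eq. (3)) with d chosen as the mean squared error.
model = InverseDynamicsModel(obs_dim=10, act_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, x_next, a = torch.randn(32, 10), torch.randn(32, 10), torch.randn(32, 4)
loss = ((model(x, x_next) - a) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```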
3 METHODOLOGY
In this section, we first describe the proposed adversarial exploration strategy. We then explain the training methodology in detail. Finally, we discuss a technique for stabilizing the training process.
3.1 ADVERSARIAL EXPLORATION STRATEGY
1 For more details on PPO, please refer to supplementary material S.2.
Fig. 1 shows a framework that illustrates the proposed adversarial exploration strategy, which includes a DRL agent P and an inverse dynamics model I. Assume that Φπ : {x0, a0, x1, a1, · · · , xT } is the sequence of observations and actions generated by P as it explores E using a policy π. At each timestep t, P collects a 3-tuple training sample (xt, at, xt+1) for I, while I predicts an action ât and generates a reward rt for P. In this work, I is modified from Eq. (2) to include an additional hidden vector ht, which recurrently encodes the information of the past observations. I is thus expressed as:
\hat{a}_t = I(x_t, x_{t+1} \mid h_t, \theta_I), \quad h_t = f(h_{t-1}, x_t),   (4)
where f(·) denotes the recurrent function. θI is iteratively updated to minimize LI , formulated as:
\min_{\theta_I} L_I(a_t, \hat{a}_t \mid \theta_I) = \min_{\theta_I} \beta \lVert a_t - \hat{a}_t \rVert^2,   (5)
where β is a scaling constant. We employ mean squared error β||at − ât||2 as the distance metric d(at, ât), since we only consider continuous control domains in this paper. It can be replaced with a cross-entropy loss for discrete control tasks. We directly use LI as the reward rt for P , expressed as:
r_t(x_t, a_t, x_{t+1}) = L_I(a_t, \hat{a}_t \mid \theta_I) = \beta \lVert a_t - I(x_t, x_{t+1} \mid h_t, \theta_I) \rVert^2.   (6)
Our method targets improving both the quality and efficiency of the data collection process performed by P, as well as the performance of I. Therefore, the goal of the proposed framework is twofold. First, P has to learn an adversarial policy πadv(at|xt) such that its cumulative discounted reward G_t|_{\pi_{adv}} = \sum_{\tau=t}^{T} \gamma^{\tau - t}\, r_t(x_\tau, a_\tau, x_{\tau+1}) is maximized. Second, I is required to learn an optimal θI such that Eq. (6) is minimized. Minimizing LI (i.e., rt) leads to a decreased Gt|πadv, forcing P to enhance πadv and explore more difficult samples to increase Gt|πadv. This implies that P is motivated to focus on I’s weak points, instead of randomly collecting ineffective training samples. Training I with hard samples not only accelerates its learning progress, but also helps to boost its performance.
3.2 TRAINING METHODOLOGY
We describe the training methodology of our adversarial exploration strategy by a pseudocode presented in Algorithm 1. Assume that P ’s policy πadv is parameterized by a set of trainable parameters θP , and is represented as πadv(at|xt, θP ). We create two buffers ZP and ZI for storing the training samples of P and I , respectively. In the beginning, ZP , ZI , E , θP , θI , πadv , as well as a timestep cumulative counter c are initialized. A number of hyperparameters are set to appropriate values, including the number of iterations Niter, the number of episodes Nepisode, the horizon T , as well as the update period TP of θP . At each timestep t, P perceives the current observation xt from E , takes an action at according to πadv(at|xt, θP ), and receives the next observation xt+1 and a termination indicator ξ (lines 9-11). ξ is set to 1 only when t equals T , otherwise it is set to 0. We then store (xt, at, xt+1, ξ) and (xt, at, xt+1) in ZP and ZI , respectively. We update θP every TP timesteps using the samples stored in ZP , as shown in (lines 13-21). At the end of each episode, we update θI with samples drawn from ZI according to the loss function LI defined in Eq. (5) (line 23).
3.3 STABILIZATION TECHNIQUE
Although the adversarial exploration strategy is effective in collecting hard samples, it requires additional adjustments if P becomes too strong such that the collected samples are too difficult for I to learn. Overly difficult samples lead to a large variance in the gradients derived from LI, which in turn causes a performance drop in I and instability in its learning process. We analyze this phenomenon in greater detail in Section 4.5. To tackle the issue, we propose a training technique that reshapes rt as follows:
rt := −|rt − δ|, (7)
Algorithm 1 Adversarial exploration strategy
1: Initialize ZP, ZI, E, and model parameters θP & θI
2: Initialize πadv(at|xt, θP)
3: Initialize the timestep cumulative counter c = 0
4: Set Niter, Nepisode, T, and TP
5: for iteration i = 1 to Niter do
6:   for episode e = 1 to Nepisode do
7:     for timestep t = 0 to T do
8:       P perceives xt from E, and predicts an action at according to πadv(at|xt, θP)
9:       xt+1 = E(xt, at)
10:      ξ = 1[t == T]
11:      Store (xt, at, xt+1, ξ) in ZP
12:      Store (xt, at, xt+1) in ZI
13:      if (c mod TP) == 0 then
14:        Initialize an empty batch B
15:        Initialize a recurrent state ht
16:        for (xt, at, xt+1, ξ) in ZP do
17:          Evaluate ât = I(xt, xt+1|ht, θI) (calculated from Eq. (4))
18:          Evaluate rt(xt, at, xt+1) = LI(at, ât|θI) (calculated from Eq. (6))
19:          Store (xt, at, xt+1, rt) in B
20:        Update θP with the gradient calculated from the samples of B
21:        Reset ZP
22:      c = c + 1
23:    Update θI with the gradient calculated from the samples of ZI (according to Eq. (5))
24: end
where δ is a pre-defined threshold value. This technique poses a restriction on the range of rt, driving P to gather moderate samples instead of overly hard ones. Note that the value of δ affects the learning speed and the final performance. We plot the impact of δ on the learning curve of I in Section 4.5. We further provide an example in our supplementary material to visualize the effect of this technique.
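A minimal sketch of the reward computation of Eq. (6) and the reshaping of Eq. (7) is shown below; β = 1.0 is an illustrative choice, while δ = 1.5 follows the default value reported in Section 4.5:

```python
import numpy as np

def imitation_reward(a_true, a_pred, beta=1.0):
    """Eq. (6): the DRL agent's reward is the inverse dynamics model's prediction error."""
    return beta * np.sum((a_true - a_pred) ** 2)

def stabilized_reward(r, delta=1.5):
    """Eq. (7): penalize deviation from delta so the agent favors moderately hard samples."""
    return -abs(r - delta)

# A sample whose error is close to delta yields the highest (least negative) reward.
for r in (0.1, 1.5, 20.0):
    print(r, stabilized_reward(r))
```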
4 EXPERIMENTAL RESULTS
In this section, we present experimental results for a series of robotic tasks, and validate that (i) our method is effective in both low- and high-dimensional observation spaces; (ii) our method is effective in environments with high-dimensional action spaces; (iii) our method is more data efficient than the baseline methods; and (iv) our method is robust against action space perturbations. We first introduce our experimental setup. Then, we report experimental results of robotic arm and hand manipulation tasks. Finally, we present a comprehensive set of ablative analysis to validate our design decisions.
4.1 EXPERIMENTAL SETUP
We first describe the environments and tasks. Next, we explain the evaluation procedure and the method for collecting expert demonstrations. We then walk through the baselines used for comparison.
4.1.1 ENVIRONMENTS AND TASKS
We evaluate our method on a number of robotic arm and hand manipulation tasks via OpenAI gym (Brockman et al., 2016) environments simulated by the MuJoCo (Todorov et al., 2012) physics engine. We use the Fetch and Shadow Dexterous Hand (Plappert et al., 2018b) for the arm and hand manipulation tasks, respectively. For the arm manipulation tasks, which include FetchReach, FetchPush, FetchPickAndPlace, and FetchSlide, the imitator (i.e., the inverse dynamic model I) takes as inputs the positions and velocities of a gripper and a target object. It then infers the gripper’s action in 3-dimensional space to manipulate it. For the hand manipulation task HandReach, the imitator takes as inputs the positions and velocities of the fingers of a robotic hand, and determines the velocities of the joints to achieve the goal. In addition to low-dimensional observations (i.e., position, velocity, and gripper state), we further perform experiments for the above tasks using visual observations (i.e., high-dimensional observations) in the form of camera images taken from a third-person perspective. The detailed description of the above tasks is specified in Plappert et al. (2018b). For the detailed configurations of these tasks, please refer to our supplementary material.
4.1.2 EVALUATION PROCEDURE
The primary objective of our experiments is to demonstrate the efficiency of the proposed adversarial exploration strategy in collecting training data (in a self-supervised manner) for the imitator. We compare our strategy against a number of self-supervised data collection methods (referred to as ”baselines” or ”baseline methods”) described in Section 4.1.4. As different baseline methods employ different data collection strategies, the learning curve of the imitator also varies for different cases. For a fair comparison, the model architecture of the imitator and the amount of training data are fixed
for all cases. All of the experimental results are evaluated and averaged over 20 trials, corresponding to 20 different random initial seeds. In each trial, we train an imitator by the training data collected by a single self-supervised data collection method. At the beginning of each episode, the imitator receives a sequence of observations {x̂0, x̂1, · · · , x̂T } from a successful expert demonstration. At each timestep t, the imitator infers an action ât from an expert observation x̂t+1 and its current observation xt by Eq. (4). We periodically evaluate the imitator every 10K timesteps. The evaluation is performed by averaging the success rates of reaching x̂T over 500 episodes. The configuration of the imitator and the hyperparameters of the baselines are summarized in the supplementary material.
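The evaluation procedure can be summarized by the following sketch, where `env`, `imitator`, and `expert_obs` are hypothetical interfaces standing in for the simulated task, the trained inverse dynamics model of Eq. (4), and a recorded expert observation sequence:

```python
def evaluate_imitator(env, imitator, expert_obs, episodes=500):
    """Average success rate of reproducing an expert observation sequence (Section 4.1.2)."""
    successes = 0
    for _ in range(episodes):
        x = env.reset()
        h = imitator.initial_state()
        for t in range(len(expert_obs) - 1):
            # The imitator infers the action from its current observation and
            # the next expert observation, updating its recurrent state.
            a, h = imitator.predict(x, expert_obs[t + 1], h)
            x = env.step(a)
        successes += int(env.reached(expert_obs[-1]))
    return successes / episodes
```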
4.1.3 COLLECTION OF EXPERT DEMONSTRATIONS
For each task mentioned in Section 4.1.1, we first randomly configure task-relevant settings (e.g., goal position, initial state, etc.). We then collect demonstrations from non-trivial and successful episodes performed by a pre-trained expert agent (Andrychowicz et al., 2017). Please note that the collected demonstrations only contain sequences of observations. The implementation details of the expert agent and the method for filtering out trivial episodes are presented in our supplementary material.
4.1.4 BASELINE METHODS
We compare our proposed methodology with the following four baseline methods in our experiments.
• Random: This method collects training samples by random exploration. We consider it to be an important baseline because of its simplicity and prevalence in a number of research works on self-supervised IL (Agrawal et al., 2016; Nair et al., 2017; Pathak et al., 2018).
• Demo: This method trains the imitator directly with expert demonstrations. It serves as the performance upper bound, as the training data is the same as the testing data for this method.
• Curiosity: This method trains a DRL agent via curiosity (Pathak et al., 2017; 2018) to collect training samples. Unlike the original implementation, we replace its DRL algorithm with PPO, as training should be done on a single thread for a fair comparison with the other baselines. This is also an important baseline due to its effectiveness in Pathak et al. (2018).
• Noise (Plappert et al., 2018a): In this method, noise is injected to the parameter space of a DRL agent to encourage exploration (Plappert et al., 2018a). Please note that its exploratory behavior relies entirely on parameter space noise, instead of using any extrinsic reward. We include this method due to its superior performance and data efficiency in many DRL tasks.
4.2 PERFORMANCE COMPARISON IN ROBOTIC ARM MANIPULATION TASKS
We compare the performance of the proposed method and the baselines on the robotic arm manipulation tasks described in Section 4.1.1. These tasks are especially challenging, as the sample complexity grows considerably in continuous control domains compared to discrete ones. Furthermore, the imitator may not have the complete picture of the environment dynamics, making it more difficult to learn an inverse dynamics model. In FetchSlide, for instance, the movement of the object on the slippery surface is affected by both friction and the force exerted by the gripper. This motivates us to investigate whether the proposed method can help overcome the challenge. In the subsequent paragraphs, we discuss the experimental results in both low- and high-dimensional observation spaces, and plot them in Figs. 2 and 3, respectively. All of the results are obtained by following the procedure described in Section 4.1.2. The shaded regions in Figs. 2 and 3 represent the confidence intervals.
Low-dimensional observation spaces. Fig. 2 plots the learning curves for all of the methods in low-dimensional observation spaces. In all of the tasks, our method yields superior or comparable performance to the baselines except for Demo, which is trained directly with expert demonstrations. In FetchReach, it can be seen that every method achieves a success rate of 1.0. This implies that it does not require a sophisticated exploration strategy to learn an inverse dynamics model in an environment where the dynamics is relatively simple. It should be noted that although all methods reach the same final success rate, ours learns significantly faster than Demo. In contrast, in FetchPush, our method is comparable to Demo, and demonstrates superior performance to the other baselines. Our method also learns drastically faster than all the other baselines, which confirms that the proposed strategy does improve the performance and efficiency of self-supervised IL. Our method is particularly effective in tasks that require an accurate inverse dynamics model. In FetchPickAndPlace, for example, our method surpasses all the other baselines. However, all methods including Demo fail to learn a successful inverse dynamics model in FetchSlide, which suggests that it is difficult to train an imitator when the outcome of an action is not completely dependent on the action itself. It is worth noting that Curiosity loses to Random in FetchPush and FetchSlide, and Noise performs even worse than these
two methods in all of the tasks. We therefore conclude that Curiosity is not suitable for continuous control tasks, and the parameter space noise strategy cannot be directly applied to self-supervised IL. In addition to the quantitative results presented above, we further discuss the empirical results qualitatively. Please refer to our supplementary material for a description of the qualitative results.
High-dimensional observation spaces. Fig. 3 plots the learning curves of all methods in high-dimensional observation spaces. It can be seen that our method performs significantly better than the other baseline methods in most of the tasks, and is comparable to Demo. In FetchPickAndPlace, our method is the only one that learns a successful inverse dynamics model. Similar to the results in Fig. 2, Curiosity is no better than Random in high-dimensional observation spaces. Please note that we do not include Noise in Fig. 3 as it already performs poorly in the low-dimensional settings.
4.3 PERFORMANCE COMPARISON IN ROBOTIC HAND MANIPULATION TASK
Fig. 2 plots the learning curves for each of the methods considered. Please note that Curiosity, Noise and our method are pre-trained with 30K samples collected by random exploration, as we observe that these methods on their own suffer from large errors in an early stage during training, which prevents them from learning at all. After the first 30K samples, they are trained with data collected by their exploration strategy instead. From the results in Fig. 2, it can be seen that Demo easily stands out from the other methods as the best-performing model, surpassing them all by a considerable extent. Although our method is not as impressive as Demo, it significantly outperforms all of the other baseline methods, achieving a success rate of 0.4 while the others are still stuck at around 0.2.
The reason that the inverse dynamics models trained by the self-supervised data-collection strategies discussed in this paper (including ours and the other baselines) are not comparable to the Demo baseline in the HandReach task is primarily due to the high-dimensional action space. It is observed that the data collected by the self-supervised data-collection strategies only cover a very limited range of the state space in the HandReach environment. Therefore, the inverse dynamics models trained with these data only learn to imitate trivial poses, leading to the poor success rates presented in Fig. 2.
4.4 ROBUSTNESS TO ACTION SPACE PERTURBATION
We evaluate the performance of the imitator trained in an environment with action space perturbations to validate the robustness of our adversarial exploration strategy. In such an environment, every action taken by the DRL agent is perturbed by a Gaussian random noise, such that the training samples collected by the DRL agent are not in line with its actual intentions. Please note that we only inject noise during the training phase, as we aim to validate the robustness of the proposed data collection strategy. The scale of the injected noise is specified in the supplementary material. We report the performance change rates of various methods for different tasks in Table 1. The performance change rate is defined as (Pr_{perturb} − Pr_{orig}) / Pr_{orig}, where Pr_{perturb} and Pr_{orig} represent the highest success rates with and without action space perturbations, respectively. From Table 1, it can be seen that our method retains the performance for most of the tasks, indicating that our method is robust to action space perturbations during the training phase. Please note that although Curiosity and Noise also achieve a change rate of 0% in HandReach and FetchSlide, they are not considered robust due to their poor performance in the original environment (Fig. 2). Another interesting observation is that our
method even gains some performance from action space perturbations in FetchPush and HandReach, which we leave as one of our future directions. We thus conclude that our method is robust to action space perturbations during the training phase, making it a practical option in real-world settings.
4.5 ABLATIVE ANALYSIS
In this section, we provide a set of ablative analysis. We examine the effectiveness of our method by an investigation of the training loss distribution, the stabilization technique, and the influence of δ. Please note that the value of δ is set to 1.5 by default, as described in our supplementary material.
Training loss distribution. Fig. 4 plots the probability density function (PDF) of LI (derived from Eq. (5)) by kernel density estimation (KDE) for the first 2K training batches during the training phase. The vertical axis corresponds to the probability density, while the horizontal axis represents the scale of LI . The curves Ours (w stab) and Ours (w/o stab) represent the cases where the stabilization technique described in Section 3.3 is employed or not, respectively. We additionally plot the curve Random in Fig. 4 to highlight the effectiveness of our method. It can be observed that both Ours (w stab) and Ours (w/o stab) concentrate on notably higher loss values than Random. This observation implies that adversarial exploration strategy does explore hard samples for inverse dynamics model.
Validation of the stabilization technique. We validate the proposed stabilization technique in terms of the PDF of LI and the learning curve of the imitator, and plot the results in Figs. 4 and 5, respectively. From Fig. 4, it can be observed that the modes of Ours (w stab) are lower than those of Ours (w/o stab) in most cases, implying that the stabilization technique indeed motivates the DRL agent to favor moderately hard samples. We also observe that for each of the five cases, the mode of Ours (w stab) is close to the value of δ (plotted in a dotted line), indicating that our reward structure presented in Eq. (7) does help to regulate LI (and thus rt) to be around δ. To further demonstrate the effectiveness of the stabilization technique, we compare the learning curves of Ours (w stab) and Ours (w/o stab) in Fig. 5. It is observed that for the initial 10K samples of the five cases, the success rates of Ours (w/o stab) are comparable to those of Ours (w stab). However, their performance degrades drastically during the rest of the training phase. This observation confirms that the stabilization technique does contribute significantly to our adversarial exploration strategy.
Although most of the DRL works suggest that the rewards should be re-scaled or clipped within a range (e.g., from -1 to 1), the unbounded rewards do not introduce any issues during the training process of our experiments. The empirical rationale is that the rewards received by the DRL agent are regulated by Eq. (7) to be around δ, as described in Section 4.5 and depicted in Fig. 4. Without the stabilization technique, however, the learning curves of the inverse dynamics model degrade drastically (as illustrated in Fig. 2), even if the reward clipping technique is applied.
Influence of δ. Fig. 6 compares the learning curves of the imitator for different values of δ. For instance, Ours(0.1) corresponds to δ = 0.1. It is observed that for most of the tasks, the success rates drop when δ is set to an overly high or low value (e.g., 100.0 or 0.0), suggesting that a moderate value of δ is necessary for the stabilization technique. The value of δ can be adjusted dynamically by the adaptive scaling technique presented in Plappert et al. (2018a), which is left as our future direction.
From the analysis presented above, we conclude that the proposed adversarial exploration strategy is effective in collecting difficult training data for the imitator. The analysis also validates that our
stabilization technique indeed leads to superior performance, and is capable of guiding the DRL agent to collect moderately hard samples. This enables the imitator to pursue a stable learning curve.
5 CONCLUSION
In this paper, we presented an adversarial exploration strategy, which consists of a DRL agent and an inverse dynamics model competing with each other for self-supervised IL. The former is encouraged to adversarially collect difficult training data for the latter, such that the training efficiency of the latter is significantly enhanced. Experimental results demonstrated that our method substantially improved the data collection efficiency in multiple robotic arm and hand manipulation tasks, and boosted the performance of the inverse dynamics model in both low- and high-dimensional observation spaces. In addition, we validated that our method is generalizable to environments with high-dimensional action spaces. Moreover, we showed that our method is robust to action space perturbations. Finally, we provided a set of ablative analysis to validate the effectiveness for each of our design decisions. | 1. What is the main contribution of the paper regarding self-supervised imitation learning?
2. What are the strengths and weaknesses of the proposed exploration strategy?
3. How does the reviewer assess the clarity and quality of the writing?
4. What are the concerns regarding the experimental evaluation?
5. How does the proposed method compare to other methods in terms of performance and limitations? | Review | Review
The paper proposes a novel exploration strategy for self-supervised imitation learning. An inverse dynamics model is trained on the trajectories collected from a RL-trained policy. The policy is rewarded for generating trajectories on which the inverse dynamics model (IDM) currently works poorly, i.e. on which IDM predicts actions that are far (in terms of mean square error) from the actions performed by the policy. This adversarial training is performed in purely self-supervised way. The evaluation is performed by one-shot imitation of an expert trajectory using the IDM: the action is predicted from the current state of the environment and the next state in the expert’s trajectory. Experimental evaluation shows that the proposed method is superior to baseline exploration strategies for self-supervised imitation learning, including random and curiosity-based exploration.
Overall, I find the idea quite appealing. I am not an expert in the domain and cannot comment on the novelty of the approach. I found the writing mostly clear, except for the following issues:
- the introduction has not made it crystal clear that the considered paradigm is different from e.g. DAGGER and GAIL in that expert demonstrations are used at inference time. A much wider audience is familiar with the former methods, and this distinction should have been explained more clearly.
- Section 4.2.: “As opposed to discrete control domains, these tasks are especially challenging, as the sample complexity grows in continuous control domains.” - this sentence did not make sense to me. It basically says continuous control is challenging because it is challenging.
- I did not understand the stabilization approach. How exactly does Equation (7) force the policy to produce “not too hard” training examples for the IDM? Figure 4 shows that, on the contrary, it is examples with small L_I that are avoided by using \delta > 0.
- Table 1 - it is a bit counterintuitive that negative numbers are better than positive numbers here. Perhaps instead of policy’s deterioration you could report the relative change, negative when the performance goes down and positive otherwise?
I do have concerns regarding the experimental evaluation:
- the “Demos” baseline approach should be explained in the main text! In Appendix S.7 I see that 1000 human demonstrations were used for training. Why 1000, and not 100 and not 10000? How would the results change? This needs to be discussed. Without discussing this it is really unclear how the proposed method can outperform “Demos”, which it does pretty often.
- it is commendable that 20 repetitions of each experiment were performed, but I am not sure if it is ever explained in the paper what exactly the upper and lower boundaries in the figures mean. Is it the standard deviation? A confidence interval? Can you comment on the variance of the proposed approach, which seems to be very high, especially when I am looking at high-dimensional fetch-reach results?
- the results of “HandReach” experiments, where the proposed method works much worse than “Demos” are not discussed in the text at all
- overall, there is no example of the proposed method making a difference between a “working” and “non-working” system, compared to “Curiosity” and “Random”. I am wondering if improvements from 40% to 60% in such cases are really important. In 7 out of 9 plots the performance of the proposed method is less than 80% - not very impressive. "Demos" baseline doesn't perform much better, but what would happen with 10000 demonstrations?
- there is no comparison to behavioral cloning, GAIL, IRL. Would these methods perform better than learning IDM like "Demos" does?
I think that currently the paper is slightly below the threshold, due to evaluation issues discussed above and overall low performance of the proposed algorithm. I am willing to reconsider my decision if these issues are addressed. |
ICLR | Title
Mastering Spatial Graph Prediction of Road Networks
Abstract
Accurately predicting road networks from satellite images requires a global understanding of the network topology. We propose to capture such high-level information by introducing a graph-based framework that simulates the addition of sequences of graph edges using a reinforcement learning (RL) approach. In particular, given a partially generated graph associated with a satellite image, an RL agent nominates modifications that maximize a cumulative reward. As opposed to standard supervised techniques that tend to be more restricted to commonly used surrogate losses, these rewards can be based on various complex, potentially noncontinuous, metrics of interest. This yields more power and flexibility to encode problem-dependent knowledge. Empirical results on several benchmark datasets demonstrate enhanced performance and increased high-level reasoning about the graph topology when using a tree-based search. We further highlight the superiority of our approach under substantial occlusions by introducing a new synthetic benchmark dataset for this task.
1 INTRODUCTION
Road layout modelling from satellite images constitutes an important task of remote sensing, with numerous applications in navigation. The vast amounts of data available from the commercialization of geospatial data, in addition to the need for accurately establishing the connectivity of roads in remote areas, have led to an increased interest in the precise representation of existing road networks. By nature, these applications require structured data types that provide efficient representations to encode geometry, in this case, graphs, a de facto choice in domains such as computer graphics, virtual reality, gaming, and the film industry. These structured-graph representations are also commonly used to label recent road network datasets (Van Etten et al., 2018) and map repositories (OpenStreetMap contributors, 2017). Based on these observations, we propose a new method for generating predictions directly as spatial graphs, allowing us to explicitly incorporate geometric constraints in the learning process, encouraging predictions that better capture higher-level dataset statistics.
In contrast, existing methods for road layout detection mostly rely on pixel-based segmentation models that are trained on masks produced by rasterizing ground truth graphs. Performing pixelwise segmentation, though, ignores structural features and geometric constraints inherent to the
problem. As a result, minimum differences in the pixel-level output domain can have significant consequences in the proposed graph, in terms of connectivity and path distances, as manifested by the often fragmented outputs obtained after running inference on these models. In order to address these significant drawbacks, we propose a new paradigm where we: (i) directly generate outputs as spatial graphs and (ii) formalize the problem as a game where we sequentially construct the output by adding edges between key points. These key points can in principle come from any off-the-shelf detector that identifies road pieces with sufficient accuracy. Our generation process avoids having to resort to cumbersome post-processing steps (Batra et al., 2019; Montoya-Zegarra et al., 2015) or optimize some surrogate objectives (Máttyus & Urtasun, 2018; Mosinska et al., 2018) whose relation to the desired qualities of the final prediction is disputed. Concurrently, the sequential decision-making strategy we propose enables us to focus interactively on different parts of the image, introducing the notion of a current state and producing reward estimates for a succession of actions. In essence, our method can be considered as a generalization of previous refinement techniques (Batra et al., 2019; Li et al., 2019b) with three major advantages: (i) removal of the requirement for greedy decoding, (ii) ability to attend globally to the current prediction and selectively target parts of the image, and (iii) capacity to train based on demanding task-specific metrics.
More precisely, our contributions are the following:
• We propose a novel generic strategy for training and inference in autoregressive models that removes the requirement of decoding according to a pre-defined order and refines initial sampling probabilities via a tree search.
• We create a new synthetic benchmark dataset of pixel-level accurate labels of overhead satellite images for the task of road network extraction. This gives us the ability to simulate complex scenarios with occluded regions, allowing us to demonstrate the improved robustness of our approach. We plan to release this dataset publicly.
• We confirm the wide applicability of our approach by improving the performance of existing methods on the popular SpaceNet and DeepGlobe datasets.
2 RELATED WORK
Initial attempts to extract road networks mainly revolved around handcrafted features and stochastic geometric models of roads (Barzohar & Cooper, 1996). Road layouts have specific characteristics, regarding radiometry and topology e.g. particular junction distribution, certain general orientation, and curvature (see Fig. 2), that enable their detection even in cases with significant occlusion and uncertainty (Hinz & Baumgartner, 2003). Modern approaches mostly formulate the road extraction task as a segmentation prediction task (Lian et al., 2020; Mattyus et al., 2015; Audebert et al., 2017) by applying models such as Hourglass (Newell et al., 2016) or LinkNet (Chaurasia & Culurciello, 2017). This interpretation has significant drawbacks when evaluated against structural losses, because of discontinuities in the predicted masks. Such shortcomings have been addressed by applying some additional post-processing steps, such as high-order conditional random fields (Niemeyer et al., 2011; Wegner et al., 2013) or by training additional models that refine these initial predictions (Máttyus et al., 2017; Batra et al., 2019). Other common techniques include the optimization of an ensemble of losses. Chu et al. (2019) rely on a directional loss and use non-maximal suppression as a thinning layer, while Batra et al. (2019) calculate orientations of road segments. Although such auxiliary losses somewhat improve the output consistency, the fundamental issue of producing
predictions in the pixel space persists. It remains impossible to overcome naturally occurring road network structures, e.g. crossings of roads in different elevations, see Fig. 3.
Previous failure cases have led to more intuitive conceptualizations of the task. Roadtracer (Bastani et al., 2018) iteratively builds a road network, similar to a depth-first search approach, while Chu et al. (2019) learn a generative model for road layouts and then apply it as a prior on top of a segmentation prediction mask. Proposed graph-based approaches encode the road network directly as a graph, but either operate based on a constrained step-size (Tan et al., 2020) to generate new vertices or operate on a single step (He et al., 2020; Bandara et al., 2022), involving user-defined thresholding to post-process the final predictions. Most similar to our work, Li et al. (2019b) predict locations of key points and define a specific order traversing them, as does Xu et al. (2022). Such autoregressive models have been recently successfully applied with the use of transformers (Vaswani et al., 2017) in a range of applications (Nash et al., 2020; Para et al., 2021a;b; Xu et al., 2022) to model constraints between elements, while their supervised training explicitly requires tokens to be processed in a specific order. This specific order, combined with the fact that only a surrogate training objective is used, introduces limitations, discussed further in the next section. In order to eliminate this order requirement and to optimize based on the desired metric, while attending globally to the currently generated graph, we propose to use RL as a suitable alternative.
When generating discrete outputs such as an unordered set of edges (Zaheer et al., 2017), it is challenging to adapt existing learning frameworks to train generative models (Para et al., 2021b). Instead of optimizing in the image space, however, we are interested in optimizing spatial structured losses by learning program heuristics, i.e. policies. RL has found success in the past in computer vision applications (Le et al., 2021), but mainly as an auxiliary unit with the goal of improving efficiency (Xu et al., 2021) or as a fine-tuning step (Qin et al., 2018). We instead rely on RL to produce the entire graph, exploiting the ability of the framework for more high-level reasoning.
3 METHODOLOGY
We parametrize a road network as a graph G = {V, E} with each vertex vi = [xi, yi]⊤ ∈ V representing a key point on the road surface. The set of edges (vi, vj) ∈ E , corresponds to road segments connecting these key points. We can then generate a probability distribution over roads by following a two-step process: i) generation of a set of vertices and ii) generation of a set of edges connecting them. Formally, for an image I, a road network R is derived as:
R = \arg\max_{V, E} P(V, E \mid I) = \arg\max_{V, E} P(E \mid V, I)\, P(V \mid I).   (1)
The graph nodes typically correspond to local information in an image, and we therefore resort to a CNN-based model to extract key points, providing the set V ′, that sufficiently captures the information in the ground truth graph G. The construction of edges, however, requires higher-level reasoning that can cope with parallel roads, junctions, occlusions, or poor image resolution, among other difficulties.
Considering probabilistic models over sequences and using the chain rule, we can factorize the joint distribution as the product of a series of conditional distributions
P(E \mid V, I; \sigma) = \prod_{n=1}^{N_E} P(e_{\sigma(n)} \mid e_{<\sigma(n)}, V, I),   (2)
where e<σ(n) represents eσ(1), eσ(2), . . . , eσ(n−1) and σ ∈ SNE denotes the set of all permutations of the integers 1, 2, . . . , NE , with NE the number of edges. For our work, we consider the setting where these sequences are upper bounded in length, i.e. NE ≤ Nmax, a reasonable assumption when dealing with satellite images of fixed size. Autoregressive models (ARMs) have been used to solve similar tasks in the past by defining a fixed order of decoding (Oord et al., 2016; van den Oord et al., 2016; Nash et al., 2020; Para et al., 2021a). In our case, this would correspond to sorting all key points by their x and y locations and generating edges for each of them consecutively. We call this the autoregressive order. There are, however, two major drawbacks.
First, the evaluation metrics used for this task define a buffer region in which nodes in the ground truth and the predicted graph are considered to be a match. Therefore, a newly generated edge can be only partially correct, when only partially overlapping with the ground truth graph. This nonsmooth feedback comes in clear contrast to the supervised training scheme of ARMs, minimization of the negative log-likelihood, that assumes perfect information regarding the key points’ locations, i.e. that the sets V and V ′ are the same. In practice, this condition is rarely met, as the exact spatial graph can be represented in arbitrarily many ways by subdividing long edges into smaller ones or due to small perturbation to key points’ locations. It is thus imperative that our model can estimate the expected improvement of adding selected edges, which implicitly can also signal when to appropriately end the generation process.
Second, the requirement to decode according to the autoregressive order introduces a bias and limits the expressiveness of the model (Uria et al., 2014). As a result, it can lead to failures in cases with blurry inputs or occlusions (Li et al., 2019b). Previous solutions include the use of beam search, either deterministic or stochastic (Meister et al., 2021). Beam search does not however eliminate the bias introduced in the selection order of the key points, while suffering from other deficiencies, such as degenerate repetitions (Holtzman et al., 2019; Fan et al., 2018). In order to address these shortcomings, we advocate for a permutation invariant strategy. We present a novel generic strategy, which improves autoregressive models without requiring significantly more computational cost.
3.1 AUTOREGRESSIVE MODEL
We start by introducing a base autoregressive model, illustrated in Fig. 4. Given an image and a set of key points, our model produces a graph by sequentially predicting a list of indices, corresponding to the graph’s flattened, unweighted edge-list. Each forward pass produces probabilities over the set of key points, which leads to a new action after sampling. A successive pair of indices defines an edge as its two endpoints. A special end-of-sequence token is reserved to designate the end of the generation process.
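The sequential generation described above can be sketched as the following greedy decoding loop; `model.next_token_logits` is a hypothetical interface that returns scores over the provided key points plus the end-of-sequence token:

```python
import torch

def decode_edges(model, image_feats, keypoints, max_edges=100):
    """Greedy decoding of a flattened edge list (pairs of key-point indices)."""
    eos = len(keypoints)              # the extra end-of-sequence token index
    tokens, edges = [], []
    while len(edges) < max_edges:
        logits = model.next_token_logits(image_feats, keypoints, tokens)
        idx = int(torch.argmax(logits))
        if idx == eos:
            break
        tokens.append(idx)
        if len(tokens) % 2 == 0:      # every two consecutive tokens form one edge
            edges.append((tokens[-2], tokens[-1]))
    return edges
```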
Following Wang et al. (2018); Smith et al. (2019), we begin by extracting visual features per key point, by interpolating intermediate layers of a ResNet backbone to the key points’ locations, which
are further augmented by position encodings of their locations. We then further process these features using two lightweight Transformer modules. The first transformer (Transformer I in Fig. 4) encodes the features of the key points as embeddings. The second transformer (Transformer II in Fig. 4) takes as input the currently generated edge list sequence, corresponding to the currently partially generated graph. Edges are directly mapped to the embeddings of their comprising key points, supplemented by position and type embeddings, to differentiate between them, as shown in Fig. 5 (a). An additional global image embedding, also extracted by the ResNet, is used to initialize the sequence. The Transformer II module produces a single hidden state, which is linked with theNV′ +1 (corresponding to the provided key points, supplemented by the special end of the generation token) key points’ embeddings by a pointer network (Vinyals et al., 2015), via a dot-product to generate the final distribution. This allows a variable number of actions that depends on the current environment state, instead of using a fixed action space.
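A minimal sketch of the pointer-network step, scoring the key-point embeddings (plus the end-of-sequence token) against the decoder's hidden state via a dot product, is given below; tensor shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def pointer_distribution(hidden_state, keypoint_embeddings, eos_embedding):
    """Score each of the N_V' key points plus an end-of-sequence token (Vinyals et al., 2015)."""
    # hidden_state: (d,), keypoint_embeddings: (N, d), eos_embedding: (d,)
    candidates = torch.cat([keypoint_embeddings, eos_embedding.unsqueeze(0)], dim=0)
    logits = candidates @ hidden_state            # (N + 1,)
    return F.softmax(logits, dim=0)
```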
3.2 AUGMENTED SEARCH
In order to address the problems of greedy decoding (analysed in Section 3), we frame our road extraction task as a classical Markov decision process (MDP). The generation of a graph for every image defines an environment, where the length of the currently generated edge list determines the current step. Let o_t, α_t and r_t correspond to the observation, the action and the observed reward, respectively, at time step t. The aim is to search for a policy that maximizes the expected cumulative reward over a horizon T, i.e., \max_\pi J(\pi) := \mathbb{E}_\pi\left[\sum_{t=0}^{T-1} \gamma^t r_t\right], where γ ∈ (0, 1] indicates the discount factor and the expectation is with respect to the randomness in the policy and the transition dynamics. We set the discount factor to 1 due to the assumed bounded time horizon, and we note that although the dynamics of the environment are deterministic, optimizing the reward remains challenging.
Each action leads to the selection of a new key point, with new edges being added once every two actions. The addition of a new edge leads to a revision of the predicted graph and triggers an intermediate reward
r_t = sc(\mathcal{G}_{gt}, \mathcal{G}_{pred_t}) - sc(\mathcal{G}_{gt}, \mathcal{G}_{pred_{t-1}}),   (3)
where sc(\mathcal{G}_{gt}, \mathcal{G}_{pred_t}) is a similarity score between the ground truth graph \mathcal{G}_{gt} and the current estimate \mathcal{G}_{pred_t}. Discussion of the specific similarity scores used in practice is postponed to Section 3.3. Proper spatial graph generation entails (i) correct topology and (ii) accurate location prediction of individual roads. For the latter, intermediate vertices of degree 2 are essential. We call a road segment (RS) an ordered collection of edges through intermediate vertices of degree d(·) = 2 (or a collection of edges forming a circle):
RS = \{(v_{rs_1}, v_{rs_2}), \ldots, (v_{rs_{k-1}}, v_{rs_k})\} \text{ s.t. } (v_{rs_i}, v_{rs_{i+1}}) \in E \text{ for } i = 1, \ldots, k-1, \quad d(v_{rs_i}) = 2 \text{ for } i = 2, \ldots, k-1, \quad \big(d(v_{rs_1}) \neq 2 \text{ and } d(v_{rs_k}) \neq 2\big) \text{ or } v_{rs_1} = v_{rs_k}.
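The following pure-Python sketch (ours) groups an edge list into road segments according to this definition by walking through degree-2 vertices; for brevity, it omits the special case of isolated circles:

```python
from collections import defaultdict

def road_segments(edges):
    """Group edges into road segments: maximal chains whose interior vertices have degree 2."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    segments, visited = [], set()
    endpoints = [v for v in adj if len(adj[v]) != 2]
    for start in endpoints:
        for nxt in adj[start]:
            edge = frozenset((start, nxt))
            if edge in visited:
                continue
            seg, prev, cur = [start], start, nxt
            visited.add(edge)
            # Walk through degree-2 vertices until another endpoint is reached.
            while len(adj[cur]) == 2:
                seg.append(cur)
                nxt2 = next(n for n in adj[cur] if n != prev)
                visited.add(frozenset((cur, nxt2)))
                prev, cur = cur, nxt2
            seg.append(cur)
            segments.append(seg)
    return segments

# A T-junction: vertex 2 has degree 3, so three segments meet there.
print(road_segments([(0, 1), (1, 2), (2, 3), (2, 4)]))  # [[0, 1, 2], [2, 3], [2, 4]]
```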
During the progression of an episode (i.e. the sequential generation of a graph), the topological nature of the similarity scores in Eq. 3 implies that the effect of each new edge to the reward will be reflected mostly once its whole corresponding road segment has been generated. To resolve the ambiguity in the credit assignment and allow our agent to look ahead into sequences of actions, we rely on Monte Carlo Tree Search (MCTS) to simulate entire sequences of actions. We use a state-of-the-art search-based agent, MuZero (Schrittwieser et al., 2020), that constructs a learnable model of the environment dynamics, simulating transitions in this latent representation and leading to significant computational benefits.
Specifically, MuZero requires three distinct parts (see also Fig. 5):
1. A representation function f that creates a latent vector of the current state ht = fθ(ot). For this step, we use the autoregressive model, as shown in Fig. 4. Our current latent representation ht contains the graph’s hidden state, along with the key points’ embeddings used to map actions to latent vectors. As key points remain the same throughout the episode, image-based features (Components (1) and (2) in Fig. 4) are only computed once.
2. A dynamics network g, for which we use a simple LSTM (Hochreiter & Schmidhuber, 1997), that predicts the effect of a new action by predicting the next hidden state and the expected reward: (ĥt, r̂t) = gθ(h̃t−1, αt). We can replace h̃t−1 with the latent representation ht−1, or with its previously computed approximation ĥt−1, for tree searches of depth larger than 1.
3. A prediction network ψ, that estimates the policy and the value for the current state (pt+1, vt) = ψθ(h̃t). We compute the policy via a pointer network, as described in Section 3.1. Value estimates are produced by a simple multi-layer network.
The dynamics network guides the search and evaluates the expected reward of actions. For every newly generated edge, we also explicitly inform the network regarding the creation of new intersections and the expected relative change in the overall road surface generated via embeddings (see Fig. 5). By using the dynamics network, we bypass the expensive call to the decoder module during the search, and can instead approximate small modifications in the latent representation directly. For our experiments, the dynamics network requires up to 90 times less floating-point operations to simulate trajectories, compared to using the edge embeddings’ decoder. Effectively, our method does not involve significantly more computation budget compared to the base autoregressive model.
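A skeleton of the dynamics and prediction networks, written as PyTorch modules with illustrative layer sizes (the exact architectures are not specified at this level of detail), may look as follows; the representation function f is the autoregressive model of Fig. 4 and is therefore omitted:

```python
import torch
import torch.nn as nn

class Dynamics(nn.Module):
    """g: predicts the next latent state and the expected reward of an action."""
    def __init__(self, latent_dim, action_dim):
        super().__init__()
        self.rnn = nn.LSTMCell(action_dim, latent_dim)
        self.reward_head = nn.Linear(latent_dim, 1)

    def forward(self, h_prev, c_prev, action_embedding):
        h, c = self.rnn(action_embedding, (h_prev, c_prev))
        return h, c, self.reward_head(h)

class Prediction(nn.Module):
    """psi: estimates the value of the current latent state; the policy itself
    is produced by the pointer network over key-point embeddings (Section 3.1)."""
    def __init__(self, latent_dim):
        super().__init__()
        self.value_head = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, h):
        return self.value_head(h)
```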
3.3 EVALUATION METRICS
We adopt the same evaluation metrics both for comparing different methods and as the incremental rewards for our agent, via Eq. 3. For pixel-level predictions, we use the relaxed versions of precision, recall and intersection over union, i.e., Correctness/Completeness/Quality (CCQ) (Wiedemann et al., 1998; Wang et al., 2016). As graph-theoretic metrics, we use APLS (Van Etten et al., 2018) and additionally include the metrics introduced in Citraro et al. (2020) that compare Paths, Junctions and Sub-graphs of the graphs in question, producing precision, recall and F1 scores, respectively. More details can be found in Appendix E.1.
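As an illustration, one common formulation of the relaxed pixel-level metrics with a pixel buffer can be sketched as follows; the buffer size and the exact matching rule are assumptions rather than the specific values used in our evaluation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def relaxed_ccq(pred_mask, gt_mask, buffer_px=2):
    """Relaxed Correctness/Completeness/Quality: a pixel counts as matched
    if it lies within `buffer_px` pixels of the other mask."""
    pred_mask, gt_mask = pred_mask.astype(bool), gt_mask.astype(bool)
    struct = np.ones((2 * buffer_px + 1, 2 * buffer_px + 1), dtype=bool)
    gt_buf = binary_dilation(gt_mask, structure=struct)
    pred_buf = binary_dilation(pred_mask, structure=struct)
    tp = np.logical_and(pred_mask, gt_buf).sum()                   # predictions near GT
    fp = pred_mask.sum() - tp                                      # predictions far from GT
    fn = gt_mask.sum() - np.logical_and(gt_mask, pred_buf).sum()   # GT far from predictions
    correctness = tp / max(pred_mask.sum(), 1)
    completeness = (gt_mask.sum() - fn) / max(gt_mask.sum(), 1)
    quality = tp / max(tp + fp + fn, 1)
    return correctness, completeness, quality
```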
4 EXPERIMENTS
Implementation details We resize images to 300 × 300 pixels, standardizing according to the training set statistics. For exploration, we initialize workers using Ray (Moritz et al., 2017) that execute episodes in the environment. For training, we unroll the dynamics function for td = 5 steps and use priority weights for the episode according to the differences between predicted and target values. Our algorithm can be considered as an approximate on-policy TD(λ) (Sutton & Barto, 2018) due to the relatively small replay buffer. We reanalyse older games (Schrittwieser et al., 2020) to provide fresher target estimates. Unvisited graph nodes are selected based on an upper confidence score, balancing exploration with exploitation, similar to Silver et al. (2018). We add exploration noise as Dirichlet noise and select actions based on a temperature-controlled sampling procedure, whose temperature is reduced during training.
Given the limited high-quality available ground truth labels (Singh et al., 2018) and to accelerate training, we employ modifications introduced in EfficientZero (Ye et al., 2021). We investigate adding supervision to the environment model and better initialize Q-value predictions similar to the implementation of Elf OpenGo (Tian et al., 2019). We further scale values and rewards using an
invertible transform inspired by Pohlen et al. (2018). Here, we predict values and rewards as categorical distributions over a discrete support, as fully connected networks are biased towards learning low-frequency representations (Jacot et al., 2018). Selecting new actions involves generating simulations, which can be done expeditiously given the small dimension of the latent space and the modest size of the dynamics network. Finally, to generate key points, we skeletonize segmentation masks provided by any baseline segmentation model, by thresholding the respective segmentation masks produced and applying RDP-simplification (Douglas & Peucker, 1973; Ramer, 1972). Selecting an appropriate threshold and subdividing larger edges guarantees that the generated set V ′ adequately captures most of the ground truth road network, leaving the complexity of the problem for our model to handle.
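A sketch of this key point generation step, assuming scikit-image skeletonization and the rdp package for Ramer-Douglas-Peucker simplification; the threshold and epsilon values are illustrative.

import numpy as np
from skimage.morphology import skeletonize
from rdp import rdp

def extract_keypoints(prob_mask, threshold=0.3, epsilon=2.0):
    # Threshold the soft segmentation mask and reduce it to a one-pixel-wide skeleton.
    skeleton = skeletonize(prob_mask > threshold)
    ys, xs = np.nonzero(skeleton)
    # In practice the skeleton is first traced into ordered polylines; for brevity
    # we only illustrate simplifying a single ordered polyline of pixel coordinates.
    polyline = np.stack([xs, ys], axis=1)
    return rdp(polyline, epsilon=epsilon)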
4.1 SYNTHETIC DATASET
We generate a dataset of overhead satellite images of a synthetic town using CityEngine1. We randomly specify vegetation of varying height and width along the side walks of the generated streets, leading inadvertently to occlusions of varying difficulty. The simulated environment allows specifying pixel-perfect masks regarding both roads and trees occluding the road surface based on the provided camera parameters (Kong et al., 2020). We can hence tune the complexity of the task and quantify the benefits of our approach for varying levels of difficulty. We defer more details regarding the generation process and dataset examples to the supplementary material.
We compare our method against a LinkNet model (Chaurasia & Culurciello, 2017) trained on our dataset, a popular segmentation model that has been widely used in the remote sensing community (Li et al., 2019a). Even in this synthetic and thus less diverse scenario, the tendency of segmentation models to rely mostly on local information, with no explicit ability for longer-range interactions, is evident. Fig. 6 illustrates examples of such over-segmented predictions and how our approach can improve on them. We also define a ’difficulty’ attribute per synthetic satellite image, quantifying the occlusions as the percentage of the ground truth road mask that is covered. We observe a considerable absolute improvement in topological metric scores when training our model on this synthetic dataset, compared to the LinkNet baseline, for varying image difficulty.
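The 'difficulty' attribute reduces to a simple ratio of masks, sketched below; the mask names are assumptions, and both masks are binary arrays of the image size.

import numpy as np

def occlusion_difficulty(road_mask, vegetation_mask):
    # Fraction of ground truth road pixels covered by the occluding vegetation mask.
    road = road_mask.astype(bool)
    occluded = road & vegetation_mask.astype(bool)
    return occluded.sum() / max(int(road.sum()), 1)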
4.2 REAL DATASETS
We evaluate our method on the SpaceNet and DeepGlobe datasets. We use the same train-test splits as in Batra et al. (2019) to promote reproducibility, while results are reported for the final combined graph on the original image scale. No pre-training on the synthetic dataset takes place. Further details regarding pre-processing are available in Appendix E.2.
1https://www.esri.com/en-us/arcgis/products/arcgis-cityengine/overview
4.2.1 COMPARISON TO BASELINES
We first verify the effectiveness of the proposed approach under an ideal scenario where the key points conditioned upon correspond to those from the ground truth. In the interest of space, we point the reader to Appendix A and Table 3. Subsequently, we move to the primary task of predicting spatial graphs without the ground-truth graph information, but extract key points via the aforementioned process and train using the described topological metrics directly. The previous baselines are not applicable in this case, due to the lack of ground truth information, so we instead compare against the following: we explore powerful CNN architectures by training a segmentation model with a ResNet backbone. We evaluate DeepRoadMapper (Máttyus et al., 2017), a model that refines previous segmentations by adding connections along possible identified paths. As done by Batra et al. (2019), we notice that in complex scenarios the effect of this post-processing step is not always positive. We also evaluate against LinkNet (Chaurasia & Culurciello, 2017) and Orientation (Batra et al., 2019), which is trained to predict road surfaces and orientations simultaneously.
Quantitative results in Table 1 and visual inspection in Table 2 affirm that the global context and the gradual generation incite a better understanding of the scene, leading to topological metric results that consistently outperform the baselines. We remark that our predictions are more topologically consistent, with fewer shortcomings such as double roads, fragmented roads, and over-connections. This is further supplemented by comparing the statistics of the predicted spatial graphs in Fig. 7. We further showcase the transferability of our model by employing it with no fine-tuning (apart from dataset-specific image normalization) on the DeepGlobe dataset. We can refine previous predictions by adding missing edges, leading to more accurate spatial graph predictions, as shown in Table 1. This confirms our conjecture that road structures and geometric patterns are repeated across diverse cities’ locations.
4.2.2 ABLATION STUDY
We experimented with attending to image features in the two transformer modules by extracting per-patch visual features from the conditioning image, H^img = [h^img_1, h^img_2, . . .], as done in the Vision Transformer (Dosovitskiy et al., 2020). This did not lead to significant improvements, which we attribute to over-fitting. In Fig. 8 we highlight the relative importance of some additional components for the final predictions. As efficiency is also of particular importance to us, we further visualize the effect of varying the simulation depth of the dynamics network during training. Surprisingly
perhaps, our method performs consistently better than baselines, even for a small overall simulation length, as this already enables better policy approximations.
In Appendix A we provide incremental results for the task of predicting road networks based on an optimal set of key points. In Appendix B we provide insights concerning interpretability and further comparisons to baselines based on the varying difficulty of the predicted underlying road networks. In Appendix C we give more information regarding the generation of the synthetic dataset, and in Appendix D more information regarding the model architecture. Finally, in Appendix E we provide more implementation decisions, including details on exactly how key points are generated and how individual patch-level predictions are fused together. More examples of full environment trajectories are given in Appendix F. We stress that our method can act on partially initialized predictions, registering it also as a practical refinement approach on top of any baseline. Initializing our model according to the ARM model allows a moderately quick fine-tuning phase. In combination with the learned environment model, which circumvents expensive calls to the edge embedding model for each simulation step in the MCTS, this allows us to train even on a single GPU.
5 CONCLUSIONS
We presented a novel reinforcement learning framework for generating a graph as a variable-length edge sequence, where a structure-aware decoder selects new edges by simulating action sequences into the future. Importantly, this allows the model to better capture the geometry of the targeted objects. This is confirmed by our experimental results, since our approach consistently produces more faithful graph statistics. One advantage of the proposed method is that the reward function is based on (non-continuous) metrics that are directly connected to the application in question. Our approach does not require significantly more computational resources compared to state-of-the-art supervised approaches, while, in addition, it can be used to refine predictions from another given model. We also remark that the direct prediction of a graph enables the concurrent prediction of meta-information about the edges, including, for instance, the type of road (highway, primary or secondary street, biking lane, etc.).
Our approach opens the door to several directions for future work. For example, we have assumed that a pre-defined model gives the location of key points, but one could instead augment the action space to propose new key points’ locations. Other promising directions include the direct prediction of input-dependent graph primitives, e.g. T-junctions or roundabouts. Finally, we emphasize that our approach is suitable to a wide variety of applications where autoregressive models are typically used, and it is of special interest when there is a need for complex interactions or constraints between different parts of an object.
6 REPRODUCIBILITY STATEMENT
We have taken multiple steps to ensure reproducibility of the experiments. We refer the reader to Appendix E for a complete description of the training protocol. We have also released the code as part of the supplementary material, including scripts on how to reproduce our results.
A MORE EXPERIMENTS
We first assess the performance of our proposed method in an ideal scenario where the key points correspond to the ones from the ground truth. To make training and inference harder, we insert additional key points as (1) random intermediate points between known edges and (2) randomly sampled locations in the images. Here, our assumption in Section 3 that the set V ′ suffices to generate the ground truth graph holds by construction. We compare our method against several baselines that learn to connect edges between key points, using the same feature extraction pipeline, described in Section 3.1, as our model. Cls is a classification network that predicts for all pairs of key points a value in {0, 1} corresponding to the existence of an edge. GCN implements a graph neural network that directly predicts the adjacency matrix. We also present an autoregressive version of our model, ARM, that is trained with a cross-entropy loss to predict the pre-defined ordered sequence of key points. We use this model to initialize ours. Results are presented in Table 3.
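For clarity, a minimal sketch of the Cls baseline, which scores every pair of key point embeddings with a small MLP; the layer sizes are assumptions.

import torch
import torch.nn as nn

class PairwiseEdgeClassifier(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, keypoint_emb):
        # keypoint_emb: (N, dim) -> (N, N) matrix of edge probabilities
        n = keypoint_emb.size(0)
        a = keypoint_emb.unsqueeze(1).expand(n, n, -1)
        b = keypoint_emb.unsqueeze(0).expand(n, n, -1)
        logits = self.mlp(torch.cat([a, b], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)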
As expected, the ARM model achieves a low perplexity score when evaluated against the corresponding sequence, ordered according to the autoregressive order, but suffers in predicting the edges when they appear in random order. The ARM underperforms, as far as the desired final metric (here APLS) is concerned, because of frequent early terminations and its implicit inability to revisit key points. Even though our model is developed upon this autoregressive model, it generates tokens in an arbitrary arrangement. Reward and value estimates enable a different training scheme that correlates closely with the desired objective.
B INTERPRETABILITY
We visualize attention (of the Transformer II module), using the attention flow proposed in Abnar & Zuidema (2020), in Fig. 9. To create attention scores per edge, we aggregate scores for the pair of tokens that define each edge. New predictions pay increased attention to already generated junctions, parallel road segments, and other edges belonging to the same road segment.
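For reference, a sketch of aggregating attention across layers; this shows the simpler attention-rollout variant from Abnar & Zuidema (2020) rather than the max-flow-based attention flow used for Fig. 9, and per-edge scores are then obtained by aggregating the scores of an edge's two endpoint tokens.

import numpy as np

def attention_rollout(attentions):
    # attentions: list of (num_tokens, num_tokens) head-averaged attention maps, one per layer
    rollout = np.eye(attentions[0].shape[0])
    for attn in attentions:
        attn = 0.5 * attn + 0.5 * np.eye(attn.shape[0])   # account for residual connections
        attn = attn / attn.sum(axis=-1, keepdims=True)
        rollout = attn @ rollout
    return rollout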
We also compare, in Fig. 10, the APLS results achieved while varying the difficulty of the ground truth images in terms of the total number of junctions (vertices with a degree greater than 2) and in terms of the average length of the road segments that are present. Our method explicitly captures information regarding the degree of the key points during the search, while it can better encode global information, even across larger distances. It is perhaps not a surprise, then, that it outperforms the baselines more convincingly as the difficulty of the ground truth road network increases.
Finally, we visualize an example of an imagined rollout trajectory at a single step of our algorithm in Fig. 11. During a single inference step, our method uses tree search to look ahead into sequences of actions in the future. For our example, we have chosen a relatively smaller number of simulations (10) for better visual inspection. We also show the corresponding environment states reached, which are, however, not explicitly available to the model, as it is searching and planning using a learned model of the environment.
C DATASET CREATION
We use CityEngine, a 3D modelling software for creating immersive urban environments. We generate a simple road network and apply a rural city texture on the created city blocks, provided by Kong et al. (2020). We then uniformly generate trees of varying height and size along the side walks of the generated streets. We then iteratively scan the generated city by passing a camera of specific orientation and height. We repeat the same process after suitable modifications to the texture, for the generation of the street masks, as well as the vegetation masks, that correspond to only the plants along the side walks. Some examples of the generated images are provided in Fig. 12. We note that additional occlusion can be caused by the relation of the camera with the 3D meshes corresponding to buildings. These occlusions are, however, not captured by our generated masks, and we can expect them to contribute partially to the fragmented segmentation results.
We train a segmentation-based model, LinkNet, as our baseline. We rasterize the ground truth graph to create pixel-level labels and train by maximizing the intersection over union, which is commonly done in practice. We note that there is a tradeoff between the nature of the predictions and the choice of the line-width with which the ground truth graph is rasterized. A large width achieves better results in terms of connectivity of the predicted graph but results in poorer accuracy in the final key points’ locations. Furthermore, when providing a large width, areas in the image with more uncertainty, e.g. vegetation that is not above a road segment, are also predicted as road networks with high certainty, leading to spurious, disconnected road segments. To highlight the advantages of our method compared to this baseline and in order to promote more meaningful predictions, we select a relatively smaller width.
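A sketch of the rasterization step discussed above, where the width argument controls the trade-off between connectivity and key point accuracy; OpenCV drawing is an assumption, and any line rasterizer would work.

import numpy as np
import cv2

def rasterize_graph(edges, image_shape, width=3):
    # edges: iterable of ((x1, y1), (x2, y2)) endpoints in pixel coordinates
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    for (x1, y1), (x2, y2) in edges:
        cv2.line(mask, (int(x1), int(y1)), (int(x2), int(y2)), color=1, thickness=width)
    return mask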
D ARCHITECTURE DETAILS
As an image backbone model, we use a ResNet-18 for the synthetic dataset and a ResNet-50 for the real dataset experiments. We extract features at four different scales, after each of the 4 ResNet layers. To extract features for each key point, we interpolate the backbone feature maps based on the key points’ locations. We use different learned embeddings based on the actual key points’ locations. For the key points embedding model, we use a transformer encoder with 16 self-attention layers and a dropout rate of 0.15. We use layer normalization and GELU activation functions.
For the edge-embeddings model, we use the respective key points embedding, along with learned position and type embeddings, which we all sum together. As aforementioned, we can initialize the current edge sequence based on previous predictions, allowing our model to refine any initial prediction provided. Again, we use the same transformer architecture with 16 self-attention layers, and a dropout rate of 0.15.
Finally, the architecture of the dynamics network and the value prediction network are shown in Fig. 13. For the value estimation, we also provide the current environment step, as we execute steps in an environment with a bounded time horizon.
E IMPLEMENTATION DETAILS
E.1 EVALUATION METRICS
APLS (Van Etten et al., 2018) constitutes a graph theoretic metric that faithfully describes routing properties. APLS is defined as
APLS = 1 - \frac{1}{N_p} \sum_{p_{v_1 v_2} < \infty} \min\left\{ 1, \frac{|p_{v_1 v_2} - p_{v_1' v_2'}|}{p_{v_1 v_2}} \right\},   (4)
where v and v′ denote a source node and its closest point on the predicted graph, if such a point exists within a buffer. Np denotes the number of paths sampled and pv1v2 the length of the shortest path between two nodes. Similarly, the Too Long Too Short (TLTS) metric (Wegner et al., 2013) compares lengths of the shortest paths between randomly chosen points of the two graphs, classifying them as infeasible, too long or too short (2l+2s), or correct if the length of the path on the predicted graph does not differ by more than a threshold (5%) from the ground truth path. Since small perturbations to the predicted graph can have larger implications for pixel-level predictions, the definitions of precision, recall and intersection over union were relaxed in Wiedemann et al. (1998); Wang et al. (2016), leading to the metrics Correctness/Completeness/Quality (CCQ).
Still, some types of errors, such as double roads or over-connections, are not penalized by the above metrics (Citraro et al., 2020). We therefore additionally include new metrics introduced in Citraro et al. (2020) that compare Paths, Junctions and Sub-graphs of the graphs in question, producing respectively precision, recall and f1 scores. For the final similarity score used in Eq. 3, we use a linear combination of the aforementioned metrics; more details are available in the supplementary material.
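A rough sketch of the APLS computation of Eq. 4 using networkx; the matching of ground truth nodes to their closest predicted counterparts within the buffer is assumed to be given (node_map), and each edge carries a 'length' attribute with its ground distance.

import networkx as nx

def apls(gt, pred, node_map, path_pairs):
    total, n_paths = 0.0, 0
    for v1, v2 in path_pairs:
        try:
            p_gt = nx.shortest_path_length(gt, v1, v2, weight="length")
        except nx.NetworkXNoPath:
            continue                                  # only finite ground truth paths are counted
        n_paths += 1
        u1, u2 = node_map.get(v1), node_map.get(v2)
        if u1 is None or u2 is None or not nx.has_path(pred, u1, u2):
            total += 1.0                              # a missing path incurs the maximal penalty
            continue
        p_pred = nx.shortest_path_length(pred, u1, u2, weight="length")
        total += min(1.0, abs(p_gt - p_pred) / p_gt)
    return 1.0 - total / max(n_paths, 1)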
E.2 DATASET INFORMATION
We use the following datasets to train our models, i.e. baselines and our newly proposed RL agent.
SpaceNet (Van Etten et al., 2018) includes a road network of over 8000 km over four different cities: Vegas, Paris, Shanghai, and Khartoum, where the complexity, quality, and regularity of the road network depend on the city of origin. Satellite images are provided at a resolution of 1300 × 1300 pixels, corresponding to a ground resolution of 30 cm per pixel. We split the 2780 total images into crops of size 400 × 400 with an overlap of 100 pixels for training. To better highlight the diversity of the satellite images from these four different locations, we have included some randomly sampled examples in Fig. 14.
DeepGlobe (Demir et al., 2018) contains satellite images from 3 different locations with pixel-level annotations. Images have a resolution of 1024 × 1024, with a ground resolution of 50 cm per pixel. We crop the 6226 images into tiles, leading to a similar ground truth resolution per pixel compared to SpaceNet.
E.3 TRAINING DETAILS
At each MCTS search step, we perform several simulations from the root state s0 for a number of steps k = 1, . . . and select an action that maximizes the upper confidence bound (Silver et al., 2018),
a^k = \arg\max_a \left[ Q(s, a) + P(s, a) \cdot \frac{\sqrt{\sum_b N(s, b)}}{1 + N(s, a)} \left( c_1 + \log\left( \frac{\sum_b N(s, b) + c_2 + 1}{c_2} \right) \right) \right],
where N(s, a), Q(s, a), P (s, a) correspond to the visit counts, mean values and policies, as calculated by the current search statistics. Constants c1, c2 balance exploration and exploitation. Based on a state sk−1 and a selected action ak, a new state sk and reward r̂k are estimated through the dynamics network. We update the mean values based on bootstrapped values of the estimated value functions and rewards. We experimented with training the reward and value support predictions with both mean squared error (MSE) and cross-entropy loss. We opted for MSE because of its stability. For a more in-depth description of the training scheme of MuZero we recommend Schrittwieser et al. (2020) and Ye et al. (2021).
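The selection rule above reduces to a per-action score; a small sketch follows, where the constants take the values commonly used with MuZero and are assumptions here.

import math

def puct_score(q, prior, visit_count, parent_visits, c1=1.25, c2=19652):
    # q: mean value Q(s, a); prior: policy P(s, a); parent_visits: sum over b of N(s, b)
    exploration = prior * math.sqrt(parent_visits) / (1 + visit_count)
    exploration *= c1 + math.log((parent_visits + c2 + 1) / c2)
    return q + exploration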
As hinted in the main text, we train using intermediate rewards, a linear combination of topological metrics. We experimented using a variety of different scores and metrics, but ended up using APLS, Path-based f1, Junction-based f1 and Sub-graph-based f1 at a relative scale of (0.35, 0.25, 0.25, 0.15). We found the Sub-graph-based f1 to be more sensitive to small perturbations and therefore weighted it less in the final combination. The metrics mentioned above are highly correlated, as examined in Batra et al. (2019). This correlation, though, holds when comparing the final predictions. Intermediate incremental rewards are more independent, so we still found it useful to use a mixture of them. Initially, to let our network learn basic stable rewards, we use the segmentation prediction mask as target. That means that we train our model to predict the graph that can be extracted after post-processing the segmentation model’s prediction.
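The resulting combined similarity score used in Eq. 3 is then a straightforward weighted sum; the individual metric functions are assumed callables here.

def combined_score(gt, pred, apls, path_f1, junction_f1, subgraph_f1):
    # Weights as stated above: APLS 0.35, Path f1 0.25, Junction f1 0.25, Sub-graph f1 0.15.
    return (0.35 * apls(gt, pred) + 0.25 * path_f1(gt, pred)
            + 0.25 * junction_f1(gt, pred) + 0.15 * subgraph_f1(gt, pred))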
After pre-training the autoregressive model, we experimented with fine-tuning using RL with two different learning rates, where a rate slower by a factor in (0, 1] was chosen for the pre-trained modules. Here, we noticed that the model still performed better than the ARM baseline. However, since it struggles to escape the autoregressive order, its results are less optimal compared to the single-learning-rate model.
We finally note that by avoiding type and position encoding in the Transformer II module, we can ensure the embedded graph is permutation invariant regarding the sequence of edges and the order of key points within an edge. Our search graph can then be formulated into a directed acyclic graph, circumventing unnecessary division of the search space (Browne et al., 2012; Childs et al., 2008), enabling more efficient sampling (Saffidine et al., 2012). These updated search statistics are cumbersome to compute, though, and we found no significant efficiency improvement. They do, however, confirm our model’s potential ability to handle the input graph as an unordered set, as the problem suggests.
E.4 PRODUCING KEY POINTS
We initially train a segmentation model for predicting pixel-level accurate masks of the road network. For this step, we can use any model from the literature. We extract the predicted graph by
skeletonizing the predicted mask and simplifying the graph with a smoothing threshold. We then sample intermediate vertices along the edges that are longest in terms of ground length, to enlarge the action space. We illustrate a toy example of such a process in Fig. 15. To accelerate inference, we can also initialize our prediction graph based on the provided segmentation mask. In such a case, our method more closely resembles previous refinement approaches. We additionally remove edges of connected components with small overall size and edges belonging to road segments leading to dead ends (that is, vertices of degree one), though keeping the corresponding key points in the environment state. Thus, if our model deems the existence of the respective edges necessary, it can add them once more. In future work, we plan to further investigate augmenting the action space with the ability to remove edges, which would not require such a pre-processing strategy.
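A sketch of this pruning of the initial graph with networkx; the minimum component size is an assumed hyperparameter, and only edges are removed, so the corresponding key points stay available as actions.

import networkx as nx

def prune_initial_graph(g, min_component_size=4):
    g = g.copy()
    # Drop edges of small connected components.
    for component in list(nx.connected_components(g)):
        if len(component) < min_component_size:
            g.remove_edges_from(list(g.subgraph(component).edges()))
    # Iteratively strip dead-end branches (chains ending in a degree-one vertex).
    changed = True
    while changed:
        changed = False
        for node in list(g.nodes()):
            if g.degree(node) == 1:
                g.remove_edge(node, next(iter(g.neighbors(node))))
                changed = True
    return g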
E.5 COMBINING PREDICTIONS
When creating the final per-image prediction, we initially simply generated predictions on non-overlapping patches and fused them together. To overcome small pixel-location differences in the predicted graphs, we fuse by rasterizing the individual graphs in the pixel domain with a line width larger than 1. What we found more successful was to perform inference on overlapping patches and to initialize the currently predicted graph based on the predictions made so far. This is particularly useful, as road segments are often close to the boundaries of our cropped image. Individual inference and simple fusion can often lead to over-connected predictions. We visualize a toy example of such a process in Fig. 16.
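A sketch of initializing a patch's episode from predictions made so far: edges of the global graph that fall inside the current (overlapping) crop are converted to local coordinates and used as the starting edge sequence; coordinate conventions are assumptions.

def initial_edges_for_patch(global_edges, x0, y0, size):
    # Keep only edges whose two endpoints lie inside the crop [x0, x0+size) x [y0, y0+size).
    local = []
    for (x1, y1), (x2, y2) in global_edges:
        inside = all(x0 <= x < x0 + size and y0 <= y < y0 + size
                     for x, y in ((x1, y1), (x2, y2)))
        if inside:
            local.append(((x1 - x0, y1 - y0), (x2 - x0, y2 - y0)))
    return local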
For the segmentation baselines, unless specified in their respective documentation, we perform inference by cropping images to overlapping patches and normalizing the final predicted mask based on the number of overlapping predictions per pixel location. We also pad images around their boundary, as done in Acuna et al. (2019). We note some small differences in the final scores for the Orientation model (Batra et al., 2019) and the SpaceNet dataset, compared to the ones in Citraro et al. (2020). We assume these are an outcome of different chosen parameters for the calculation of metrics. We keep these parameters fixed when calculating scores for all methods.
E.6 MORE COMPARISONS WITH BASELINES
We elaborate more on the evaluation method for Sat2Graph. The authors provided predictions corresponding only to a center crop of the original SpaceNet dataset images. For each 400 × 400 pixel image, predictions are made for the center 352 × 352 area of the image. One could expect slightly better results if trained under the same conditions, but the gap still seems large enough to show the merits of our approach.
Other baselines, like Neural Turtle Graphics (Chu et al., 2019) and Topological Map Extraction (Li et al., 2019b), do not have an implementation available. We do not compare against VecRoad (Tan et al., 2020) or RoadTracer (Bastani et al., 2018), as different datasets were used for the current evaluations. These baselines, though, have already been shown in the literature to underperform relative to methods that we compare against.
F MORE EXAMPLES
We showcase in Fig. 17 and Fig. 18 more examples of the environment state progression for the synthetic dataset. | 1. What is the main contribution of the paper regarding road vectorization using reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to recent relevant works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the pipeline structure, experimental design, or lack of attempts to adapt the framework? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a reinforcement learning approach to road vectorization. In contrast with previous works, the task is modeled as generating a graph as a variable-length edge sequence.
The pipeline consists of multiple stages: semantic segmentation from RGB images (this results in binary maps), a transformer-based autoregression model (resulting in a collection of road graphs), and the RL part: MuZero adapted for road graph extraction. It achieves competitive performance on the DeepGlobe and SpaceNet datasets.
The authors also introduce a synthetic dataset that helps pretraining the autoregression model (need to clarify with the authors on that).
Strengths And Weaknesses
Strengths
competitive performance on two public datasets
interesting synthetic dataset with path occlusion difficulty analysis
Weaknesses
more recent [relevant] related work that yields much improved results such as RNGDet (ignored) or VecRoad (different split, mentioned but ignored) is not compared against; what do you mean by "these baselines have been already shown to underperform though in the literature"? Who has shown that and where?
small performance improvements on both datasets, seems to struggle at times, sometimes beaten by a 2017 paper
convoluted pipeline, RGB segmentation >> point extraction from sampling >> auto-regression model >> muZero; feels like the whole system was glued in place to fix the problems from the previous step (e.g., the sampled keypoints are not perfect, throw in the auto-regression transformer model; its graphs are not that great, throw in the RL to figure out which graphs are usable); why not try predicting the graph directly?
using RL for navigation / computing routes is not novel; for example, [1*] presented a similar concept, but with street views instead of satellite images.
[1*]Mirowski, P., Grimes, M., Malinowski, M., Hermann, K. M., Anderson, K., Teplyashin, D., ... & Hadsell, R. (2018). Learning to navigate in cities without a map. Advances in neural information processing systems, 31.
Clarity, Quality, Novelty And Reproducibility
Clarity/quality:
As previously stated, the pipeline consists of a number of methods mashed together. Even an overview is missing - Fig 1 describes MuZero modeling, which is a tiny bit of the whole pipeline. IMHO the authors need to test a modern architecture, such as RNGDet, and check if RL improves on top of that. If not, well, RL is not the way to go for this task. Especially looking at the RNGDet results, I do not see compelling evidence the authors would be able to come close to those numbers, even with the complex system they built on top of the (already existing) graphs.
The paper has only minor text issues - e.g., Fig 6. "imporovement" x2, it is well written and structured.
Novelty:
All in all, I do not think this paper in its current form is ICLR-worthy. It is more of an engineering work that struggles to marginally improve the results. Judging from the ablation study, they start from a terrible baseline, which, again, in 2022 is unacceptable. If my calculator is right, the 0.455 APLS is the method without any improvements. Why? If the focus is the RL agent, why not start with the best graphs possible and improve over them? Is there something here that I am missing?
The synthetic dataset could be used to improve the results and show a compelling advantage for RL, but it's not. In fact, it's not even clear how it's used: they claim the generation of a 'difficulty' score and training with LinkNet [a 2017 method], but only Figure 6 supports this claim, and no other attempts to find a meaningful use are described (e.g., is the autoregressive pretraining done on this dataset? What about other ideas, such as style transfer, or using it to train the RL agent? Or is this already done but not mentioned?).
Few recent approaches have attempted to upgrade the algorithms to modern pipelines (e.g., directly output vertices similar to [1*,2*], without intermediate keypoints/road segmentation, as in RNGDet/RNGDet++[3*- this paper is here only for future reference]), it would have been nice to adapt the framework, but no efforts were made.
Reproducibility:
The code is released in the supplementary material, it should be easy to reproduce the experiments.
[1*]Lazarow, J., Xu, W., & Tu, Z. (2022). Instance Segmentation With Mask-Supervised Polygonal Boundary Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4382-4391).
[2*]Liang, J., Homayounfar, N., Ma, W. C., Xiong, Y., Hu, R., & Urtasun, R. (2020). Polytransform: Deep polygon transformer for instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9131-9140).
[3*]Xu, Z., Liu, Y., Sun, Y., Liu, M., & Wang, L. (2022). RNGDet++: Road Network Graph Detection by Transformer with Instance Segmentation and Multi-scale Features Enhancement. arXiv preprint arXiv:2209.10150. |
ICLR | Title
Mastering Spatial Graph Prediction of Road Networks
Abstract
Accurately predicting road networks from satellite images requires a global understanding of the network topology. We propose to capture such high-level information by introducing a graph-based framework that simulates the addition of sequences of graph edges using a reinforcement learning (RL) approach. In particular, given a partially generated graph associated with a satellite image, an RL agent nominates modifications that maximize a cumulative reward. As opposed to standard supervised techniques that tend to be more restricted to commonly used surrogate losses, these rewards can be based on various complex, potentially noncontinuous, metrics of interest. This yields more power and flexibility to encode problem-dependent knowledge. Empirical results on several benchmark datasets demonstrate enhanced performance and increased high-level reasoning about the graph topology when using a tree-based search. We further highlight the superiority of our approach under substantial occlusions by introducing a new synthetic benchmark dataset for this task.
1 INTRODUCTION
Road layout modelling from satellite images constitutes an important task of remote sensing, with numerous applications in navigation. The vast amounts of data available from the commercialization of geospatial data, in addition to the need for accurately establishing the connectivity of roads in remote areas, have led to an increased interest in the precise representation of existing road networks. By nature, these applications require structured data types that provide efficient representations to encode geometry, in this case, graphs, a de facto choice in domains such as computer graphics, virtual reality, gaming, and the film industry. These structured-graph representations are also commonly used to label recent road network datasets (Van Etten et al., 2018) and map repositories (OpenStreetMap contributors, 2017). Based on these observations, we propose a new method for generating predictions directly as spatial graphs, allowing us to explicitly incorporate geometric constraints in the learning process, encouraging predictions that better capture higher-level dataset statistics.
In contrast, existing methods for road layout detection mostly rely on pixel-based segmentation models that are trained on masks produced by rasterizing ground truth graphs. Performing pixelwise segmentation, though, ignores structural features and geometric constraints inherent to the
problem. As a result, minimum differences in the pixel-level output domain can have significant consequences in the proposed graph, in terms of connectivity and path distances, as manifested by the often fragmented outputs obtained after running inference on these models. In order to address these significant drawbacks, we propose a new paradigm where we: (i) directly generate outputs as spatial graphs and (ii) formalize the problem as a game where we sequentially construct the output by adding edges between key points. These key points can in principle come from any off-the-shelf detector that identifies road pieces with sufficient accuracy. Our generation process avoids having to resort to cumbersome post-processing steps (Batra et al., 2019; Montoya-Zegarra et al., 2015) or optimize some surrogate objectives (Máttyus & Urtasun, 2018; Mosinska et al., 2018) whose relation to the desired qualities of the final prediction is disputed. Concurrently, the sequential decision-making strategy we propose enables us to focus interactively on different parts of the image, introducing the notion of a current state and producing reward estimates for a succession of actions. In essence, our method can be considered as a generalization of previous refinement techniques (Batra et al., 2019; Li et al., 2019b) with three major advantages: (i) removal of the requirement for greedy decoding, (ii) ability to attend globally to the current prediction and selectively target parts of the image, and (iii) capacity to train based on demanding task-specific metrics.
More precisely, our contributions are the following:
• We propose a novel generic strategy for training and inference in autoregressive models that removes the requirement of decoding according to a pre-defined order and refines initial sampling probabilities via a tree search.
• We create a new synthetic benchmark dataset of pixel-level accurate labels of overhead satellite images for the task of road network extraction. This gives us the ability to simulate complex scenarios with occluded regions, allowing us to demonstrate the improved robustness of our approach. We plan to release this dataset publicly.
• We confirm the wide applicability of our approach by improving the performance of existing methods on the popular SpaceNet and DeepGlobe datasets.
2 RELATED WORK
Initial attempts to extract road networks mainly revolved around handcrafted features and stochastic geometric models of roads (Barzohar & Cooper, 1996). Road layouts have specific characteristics, regarding radiometry and topology e.g. particular junction distribution, certain general orientation, and curvature (see Fig. 2), that enable their detection even in cases with significant occlusion and uncertainty (Hinz & Baumgartner, 2003). Modern approaches mostly formulate the road extraction task as a segmentation prediction task (Lian et al., 2020; Mattyus et al., 2015; Audebert et al., 2017) by applying models such as Hourglass (Newell et al., 2016) or LinkNet (Chaurasia & Culurciello, 2017). This interpretation has significant drawbacks when evaluated against structural losses, because of discontinuities in the predicted masks. Such shortcomings have been addressed by applying some additional post-processing steps, such as high-order conditional random fields (Niemeyer et al., 2011; Wegner et al., 2013) or by training additional models that refine these initial predictions (Máttyus et al., 2017; Batra et al., 2019). Other common techniques include the optimization of an ensemble of losses. Chu et al. (2019) rely on a directional loss and use non-maximal suppression as a thinning layer, while Batra et al. (2019) calculate orientations of road segments. Although such auxiliary losses somewhat improve the output consistency, the fundamental issue of producing
predictions in the pixel space persists. It remains impossible to overcome naturally occurring road network structures, e.g. crossings of roads in different elevations, see Fig. 3.
Previous failure cases have led to more intuitive conceptualizations of the task. RoadTracer (Bastani et al., 2018) iteratively builds a road network, similar to a depth-first search approach, while Chu et al. (2019) learn a generative model for road layouts and then apply it as a prior on top of a segmentation prediction mask. Proposed graph-based approaches encode the road network directly as a graph, but either operate with a constrained step size (Tan et al., 2020) to generate new vertices or operate in a single step (He et al., 2020; Bandara et al., 2022), involving user-defined thresholding to post-process the final predictions. Most similar to our work, Li et al. (2019b) predict locations of key points and define a specific order for traversing them, as do Xu et al. (2022). Such autoregressive models have recently been successfully applied with the use of transformers (Vaswani et al., 2017) in a range of applications (Nash et al., 2020; Para et al., 2021a;b; Xu et al., 2022) to model constraints between elements, while their supervised training explicitly requires tokens to be processed in a specific order. This specific order, combined with the fact that only a surrogate training objective is used, introduces limitations, discussed further in the next section. In order to eliminate this order requirement and to optimize based on the desired metric, while attending globally to the currently generated graph, we propose to use RL as a suitable alternative.
When generating discrete outputs, an unordered set of edges (Zaheer et al., 2017), it is challenging to adapt existing learning frameworks to train generative models (Para et al., 2021b). Instead of optimizing in the image space, however, we are interested in optimizing spatial structured losses by learning program heuristics, i.e. policies. RL has found success in the past in computer vision applications (Le et al., 2021), but mainly as an auxiliary unit with the goal of improving efficiency (Xu et al., 2021) or as a fine-tuning step (Qin et al., 2018). We instead rely on RL to produce the entire graph exploiting the ability of the framework for more high-level reasoning.
3 METHODOLOGY
We parametrize a road network as a graph G = {V, E} with each vertex vi = [xi, yi]⊤ ∈ V representing a key point on the road surface. The set of edges (vi, vj) ∈ E , corresponds to road segments connecting these key points. We can then generate a probability distribution over roads by following a two-step process: i) generation of a set of vertices and ii) generation of a set of edges connecting them. Formally, for an image I, a road network R is derived as:
\mathcal{R} = \arg\max_{\mathcal{V}, \mathcal{E}} P(\mathcal{V}, \mathcal{E} \mid I) = \arg\max_{\mathcal{V}, \mathcal{E}} P(\mathcal{E} \mid \mathcal{V}, I)\, P(\mathcal{V} \mid I).   (1)
The graph nodes typically correspond to local information in an image, and we therefore resort to a CNN-based model to extract key points, providing the set V ′, that sufficiently captures the information in the ground truth graph G. The construction of edges, however, requires higher-level reasoning that can cope with parallel roads, junctions, occlusions, or poor image resolution, among other difficulties.
Considering probabilistic models over sequences and using the chain rule, we can factorize the joint distribution as the product of a series of conditional distributions
P(\mathcal{E} \mid \mathcal{V}, I; \sigma) = \prod_{n=1}^{N_E} P(e_{\sigma(n)} \mid e_{<\sigma(n)}, \mathcal{V}, I),   (2)
where e<σ(n) represents eσ(1), eσ(2), . . . , eσ(n−1) and σ ∈ SNE denotes the set of all permutations of the integers 1, 2, . . . , NE , with NE the number of edges. For our work, we consider the setting where these sequences are upper bounded in length, i.e. NE ≤ Nmax, a reasonable assumption when dealing with satellite images of fixed size. Autoregressive models (ARMs) have been used to solve similar tasks in the past by defining a fixed order of decoding (Oord et al., 2016; van den Oord et al., 2016; Nash et al., 2020; Para et al., 2021a). In our case, this would correspond to sorting all key points by their x and y locations and generating edges for each of them consecutively. We call this the autoregressive order. There are, however, two major drawbacks.
First, the evaluation metrics used for this task define a buffer region in which nodes in the ground truth and the predicted graph are considered to be a match. Therefore, a newly generated edge can be only partially correct, when only partially overlapping with the ground truth graph. This nonsmooth feedback comes in clear contrast to the supervised training scheme of ARMs, minimization of the negative log-likelihood, that assumes perfect information regarding the key points’ locations, i.e. that the sets V and V ′ are the same. In practice, this condition is rarely met, as the exact spatial graph can be represented in arbitrarily many ways by subdividing long edges into smaller ones or due to small perturbation to key points’ locations. It is thus imperative that our model can estimate the expected improvement of adding selected edges, which implicitly can also signal when to appropriately end the generation process.
Second, the requirement to decode according to the autoregressive order introduces a bias and limits the expressiveness of the model (Uria et al., 2014). As a result, it can lead to failures in cases with blurry inputs or occlusions (Li et al., 2019b). Previous solutions include the use of beam search, either deterministic or stochastic (Meister et al., 2021). Beam search does not however eliminate the bias introduced in the selection order of the key points, while suffering from other deficiencies, such as degenerate repetitions (Holtzman et al., 2019; Fan et al., 2018). In order to address these shortcomings, we advocate for a permutation invariant strategy. We present a novel generic strategy, which improves autoregressive models without requiring significantly more computational cost.
3.1 AUTOREGRESSIVE MODEL
We start by introducing a base autoregressive model, illustrated in Fig. 4. Given an image and a set of key points, our model produces a graph by sequentially predicting a list of indices, corresponding to the graph’s flattened, unweighted edge-list. Each forward pass produces probabilities over the set of key points, which leads to a new action after sampling. A successive pair of indices defines an edge as its two endpoints. A special end-of-sequence token is reserved to designate the end of the generation process.
Following Wang et al. (2018); Smith et al. (2019), we begin by extracting visual features per key point, by interpolating intermediate layers of a ResNet backbone to the key points’ locations, which
are further augmented by position encodings of their locations. We then further process these features using two lightweight Transformer modules. The first transformer (Transformer I in Fig. 4) encodes the features of the key points as embeddings. The second transformer (Transformer II in Fig. 4) takes as input the currently generated edge-list sequence, corresponding to the currently partially generated graph. Edges are directly mapped to the embeddings of their comprising key points, supplemented by position and type embeddings to differentiate between them, as shown in Fig. 5 (a). An additional global image embedding, also extracted by the ResNet, is used to initialize the sequence. The Transformer II module produces a single hidden state, which is linked with the NV′ + 1 key points’ embeddings (corresponding to the provided key points, supplemented by the special end-of-generation token) by a pointer network (Vinyals et al., 2015), via a dot product, to generate the final distribution. This allows a variable number of actions that depends on the current environment state, instead of using a fixed action space.
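To illustrate the per-key-point feature extraction mentioned above, a sketch that bilinearly interpolates a backbone feature map at the key points' locations; the normalization convention and tensor shapes are assumptions.

import torch
import torch.nn.functional as F

def keypoint_features(feature_map, keypoints_xy, image_size):
    # feature_map: (1, C, Hf, Wf); keypoints_xy: (N, 2) float tensor of (x, y) pixel coordinates
    grid = keypoints_xy / (image_size - 1) * 2 - 1        # map to [-1, 1] for grid_sample
    grid = grid.view(1, -1, 1, 2)                         # (1, N, 1, 2)
    sampled = F.grid_sample(feature_map, grid, align_corners=True)
    return sampled.squeeze(-1).squeeze(0).t()             # (N, C)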
3.2 AUGMENTED SEARCH
In order to address the problems of greedy decoding (analysed in Section 3), we frame our road extraction task as a classical Markov decision process (MDP). The generation of a graph for every image defines an environment, where the length of the currently generated edge list determines the current step. Let ot, αt and rt correspond to the observation, the action and the observed reward, respectively, at time step t. The aim is to search for a policy that maximizes the expected cumulative reward over a horizon T, i.e., \max_\pi J(\pi) := \mathbb{E}_\pi\left[ \sum_{t=0}^{T-1} \gamma^t r_t \right], where γ ∈ (0, 1] indicates the discount factor and the expectation is with respect to the randomness in the policy and the transition dynamics. We set the discount factor to 1 due to the assumed bounded time horizon, and we note that although the dynamics of the environment are deterministic, optimizing the reward remains challenging.
Each action leads to the selection of a new key point, with new edges being added once every two actions. The addition of a new edge leads to a revision of the predicted graph and triggers an intermediate reward
r_t = sc(\mathcal{G}_{gt}, \mathcal{G}_{pred_t}) - sc(\mathcal{G}_{gt}, \mathcal{G}_{pred_{t-1}}),   (3)
where sc(Ggt, Gpredt) is a similarity score between the ground truth graph Ggt and the current estimate Gpredt. Discussion of the specific similarity scores used in practice is postponed to Section 3.3. A proper spatial graph generation entails (i) correct topology and (ii) accurate location prediction of individual roads. For the latter, intermediate vertices of degree 2 are essential. We call a road segment (RS) an ordered collection of edges between vertices of degree d(·) two (or a collection of edges forming a circle):
RS = \{(v_{rs_1}, v_{rs_2}), \ldots, (v_{rs_{k-1}}, v_{rs_k})\} \quad \text{s.t.} \quad (v_{rs_i}, v_{rs_{i+1}}) \in \mathcal{E} \ \text{for } i = 1, \ldots, k-1,
d(v_{rs_i}) = 2 \ \text{for } i = 2, \ldots, k-1, \quad \big(d(v_{rs_1}) \neq 2 \ \text{and} \ d(v_{rs_k}) \neq 2\big) \ \text{or} \ v_{rs_1} = v_{rs_k}.
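A sketch of extracting road segments under this definition, walking from every vertex of degree different from two through intermediate degree-two vertices (pure cycles of degree-two vertices are omitted for brevity); g is assumed to be an undirected networkx-style graph.

def road_segments(g):
    segments, visited = [], set()
    endpoints = [v for v in g.nodes() if g.degree(v) != 2]
    for start in endpoints:
        for first in g.neighbors(start):
            if (start, first) in visited:
                continue
            chain = [start, first]
            visited.update({(start, first), (first, start)})
            # Follow the chain through intermediate degree-two vertices.
            while g.degree(chain[-1]) == 2:
                a, b = list(g.neighbors(chain[-1]))
                nxt = b if a == chain[-2] else a
                visited.update({(chain[-1], nxt), (nxt, chain[-1])})
                chain.append(nxt)
            segments.append(list(zip(chain[:-1], chain[1:])))
    return segments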
During the progression of an episode (i.e. the sequential generation of a graph), the topological nature of the similarity scores in Eq. 3 implies that the effect of each new edge to the reward will be reflected mostly once its whole corresponding road segment has been generated. To resolve the ambiguity in the credit assignment and allow our agent to look ahead into sequences of actions, we rely on Monte Carlo Tree Search (MCTS) to simulate entire sequences of actions. We use a state-of-the-art search-based agent, MuZero (Schrittwieser et al., 2020), that constructs a learnable model of the environment dynamics, simulating transitions in this latent representation and leading to significant computational benefits.
Specifically, MuZero requires three distinct parts (see also Fig. 5):
1. A representation function f that creates a latent vector of the current state ht = fθ(ot). For this step, we use the autoregressive model, as shown in Fig. 4. Our current latent representation ht contains the graph’s hidden state, along with the key points’ embeddings used to map actions to latent vectors. As key points remain the same throughout the episode, image-based features (Components (1) and (2) in Fig. 4) are only computed once.
2. A dynamics network g, for which we use a simple LSTM (Hochreiter & Schmidhuber, 1997), that predicts the effect of a new action by predicting the next hidden state and the expected reward: (ĥt, r̂t) = gθ(h̃t−1, αt). We can replace h̃t−1 with the latent representation ht−1, or with its previously computed approximation ĥt−1 for tree searches of depth larger than 1.
3. A prediction network ψ, that estimates the policy and the value for the current state (pt+1, vt) = ψθ(h̃t). We compute the policy via a pointer network, as described in Section 3.1. Value estimates are produced by a simple multi-layer network.
The dynamics network guides the search and evaluates the expected reward of actions. For every newly generated edge, we also explicitly inform the network regarding the creation of new intersections and the expected relative change in the overall road surface generated via embeddings (see Fig. 5). By using the dynamics network, we bypass the expensive call to the decoder module during the search, and can instead approximate small modifications in the latent representation directly. For our experiments, the dynamics network requires up to 90 times fewer floating-point operations to simulate trajectories, compared to using the edge embeddings’ decoder. Effectively, our method does not involve a significantly larger computation budget compared to the base autoregressive model.
3.3 EVALUATION METRICS
We adopt the same evaluation metrics both for comparing different methods and as the incremental rewards for our agent, via Eq. 3. We use the relaxed versions of precision, recall and intersection over union for pixel-level predictions, namely Correctness/Completeness/Quality (CCQ) (Wiedemann et al., 1998; Wang et al., 2016). As graph-theoretic metrics we use APLS (Van Etten et al., 2018) and additionally include new metrics introduced in Citraro et al. (2020) that compare Paths, Junctions and Sub-graphs of the graphs in question, producing respectively precision, recall and f1 scores. More details can be found in Appendix E.1.
4 EXPERIMENTS
Implementation details We resize images to 300 × 300 pixels, standardizing according to the training set statistics. For exploration, we initialize workers using Ray (Moritz et al., 2017) that execute episodes in the environment. For training, we unroll the dynamics function for td = 5 steps and use priority weights for the episode according to the differences between predicted and target values. Our algorithm can be considered as an approximate on-policy TD(λ) (Sutton & Barto, 2018) due to the relatively small replay buffer. We reanalyse older games (Schrittwieser et al., 2020) to provide fresher target estimates. Unvisited graph nodes are selected based on an upper confidence score, balancing exploration with exploitation, similar to Silver et al. (2018). We add exploration noise as Dirichlet noise and select actions based on a temperature-controlled sampling procedure, whose temperature is reduced during training.
Given the limited high-quality available ground truth labels (Singh et al., 2018) and to accelerate training, we employ modifications introduced in EfficientZero (Ye et al., 2021). We investigate adding supervision to the environment model and better initialize Q-value predictions similar to the implementation of Elf OpenGo (Tian et al., 2019). We further scale values and rewards using an
invertible transform inspired by Pohlen et al. (2018). Here, we predict values and rewards as categorical distributions over a discrete support, as fully connected networks are biased towards learning low-frequency representations (Jacot et al., 2018). Selecting new actions involves generating simulations, which can be done expeditiously given the small dimension of the latent space and the modest size of the dynamics network. Finally, to generate key points, we skeletonize segmentation masks provided by any baseline segmentation model, by thresholding the respective segmentation masks produced and applying RDP-simplification (Douglas & Peucker, 1973; Ramer, 1972). Selecting an appropriate threshold and subdividing larger edges guarantees that the generated set V ′ adequately captures most of the ground truth road network, leaving the complexity of the problem for our model to handle.
4.1 SYNTHETIC DATASET
We generate a dataset of overhead satellite images of a synthetic town using CityEngine1. We randomly specify vegetation of varying height and width along the side walks of the generated streets, leading inadvertently to occlusions of varying difficulty. The simulated environment allows specifying pixel-perfect masks regarding both roads and trees occluding the road surface based on the provided camera parameters (Kong et al., 2020). We can hence tune the complexity of the task and quantify the benefits of our approach for varying levels of difficulty. We defer more details regarding the generation process and dataset examples to the supplementary material.
We compare our method against a LinkNet model (Chaurasia & Culurciello, 2017) trained on our dataset, a popular segmentation model that has been widely used in the remote sensing community (Li et al., 2019a). Even in this synthetic and thus less diverse scenario, the tendency of segmentation models to rely mostly on local information, with no explicit ability for longer-range interactions, is evident. Fig. 6 illustrates examples of such over-segmented predictions and how our approach can improve on them. We also define a ’difficulty’ attribute per synthetic satellite image, quantifying the occlusions as the percentage of the ground truth road mask that is covered. We observe a considerable absolute improvement in topological metric scores when training our model on this synthetic dataset, compared to the LinkNet baseline, for varying image difficulty.
4.2 REAL DATASETS
We evaluate our method on the SpaceNet and DeepGlobe datasets. We use the same train-test splits as in Batra et al. (2019) to promote reproducibility, while results are reported for the final combined graph on the original image scale. No pre-training on the synthetic dataset takes place. Further details regarding pre-processing are available in Appendix E.2.
1https://www.esri.com/en-us/arcgis/products/arcgis-cityengine/overview
4.2.1 COMPARISON TO BASELINES
We first verify the effectiveness of the proposed approach under an ideal scenario where the key points conditioned upon correspond to those from the ground truth. In the interest of space, we point the reader to Appendix A and Table 3. Subsequently, we move to the primary task of predicting spatial graphs without the ground-truth graph information, but extract key points via the aforementioned process and train using the described topological metrics directly. The previous baselines are not applicable in this case, due to the lack of ground truth information, so we instead compare against the following: we explore powerful CNN architectures by training a segmentation model with a ResNet backbone. We evaluate DeepRoadMapper (Máttyus et al., 2017), a model that refines previous segmentations by adding connections along possible identified paths. As done by Batra et al. (2019), we notice that in complex scenarios the effect of this post-processing step is not always positive. We also evaluate against LinkNet (Chaurasia & Culurciello, 2017) and Orientation (Batra et al., 2019), which is trained to predict road surfaces and orientations simultaneously.
Quantitative results in Table 1 and visual inspection in Table 2 affirm that the global context and the gradual generation incite a better understanding of the scene, leading to topological metric results that consistently outperform the baselines. We remark that our predictions are more topologically consistent, with fewer shortcomings such as double roads, fragmented roads, and over-connections. This is further supplemented by comparing the statistics of the predicted spatial graphs in Fig. 7. We further showcase the transferability of our model by employing it with no fine-tuning (apart from dataset-specific image normalization) on the DeepGlobe dataset. We can refine previous predictions by adding missing edges, leading to more accurate spatial graph predictions, as shown in Table 1. This confirms our conjecture that road structures and geometric patterns are repeated across diverse cities’ locations.
4.2.2 ABLATION STUDY
We experimented with attending to image features in the two transformer modules by extracting per-patch visual features from the conditioning image, H^img = [h^img_1, h^img_2, . . .], as done in the Vision Transformer (Dosovitskiy et al., 2020). This did not lead to significant improvements, which we attribute to over-fitting. In Fig. 8 we highlight the relative importance of some additional components for the final predictions. As efficiency is also of particular importance to us, we further visualize the effect of varying the simulation depth of the dynamics network during training. Surprisingly
perhaps, our method performs consistently better than baselines, even for a small overall simulation length, as this already enables better policy approximations.
In Appendix A we provide incremental results for the task of predicting road networks based on an optimal set of key points. In Appendix B we provide insights concerning interpretability and a further comparison to baselines based on the varying difficulty of the predicted underlying road networks. In Appendix C we give more information regarding the generation of the synthetic dataset, while in Appendix D we detail the model architecture. Finally, in Appendix E we describe further implementation decisions, including details on exactly how key points are generated and how individual patch-level predictions are fused together. More examples of full environment trajectories are given in Appendix F. We stress that our method can act on partially initialized predictions, registering it also as a practical refinement approach on top of any baseline. Initializing our model according to the ARM model allows a moderately quick fine-tuning phase. In combination with the learned environment model, which circumvents expensive calls to the edge embedding model for each simulation step in the MCTS, this allows us to train even on a single GPU.
5 CONCLUSIONS
We presented a novel reinforcement learning framework for generating a graph as a variable-length edge sequence, where a structured-aware decoder selects new edges by simulating action sequences into the future. Importantly, this allows the model to better capture the geometry of the targeted objects. This is confirmed by our experimental results, since our approach consistently produces more faithful graph statistics. One advantage of the proposed method is that the reward function is based on (non-continuous) metrics that are directly connected to the application in question. Our approach does not require significantly more computational resources compared to state-of-the-art supervised approaches, while in addition, it can be used to refine predictions from another given model. We also remark that the direct prediction of a graph enables the concurrent prediction of meta-information about the edges, including, for instance, the type of road (highway, primary or secondary street, biking lane, etc).
Our approach opens the door to several directions for future work. For example, we have assumed that a pre-defined model gives the location of key points, but one could instead augment the action space to propose new key points’ locations. Other promising directions include the direct prediction of input-dependent graph primitives, e.g. T-junctions or roundabouts. Finally, we emphasize that our approach is suitable to a wide variety of applications where autoregressive models are typically used, and it is of special interest when there is a need for complex interactions or constraints between different parts of an object.
6 REPRODUCIBILITY STATEMENT
We have taken multiple steps to ensure reproducibility of the experiments. We refer the reader to Appendix E for a complete description of the training protocol. We have also released the code as part of the supplementary material, including scripts on how to reproduce our results.
A MORE EXPERIMENTS
We first assess the performance of our proposed method in an ideal scenario where the key points, correspond to the ones from the ground truth. To hinder training and inference, we insert additional key points as (1) random intermediate points between known edges and (2) randomly sampled locations in the images. Here, our assumption in Section 3 that the set V ′ suffices to generate the ground truth graph, holds by construction. We compare our method against several baselines that learn to connect edges between key points, using the same feature extraction pipeline, described in Section 3.1, as our model. Cls is a classification network that predicts for all pairs of key points a value {0, 1} corresponding to the existence of an edge. GCN implements a graph neural network that predicts directly the adjacency matrix. We also present an autoregressive version of our model ARM, that is trained with cross-entropy loss to predict the pre-defined ordered sequence of key points. We use this model to initialize ours. Results are presented in Table 3.
As expected, the ARM model achieves a low perplexity score when evaluated against the corresponding sequence, ordered according to the autoregressive order, but suffers in predicting the edges when in random order. The ARM underperforms because of frequent early terminations and the implicit inability to revisit key points, as far as the desired final metric (here APLS) is concerned. Even though our model is developed upon this autoregressive model, it generates tokens in an arbitrary arrangement. Reward and value estimates enable a different training scheme that deeply correlates with the desired objective.
B INTERPRETABILITY
We visualize attention (of the Transformer II module), using the attention flow proposed in Abnar & Zuidema (2020), in Fig. 9. To create attention scores per edge, we aggregate scores for the pair of tokens that define each edge. New predictions pay increased attention to already generated junctions, parallel road segments, and other edges belonging to the same road segment.
We also compare APLS results achieved by varying the difficulty of the ground truth images in terms of the total number of junctions (vertices with a degree greater than 2) and in terms of the average length of road segments that are present, in Fig. 10. Our method explicitly captures information re-
garding the degree of the key points during the search, while it can encode better global information, even across larger distances. It is not a surprise perhaps then, that it outperforms the baselines more convincingly as the difficulty of the ground truth road network increases.
Finally, we visualize an example of an imagined rollout trajectory at a single step of our algorithm in Fig. 11. During a single inference step, our method uses tree search to look ahead into sequences of actions in the future. For our example, we have chosen a relatively smaller number of simulations (10) for better visual inspection. We also show the corresponding environment states reached, which are, however, not explicitly available to the model, as it is searching and planning using a learned model of the environment.
C DATASET CREATION
We use CityEngine, a 3D modelling software for creating immersive urban environments. We generate a simple road network and apply a rural city texture on the created city blocks, provided by Kong et al. (2020). We then uniformly generate trees of varying height and size along the side walks of the generated streets. We then iteratively scan the generated city by passing a camera of specific orientation and height. We repeat the same process after suitable modifications to the texture, for the generation of the street masks, as well as the vegetation masks, that correspond to only the plants along the side walks. Some examples of the generated images are provided in Fig. 12. We note that additional occlusion can be caused by the relation of the camera with the 3D meshes corresponding to buildings. These occlusions are, however, not captured by our generated masks, and we can expect them to contribute partially to the fragmented segmentation results.
We train a segmentation-based model, LinkNet, as our baseline. We rasterize the ground truth graph to create pixel-level labels and train by maximizing the intersection over union, which is commonly done in practice. We note that there is a tradeoff between the nature of the predictions and the choice of the line-width with which the ground truth graph is rasterized. A large width achieves better results in terms of connectivity of the predicted graph but results in poorer accuracy in the final key points’ locations. Furthermore, when providing a large width, areas in the image with more uncertainty, e.g. vegetation that is not above a road segment, are also predicted as road networks with high certainty, leading to spurious, disconnected road segments. To highlight the advantages of our method compared to this baseline and in order to promote more meaningful predictions, we select a relatively smaller width.
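For illustration, the following is a minimal sketch of how a ground-truth spatial graph could be rasterized into a pixel-level training mask with a configurable line width; the function name, the OpenCV-based drawing, and the default width are our own illustrative choices rather than the exact training pipeline.

```python
import numpy as np
import cv2

def rasterize_graph(edges, image_shape, line_width=3):
    """Rasterize a spatial graph into a binary road mask.

    edges:       iterable of ((x1, y1), (x2, y2)) end points in pixel coordinates.
    image_shape: (height, width) of the target mask.
    line_width:  drawing thickness; larger values favour connectivity of the
                 rasterized labels, smaller values favour key point accuracy.
    """
    mask = np.zeros(image_shape, dtype=np.uint8)
    for (x1, y1), (x2, y2) in edges:
        cv2.line(mask,
                 (int(round(x1)), int(round(y1))),
                 (int(round(x2)), int(round(y2))),
                 color=1, thickness=line_width)
    return mask
```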
D ARCHITECTURE DETAILS
As an image backbone model, we use a ResNet-18 for the synthetic dataset and a ResNet-50 for the real dataset experiments. We extract features at four different scales, after each of the 4 ResNet layers. To extract features for each key point, we interpolate the backbone feature maps based on the key points’ locations. We use different learned embeddings based on the actual key points’ locations. For the key points embedding model, we use a transformer encoder with 16 self-attention layers and a dropout rate of 0.15. We use layer normalization and GELU activation functions.
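A minimal sketch of the per-key-point feature extraction described above, using bilinear interpolation of a backbone feature map at the key points' locations; the helper name and tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def sample_keypoint_features(feature_map, keypoints, image_size):
    """Bilinearly interpolate a backbone feature map at key point locations.

    feature_map: (1, C, Hf, Wf) tensor from one ResNet stage.
    keypoints:   (N, 2) tensor of (x, y) coordinates in input-image pixels.
    image_size:  (H, W) of the input image, used to normalise coordinates.
    Returns an (N, C) tensor of per-key-point features.
    """
    h, w = image_size
    grid = keypoints.clone().float()
    # grid_sample expects (x, y) coordinates normalised to [-1, 1].
    grid[:, 0] = grid[:, 0] / (w - 1) * 2 - 1
    grid[:, 1] = grid[:, 1] / (h - 1) * 2 - 1
    grid = grid.view(1, 1, -1, 2)                                 # (1, 1, N, 2)
    sampled = F.grid_sample(feature_map, grid,
                            mode='bilinear', align_corners=True)  # (1, C, 1, N)
    return sampled.squeeze(0).squeeze(1).transpose(0, 1)          # (N, C)
```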
For the edge-embeddings model, we use the respective key points embedding, along with learned position and type embeddings, which we all sum together. As aforementioned, we can initialize the current edge sequence based on previous predictions, allowing our model to refine any initial prediction provided. Again, we use the same transformer architecture with 16 self-attention layers, and a dropout rate of 0.15.
Finally, the architecture of the dynamics network and the value prediction network are shown in Fig. 13. For the value estimation, we also provide the current environment step, as we execute steps in an environment with a bounded time horizon.
E IMPLEMENTATION DETAILS
E.1 EVALUATION METRICS
APLS (Van Etten et al., 2018) constitutes a graph theoretic metric that faithfully describes routing properties. APLS is defined as
$$\mathrm{APLS} = 1 - \frac{1}{N_p} \sum_{p_{v_1 v_2} < \infty} \min\left\{1, \frac{|p_{v_1 v_2} - p_{v_1' v_2'}|}{p_{v_1 v_2}}\right\}, \qquad (4)$$
where v and v′ denote a source node and its closest point on the predicted graph if such exists within a buffer. Np denotes the number of paths sampled and pv1v2 the length of the shortest path between two nodes. Similarly, the Too Long Too Short (TLTS) metric (Wegner et al., 2013) compares lengths of the shortest paths between randomly chosen points of the two graphs, classifying them as infeasible, correct, or too-long or too-short (2l+2s) if the length of the path on the predicted graph does not differ by more than a threshold (5%) compared to the ground truth path. Since small perturbations to the predicted graph can have larger implications to pixel-level predictions, the definitions of precision, recall and intersection over union were relaxed in Wiedemann et al. (1998); Wang et al. (2016) leading to the metrics Correctness/Completeness/Quality (CCQ).
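As a rough illustration of Eq. 4, the sketch below evaluates APLS over a sample of node pairs using networkx shortest paths; the snapping of ground-truth nodes to the predicted graph is assumed to be pre-computed (the `match` mapping), and the path sampling is simplified compared to the reference implementation.

```python
import networkx as nx
import numpy as np

def apls(gt, pred, pairs, match):
    """Approximate APLS (Eq. 4) over a sample of ground-truth node pairs.

    gt, pred: networkx graphs whose edges carry a 'length' attribute.
    pairs:    sampled (v1, v2) node pairs from the ground-truth graph.
    match:    dict mapping each ground-truth node to its closest predicted
              node within the buffer, or None when no match exists.
    """
    penalties = []
    for v1, v2 in pairs:
        if not nx.has_path(gt, v1, v2):
            continue  # only finite ground-truth paths contribute
        p_gt = nx.shortest_path_length(gt, v1, v2, weight='length')
        m1, m2 = match.get(v1), match.get(v2)
        if m1 is None or m2 is None or not nx.has_path(pred, m1, m2):
            penalties.append(1.0)  # missing path: maximal penalty
        else:
            p_pred = nx.shortest_path_length(pred, m1, m2, weight='length')
            penalties.append(min(1.0, abs(p_gt - p_pred) / p_gt))
    return 1.0 - float(np.mean(penalties)) if penalties else 0.0
```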
Still, some types of errors, such as double roads or over-connections, are not penalized by the above metrics (Citraro et al., 2020). We therefore additionally include new metrics introduced in Citraro et al. (2020) that compare Paths, Junctions and Sub-graphs of the graphs in question, producing respectively precision, recall and f1 scores. For the final similarity score used in Eq. 3, we use a linear combination of the aforementioned metrics; more details are available in the supplementary material.
E.2 DATASET INFORMATION
We use the following datasets to train our models, i.e. baselines and our newly proposed RL agent.
SpaceNet (Van Etten et al., 2018) includes a road network of over 8000 km over four different cities: Vegas, Paris, Shanghai, and Khartoum, where the complexity, quality, and regularity of the road network depend on the city of origin. Satellite images are provided at a pixel resolution of 1300 × 1300, corresponding to a ground resolution of 30cm per pixel. We split the 2780 total images into crops of size 400 × 400 with an overlap of 100 pixels for training. To better highlight the diversity of the satellite images from these four different locations, we have included some randomly sampled examples in Fig. 14.
DeepGlobe (Demir et al., 2018) contains satellite images from 3 different locations with pixel-level annotations. Images have a resolution of 1024 × 1024, with a ground resolution of 50cm per pixel. We crop the 6226 images into tiles, leading to a similar ground truth resolution per pixel compared to SpaceNet.
E.3 TRAINING DETAILS
At each MCTS search step, we perform several simulations from the root state s0 for a number of steps k = 1, . . . and select an action that maximizes the upper confidence bound (Silver et al., 2018),
$$a^k = \arg\max_{a} \left[\, Q(s, a) + P(s, a)\, \frac{\sqrt{\sum_b N(s, b)}}{1 + N(s, a)} \left( c_1 + \log\left(\frac{\sum_b N(s, b) + c_2 + 1}{c_2}\right) \right) \right],$$
where $N(s, a)$, $Q(s, a)$, $P(s, a)$ correspond to the visit counts, mean values and policies, as calculated by the current search statistics. Constants $c_1$, $c_2$ balance exploration and exploitation. Based on a state $s^{k-1}$ and a selected action $a^k$, a new state $s^k$ and reward $\hat{r}^k$ are estimated through the dynamics network. We update the mean values based on bootstrapped values of the estimated value functions and rewards. We experimented with training the reward and value support predictions with both mean squared error (MSE) and cross-entropy loss. We opted for MSE because of its stability. For a more in-depth description of the training scheme of MuZero we recommend Schrittwieser et al. (2020) and Ye et al. (2021).
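A compact sketch of the action selection rule above; the per-action search statistics are assumed to be stored in a `node.stats` dictionary, and the constants follow common MuZero defaults rather than our exact configuration.

```python
import math

def select_action(node, c1=1.25, c2=19652.0):
    """Pick the child action maximising the pUCT score used during the search.

    node.stats maps every legal action to a dict with keys 'N' (visit count),
    'Q' (mean value) and 'P' (prior probability). The constants c1 and c2
    follow common MuZero defaults and are assumptions, not our exact values.
    """
    total_visits = sum(s['N'] for s in node.stats.values())
    exploration = c1 + math.log((total_visits + c2 + 1) / c2)

    def puct(action):
        s = node.stats[action]
        return s['Q'] + s['P'] * math.sqrt(total_visits) / (1 + s['N']) * exploration

    return max(node.stats, key=puct)
```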
As hinted in the main text, we train using intermediate rewards, a linear combination of topological metrics. We experimented using a variety of different scores and metrics, but ended up using APLS, Path-based f1, Junction-based f1 and Sub-graph-based f1 at a relative scale of (0.35, 0.25, 0.25, 0.15). We found the Sub-graph-based f1 to be more sensitive to small perturbations and therefore weighted it less in the final combination. The metrics mentioned above are highly correlated, as examined in Batra et al. (2019). This correlation, though, holds when comparing the final predictions. Intermediate incremental rewards are more independent, so we still found it useful to use a mixture of them. Initially, to let our network learn basic stable rewards, we use the segmentation prediction mask as target. That means that we train our model to predict the graph that can be extracted after post-processing the segmentation model’s prediction.
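The following sketch illustrates how the incremental reward of Eq. 3 can be assembled from the metric mixture described above; the metric callables are placeholders for the implementations referenced in Appendix E.1.

```python
# Placeholder metric callables stand in for the implementations of
# Appendix E.1; the weights follow the relative scale stated above.
METRIC_WEIGHTS = {
    'apls': 0.35,
    'path_f1': 0.25,
    'junction_f1': 0.25,
    'subgraph_f1': 0.15,
}

def similarity(gt_graph, pred_graph, metrics):
    """Weighted similarity score sc(G_gt, G_pred) used in Eq. 3."""
    return sum(weight * metrics[name](gt_graph, pred_graph)
               for name, weight in METRIC_WEIGHTS.items())

def incremental_reward(gt_graph, prev_pred, curr_pred, metrics):
    """r_t = sc(G_gt, G_pred_t) - sc(G_gt, G_pred_{t-1})."""
    return (similarity(gt_graph, curr_pred, metrics)
            - similarity(gt_graph, prev_pred, metrics))
```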
After pre-training the autoregressive model, we experimented with fine-tuning using RL with two different learning rates, where a slower rate, by a factor in (0, 1], was chosen for the pre-trained modules. Here, we noticed that the model still performed better than the ARM baseline. As it has trouble escaping the autoregressive order, though, results are less optimal compared to the single learning rate model.
We finally note that by avoiding type and position encoding in the Transformer II module, we can ensure the embedded graph is permutation invariant regarding the sequence of edges and the order of key points within an edge. Our search graph can then be formulated into a directed acyclic graph, circumventing unnecessary division of the search space (Browne et al., 2012; Childs et al., 2008), enabling more efficient sampling (Saffidine et al., 2012). These updated search statistics are cumbersome to compute, though, and we found no significant efficiency improvement. They do, however, confirm our model’s potential ability to handle the input graph as an unordered set, as the problem suggests.
E.4 PRODUCING KEY POINTS
We initially train a segmentation model for predicting pixel-level accurate masks of the road network. For this step, we can use any model from the literature. We extract the predicted graph by
skeletonizing the predicted mask and simplifying the graph with a smoothing threshold. We then sample intermediate vertices along the largest edges, in terms of ground length, to enlarge the action space. We illustrate a toy example of such a process in Fig. 15. To accelerate inference, we can also initialize our prediction graph based on the provided segmentation mask. In such a case, our method closer resembles previous refinement approaches. We additionally remove edges of connected components with small overall size and edges belonging to road segments leading to dead ends (that means vertices of degree one), keeping though the corresponding key points in the environment state. Thus, if our model deems the existence of the respective edges necessary, it can add them once more. We plan to further investigate augmenting the action space with the ability to remove edges in future work, which would not require such a pre-processing strategy.
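A simplified sketch of this key point extraction step, assuming a probability mask from the segmentation model; the threshold, the maximum edge length, and the helper names are illustrative, and the conversion of the skeleton into polylines with RDP simplification is omitted.

```python
import numpy as np
from skimage.morphology import skeletonize

def skeleton_pixels(prob_mask, threshold=0.3):
    """Threshold a road probability mask and return its skeleton pixels.

    Converting the skeleton into polylines and applying RDP simplification
    is delegated to helper code that is omitted here.
    """
    skeleton = skeletonize(prob_mask > threshold)
    return np.argwhere(skeleton)  # (row, col) coordinates

def subdivide_edge(p1, p2, max_edge_len=30.0):
    """Insert intermediate key points along edges longer than max_edge_len,
    enlarging the action space as described above."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    n_segments = max(1, int(np.ceil(np.linalg.norm(p2 - p1) / max_edge_len)))
    return [tuple(p1 + (p2 - p1) * t) for t in np.linspace(0.0, 1.0, n_segments + 1)]
```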
E.5 COMBINING PREDICTIONS
When creating the final per image prediction, we initially simply generated predictions on non-overlapping patches and fused them together. To overcome small pixel location differences in the predicted graphs, we fuse by rasterizing the individual graphs in the pixel domain with a line width larger than 1. What we found more successful was to perform inference on overlapping patches and to initialize the currently predicted graph based on the predictions made so far. This is particularly useful, as road segments are often close to the boundaries of our cropped image. Individual inference and simple fusion can often lead to over-connected predictions. We visualize a toy example of such a process in Fig. 16.
For the segmentation baselines, unless specified in their respective documentation, we perform inference by cropping images to overlapping patches and normalizing the final predicted mask based on the number of overlapping predictions per pixel location. We also pad images around their boundary, as done in Acuna et al. (2019). We note some small differences in the final scores for the Orientation model (Batra et al., 2019) and the SpaceNet dataset, compared to the ones in Citraro et al. (2020). We assume these are an outcome of different chosen parameters for the calculation of metrics. We keep these parameters fixed when calculating scores for all methods.
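A minimal sketch of the overlapping-patch inference with per-pixel normalization mentioned above; the patch size, stride, and the omission of boundary padding are illustrative choices.

```python
import numpy as np

def sliding_window_inference(image, model, patch=400, stride=300):
    """Run a segmentation model on overlapping crops and normalise the
    per-pixel prediction by the number of overlapping patches.

    image: (H, W, C) array; model: callable mapping a crop to a (patch, patch)
    probability map. Boundary padding, as used in practice, is omitted and the
    image is assumed to be fully covered by the chosen patch size and stride.
    """
    h, w = image.shape[:2]
    probs = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            probs[y:y + patch, x:x + patch] += model(image[y:y + patch, x:x + patch])
            counts[y:y + patch, x:x + patch] += 1
    return probs / np.maximum(counts, 1)
```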
E.6 MORE COMPARISONS WITH BASELINES
We elaborate more on the evaluation against Sat2Graph. The authors provided predictions corresponding only to a center crop of the original SpaceNet dataset images: for each 400 × 400 pixel image, predictions are made for the center 352 × 352 area. One could expect slightly better results if our model were trained under the same conditions, but the gap still seems large enough to show the merits of our approach.
Other baselines like Neural Turtle Graphics (Chu et al., 2019) and Topological Map Extraction (Li et al., 2019b) do not have an implementation available. We do not compare against VecRoad (Tan et al., 2020) or RoadTracer (Bastani et al., 2018), as different datasets were used for the current evaluations. These baselines, though, have already been shown in the literature to underperform methods that we compare against.
F MORE EXAMPLES
We showcase in Fig. 17 and Fig. 18 more examples of the environment state progression, for the synthetic dataset. | 1. What is the focus and contribution of the paper on reinforcement learning for graph generation?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and comprehensive description?
3. What are the weaknesses of the paper, especially regarding the significance and broad applicability of the use case?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper introduces a reinforcement learning framework for generating a graph as a variable-length edge sequence, where a structured-aware decoder selects new edges by simulating action sequences into the future. It reviews related works, explains the methodology, and presents the settings and results of experiments. The paper is followed by the Appendix with more details about the architecture, implementation, dataset creation and interpretability. There are also supplementary materials with the code which enable the reproducibility of experiments.
Strengths And Weaknesses
Strengths:
the introduced methodology seems to be quite novel
the description of the methodology and experiments is comprehensive and the writing is quite good and clear
supplementary materials with the code which enable the reproducibility of experiments
Weaknesses:
I have doubts regarding the importance of the use case (road network modelling from images) because nowadays, digital descriptions of road networks are generally available; but there are other works dealing with this topic and perhaps it's a good use case to demonstrate the introduced methodology. However, it would be good to demonstrate that the method can be useful in some other applications too.
Clarity, Quality, Novelty And Reproducibility
The quality of the presentation part is good and the writing is clear. The quality from the methodological perspective seems to be good too. The idea seems to be novel. Supplementary materials with the code enable the reproducibility of experiments. |
ICLR | Title
Mastering Spatial Graph Prediction of Road Networks
Abstract
Accurately predicting road networks from satellite images requires a global understanding of the network topology. We propose to capture such high-level information by introducing a graph-based framework that simulates the addition of sequences of graph edges using a reinforcement learning (RL) approach. In particular, given a partially generated graph associated with a satellite image, an RL agent nominates modifications that maximize a cumulative reward. As opposed to standard supervised techniques that tend to be more restricted to commonly used surrogate losses, these rewards can be based on various complex, potentially noncontinuous, metrics of interest. This yields more power and flexibility to encode problem-dependent knowledge. Empirical results on several benchmark datasets demonstrate enhanced performance and increased high-level reasoning about the graph topology when using a tree-based search. We further highlight the superiority of our approach under substantial occlusions by introducing a new synthetic benchmark dataset for this task.
1 INTRODUCTION
Road layout modelling from satellite images constitutes an important task of remote sensing, with numerous applications, including navigation. The vast amounts of data available from the commercialization of geospatial data, in addition to the need for accurately establishing the connectivity of roads in remote areas, have led to an increased interest in the precise representation of existing road networks. By nature, these applications require structured data types that provide efficient representations to encode geometry, in this case, graphs, a de facto choice in domains such as computer graphics, virtual reality, gaming, and the film industry. These structured-graph representations are also commonly used to label recent road network datasets (Van Etten et al., 2018) and map repositories (OpenStreetMap contributors, 2017). Based on these observations, we propose a new method for generating predictions directly as spatial graphs, allowing us to explicitly incorporate geometric constraints in the learning process, encouraging predictions that better capture higher-level dataset statistics.
In contrast, existing methods for road layout detection mostly rely on pixel-based segmentation models that are trained on masks produced by rasterizing ground truth graphs. Performing pixel-wise segmentation, though, ignores structural features and geometric constraints inherent to the
problem. As a result, minimum differences in the pixel-level output domain can have significant consequences in the proposed graph, in terms of connectivity and path distances, as manifested by the often fragmented outputs obtained after running inference on these models. In order to address these significant drawbacks, we propose a new paradigm where we: (i) directly generate outputs as spatial graphs and (ii) formalize the problem as a game where we sequentially construct the output by adding edges between key points. These key points can in principle come from any off-the-shelf detector that identifies road pieces with sufficient accuracy. Our generation process avoids having to resort to cumbersome post-processing steps (Batra et al., 2019; Montoya-Zegarra et al., 2015) or optimize some surrogate objectives (Máttyus & Urtasun, 2018; Mosinska et al., 2018) whose relation to the desired qualities of the final prediction is disputed. Concurrently, the sequential decision-making strategy we propose enables us to focus interactively on different parts of the image, introducing the notion of a current state and producing reward estimates for a succession of actions. In essence, our method can be considered as a generalization of previous refinement techniques (Batra et al., 2019; Li et al., 2019b) with three major advantages: (i) removal of the requirement for greedy decoding, (ii) ability to attend globally to the current prediction and selectively target parts of the image, and (iii) capacity to train based on demanding task-specific metrics.
More precisely, our contributions are the following:
• We propose a novel generic strategy for training and inference in autoregressive models that removes the requirement of decoding according to a pre-defined order and refines initial sampling probabilities via a tree search.
• We create a new synthetic benchmark dataset of pixel-level accurate labels of overhead satellite images for the task of road network extraction. This gives us the ability to simulate complex scenarios with occluded regions, allowing us to demonstrate the improved robustness of our approach. We plan to release this dataset publicly.
• We confirm the wide applicability of our approach by improving the performance of existing methods on the popular SpaceNet and DeepGlobe datasets.
2 RELATED WORK
Initial attempts to extract road networks mainly revolved around handcrafted features and stochastic geometric models of roads (Barzohar & Cooper, 1996). Road layouts have specific characteristics regarding radiometry and topology, e.g. particular junction distributions, a certain general orientation, and curvature (see Fig. 2), that enable their detection even in cases with significant occlusion and uncertainty (Hinz & Baumgartner, 2003). Modern approaches mostly formulate the road extraction task as a segmentation prediction task (Lian et al., 2020; Mattyus et al., 2015; Audebert et al., 2017) by applying models such as Hourglass (Newell et al., 2016) or LinkNet (Chaurasia & Culurciello, 2017). This interpretation has significant drawbacks when evaluated against structural losses, because of discontinuities in the predicted masks. Such shortcomings have been addressed by applying some additional post-processing steps, such as high-order conditional random fields (Niemeyer et al., 2011; Wegner et al., 2013) or by training additional models that refine these initial predictions (Máttyus et al., 2017; Batra et al., 2019). Other common techniques include the optimization of an ensemble of losses. Chu et al. (2019) rely on a directional loss and use non-maximal suppression as a thinning layer, while Batra et al. (2019) calculate orientations of road segments. Although such auxiliary losses somewhat improve the output consistency, the fundamental issue of producing
predictions in the pixel space persists. It remains impossible to overcome naturally occurring road network structures, e.g. crossings of roads in different elevations, see Fig. 3.
Previous failure cases have led to more intuitive conceptualizations of the task. Roadtracer (Bastani et al., 2018) iteratively builds a road network, similar to a depth-first search approach, while Chu et al. (2019) learn a generative model for road layouts and then apply it as a prior on top of a segmentation prediction mask. Proposed graph-based approaches encode the road network directly as a graph, but either operate based on a constrained step-size (Tan et al., 2020) to generate new vertices or operate in a single step (He et al., 2020; Bandara et al., 2022), involving user-defined thresholding to post-process the final predictions. Most similar to our work, Li et al. (2019b) predict locations of key points and define a specific order traversing them, as do Xu et al. (2022). Such autoregressive models have been recently successfully applied with the use of transformers (Vaswani et al., 2017) in a range of applications (Nash et al., 2020; Para et al., 2021a;b; Xu et al., 2022) to model constraints between elements, while their supervised training explicitly requires tokens to be processed in a specific order. This specific order, combined with the fact that only a surrogate training objective is used, introduces limitations, discussed further in the next section. In order to eliminate this order requirement and to optimize based on the desired metric, while attending globally to the currently generated graph, we propose to use RL as a suitable alternative.
When generating discrete outputs, an unordered set of edges (Zaheer et al., 2017), it is challenging to adapt existing learning frameworks to train generative models (Para et al., 2021b). Instead of optimizing in the image space, however, we are interested in optimizing spatial structured losses by learning program heuristics, i.e. policies. RL has found success in the past in computer vision applications (Le et al., 2021), but mainly as an auxiliary unit with the goal of improving efficiency (Xu et al., 2021) or as a fine-tuning step (Qin et al., 2018). We instead rely on RL to produce the entire graph exploiting the ability of the framework for more high-level reasoning.
3 METHODOLOGY
We parametrize a road network as a graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$, with each vertex $v_i = [x_i, y_i]^\top \in \mathcal{V}$ representing a key point on the road surface. The set of edges $(v_i, v_j) \in \mathcal{E}$ corresponds to road segments connecting these key points. We can then generate a probability distribution over roads by following a two-step process: i) generation of a set of vertices and ii) generation of a set of edges connecting them. Formally, for an image $I$, a road network $\mathcal{R}$ is derived as:
$$\mathcal{R} = \underset{\mathcal{V}, \mathcal{E}}{\arg\max}\; P(\mathcal{V}, \mathcal{E} \mid I) = P(\mathcal{E} \mid \mathcal{V}, I)\, P(\mathcal{V} \mid I). \qquad (1)$$
The graph nodes typically correspond to local information in an image, and we therefore resort to a CNN-based model to extract key points, providing the set V ′, that sufficiently captures the information in the ground truth graph G. The construction of edges, however, requires higher-level reasoning that can cope with parallel roads, junctions, occlusions, or poor image resolution, among other difficulties.
Considering probabilistic models over sequences and using the chain rule, we can factorize the joint distribution as the product of a series of conditional distributions
$$P(\mathcal{E} \mid \mathcal{V}, I; \sigma) = \prod_{n=1}^{N_E} P\big(e_{\sigma(n)} \mid e_{<\sigma(n)}, \mathcal{V}, I\big), \qquad (2)$$
where $e_{<\sigma(n)}$ represents $e_{\sigma(1)}, e_{\sigma(2)}, \dots, e_{\sigma(n-1)}$, and $\sigma \in S_{N_E}$ is a permutation from the set of all permutations of the integers $1, 2, \dots, N_E$, with $N_E$ the number of edges. For our work, we consider the setting where these sequences are upper bounded in length, i.e. $N_E \leq N_{\max}$, a reasonable assumption when dealing with satellite images of fixed size. Autoregressive models (ARMs) have been used to solve similar tasks in the past by defining a fixed order of decoding (Oord et al., 2016; van den Oord et al., 2016; Nash et al., 2020; Para et al., 2021a). In our case, this would correspond to sorting all key points by their x and y locations and generating edges for each of them consecutively. We call this the autoregressive order. There are, however, two major drawbacks.
First, the evaluation metrics used for this task define a buffer region in which nodes in the ground truth and the predicted graph are considered to be a match. Therefore, a newly generated edge can be only partially correct, when only partially overlapping with the ground truth graph. This nonsmooth feedback comes in clear contrast to the supervised training scheme of ARMs, minimization of the negative log-likelihood, that assumes perfect information regarding the key points’ locations, i.e. that the sets V and V ′ are the same. In practice, this condition is rarely met, as the exact spatial graph can be represented in arbitrarily many ways by subdividing long edges into smaller ones or due to small perturbation to key points’ locations. It is thus imperative that our model can estimate the expected improvement of adding selected edges, which implicitly can also signal when to appropriately end the generation process.
Second, the requirement to decode according to the autoregressive order introduces a bias and limits the expressiveness of the model (Uria et al., 2014). As a result, it can lead to failures in cases with blurry inputs or occlusions (Li et al., 2019b). Previous solutions include the use of beam search, either deterministic or stochastic (Meister et al., 2021). Beam search does not however eliminate the bias introduced in the selection order of the key points, while suffering from other deficiencies, such as degenerate repetitions (Holtzman et al., 2019; Fan et al., 2018). In order to address these shortcomings, we advocate for a permutation invariant strategy. We present a novel generic strategy, which improves autoregressive models without requiring significantly more computational cost.
3.1 AUTOREGRESSIVE MODEL
We start by introducing a base autoregressive model, illustrated in Fig. 4. Given an image and a set of key points, our model produces a graph by sequentially predicting a list of indices, corresponding to the graph’s flattened, unweighted edge-list. Each forward pass produces probabilities over the set of key points, which leads to a new action after sampling. A successive pair of indices defines an edge as its two endpoints. A special end-of-sequence token is reserved to designate the end of the generation process.
Following Wang et al. (2018); Smith et al. (2019), we begin by extracting visual features per key point, by interpolating intermediate layers of a ResNet backbone to the key points’ locations, which
are further augmented by position encodings of their locations. We then further process these features using two lightweight Transformer modules. The first transformer (Transformer I in Fig. 4) encodes the features of the key points as embeddings. The second transformer (Transformer II in Fig. 4) takes as input the currently generated edge list sequence, corresponding to the currently partially generated graph. Edges are directly mapped to the embeddings of their comprising key points, supplemented by position and type embeddings to differentiate between them, as shown in Fig. 5 (a). An additional global image embedding, also extracted by the ResNet, is used to initialize the sequence. The Transformer II module produces a single hidden state, which is linked by a pointer network (Vinyals et al., 2015), via a dot product, to the $N_{\mathcal{V}'} + 1$ key point embeddings (corresponding to the provided key points, supplemented by the special end-of-generation token) to generate the final distribution. This allows a variable number of actions that depends on the current environment state, instead of using a fixed action space.
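A minimal sketch of the pointer-network step just described, producing a distribution over the variable-sized set of key points plus the end-of-generation token; names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pointer_distribution(hidden_state, keypoint_embeddings, eos_embedding):
    """Dot-product pointer over the variable-sized set of key points.

    hidden_state:        (D,) decoder output of the Transformer II module.
    keypoint_embeddings: (N, D) key point embeddings from the encoder.
    eos_embedding:       (D,) learned embedding of the end-of-generation token.
    Returns a categorical distribution over N + 1 actions.
    """
    candidates = torch.cat([keypoint_embeddings, eos_embedding.unsqueeze(0)], dim=0)
    logits = candidates @ hidden_state  # (N + 1,)
    return F.softmax(logits, dim=-1)
```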
3.2 AUGMENTED SEARCH
In order to address the problems of greedy decoding (analysed in Section 3), we frame our road extraction task as a classical Markov decision process (MDP). The generation of a graph for every image defines an environment, where the length of the currently generated edge list determines the current step. Let $o_t$, $\alpha_t$ and $r_t$ correspond to the observation, the action and the observed reward, respectively, at time step $t$. The aim is to search for a policy that maximizes the expected cumulative reward over a horizon $T$, i.e., $\max_\pi J(\pi) := \mathbb{E}_\pi\big[\sum_{t=0}^{T-1} \gamma^t r_t\big]$, where $\gamma \in (0, 1]$ indicates the discount factor and the expectation is with respect to the randomness in the policy and the transition dynamics. We set the discount factor to 1 due to the assumed bounded time horizon, and we note that although the dynamics of the environment are deterministic, optimizing the reward remains challenging.
Each action leads to the selection of a new key point, with new edges being added once every two actions. The addition of a new edge leads to a revision of the predicted graph and triggers an intermediate reward
$$r_t = sc\big(\mathcal{G}_{gt}, \mathcal{G}_{pred_t}\big) - sc\big(\mathcal{G}_{gt}, \mathcal{G}_{pred_{t-1}}\big), \qquad (3)$$
where $sc(\mathcal{G}_{gt}, \mathcal{G}_{pred_t})$ is a similarity score between the ground truth graph $\mathcal{G}_{gt}$ and the current estimate $\mathcal{G}_{pred_t}$. Discussion of the specific similarity scores used in practice is postponed to Section 3.3. A proper spatial graph generation entails (i) correct topology and (ii) accurate location prediction of individual roads. For the latter, intermediate vertices of degree 2 are essential. We call a road segment (RS) an ordered collection of edges whose intermediate vertices have degree $d(\cdot)$ equal to two (or a collection of edges forming a circle):
$$\begin{aligned} RS = \{(v_{rs_1}, v_{rs_2}), \dots, (v_{rs_{k-1}}, v_{rs_k})\} \quad \text{s.t.} \quad & (v_{rs_i}, v_{rs_{i+1}}) \in \mathcal{E} \ \text{for } i = 1, \dots, k-1, \\ & d(v_{rs_i}) = 2 \ \text{for } i = 2, \dots, k-1, \\ & \big(d(v_{rs_1}) \neq 2 \ \text{and} \ d(v_{rs_k}) \neq 2\big) \ \text{or} \ v_{rs_1} = v_{rs_k}. \end{aligned}$$
During the progression of an episode (i.e. the sequential generation of a graph), the topological nature of the similarity scores in Eq. 3 implies that the effect of each new edge to the reward will be reflected mostly once its whole corresponding road segment has been generated. To resolve the ambiguity in the credit assignment and allow our agent to look ahead into sequences of actions, we rely on Monte Carlo Tree Search (MCTS) to simulate entire sequences of actions. We use a state-of-the-art search-based agent, MuZero (Schrittwieser et al., 2020), that constructs a learnable model of the environment dynamics, simulating transitions in this latent representation and leading to significant computational benefits.
Specifically, MuZero requires three distinct parts (see also Fig. 5):
1. A representation function f that creates a latent vector of the current state ht = fθ(ot). For this step, we use the autoregressive model, as shown in Fig. 4. Our current latent representation ht contains the graph’s hidden state, along with the key points’ embeddings used to map actions to latent vectors. As key points remain the same throughout the episode, image-based features (Components (1) and (2) in Fig. 4) are only computed once.
2. A dynamics network $g$, for which we use a simple LSTM (Hochreiter & Schmidhuber, 1997), that predicts the effect of a new action by predicting the next hidden state and the expected reward: $(\hat{h}_t, \hat{r}_t) = g_\theta(\tilde{h}_{t-1}, \alpha_t)$. We can replace $\tilde{h}_{t-1}$ with the latent representation $h_{t-1}$, or its previously computed approximation $\hat{h}_{t-1}$ for tree search of depth larger than 1.
3. A prediction network ψ, that estimates the policy and the value for the current state (pt+1, vt) = ψθ(h̃t). We compute the policy via a pointer network, as described in Section 3.1. Value estimates are produced by a simple multi-layer network.
The dynamics network guides the search and evaluates the expected reward of actions. For every newly generated edge, we also explicitly inform the network regarding the creation of new intersections and the expected relative change in the overall road surface generated via embeddings (see Fig. 5). By using the dynamics network, we bypass the expensive call to the decoder module during the search, and can instead approximate small modifications in the latent representation directly. For our experiments, the dynamics network requires up to 90 times less floating-point operations to simulate trajectories, compared to using the edge embeddings’ decoder. Effectively, our method does not involve significantly more computation budget compared to the base autoregressive model.
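For concreteness, below is a minimal sketch of the dynamics and value interfaces described above; layer sizes and module names are illustrative assumptions rather than the exact architecture shown in Fig. 13.

```python
import torch
import torch.nn as nn

class DynamicsNetwork(nn.Module):
    """g_theta: predicts the next latent state and the expected reward."""
    def __init__(self, hidden_dim=256, action_dim=256):
        super().__init__()
        self.cell = nn.LSTMCell(action_dim, hidden_dim)
        self.reward_head = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.GELU(), nn.Linear(64, 1))

    def forward(self, h_prev, c_prev, action_embedding):
        h_next, c_next = self.cell(action_embedding, (h_prev, c_prev))
        return h_next, c_next, self.reward_head(h_next)

class ValueNetwork(nn.Module):
    """Value part of psi_theta; the policy comes from the pointer network."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.value_head = nn.Sequential(
            nn.Linear(hidden_dim + 1, 64), nn.GELU(), nn.Linear(64, 1))

    def forward(self, h, env_step):
        # The environment step is appended because the time horizon is bounded.
        return self.value_head(torch.cat([h, env_step], dim=-1))
```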
3.3 EVALUATION METRICS
We adopt the same evaluation metrics both as a comparison between different methods but also as the incremental rewards for our agent, by Eq. 3. We use the relaxed versions of precision, recall and intersection over union for pixel-level predictions Correctness/Completeness/Quality (CCQ) (Wiedemann et al., 1998; Wang et al., 2016). As graph-theoretic metrics we use APLS (Van Etten et al., 2018) and additionally include new metrics introduced in Citraro et al. (2020) that compare Paths, Junctions and Sub-graphs of the graphs in question, producing respectively precision, recall and f1 scores. More details can be found in Appendix E.1
4 EXPERIMENTS
Implementation details We resize images to 300 × 300 pixels, standardizing according to the training set statistics. For exploration, we initialize workers using Ray (Moritz et al., 2017) that execute episodes in the environment. For training, we unroll the dynamics function for td = 5 steps and use priority weights for the episode according to the differences between predicted and target values. Our algorithm can be considered as an approximate on-policy TD(λ) (Sutton & Barto, 2018) due to the relatively small replay buffer. We reanalyse older games (Schrittwieser et al., 2020) to provide fresher target estimates. Unvisited graph nodes are selected based on an upper confidence score, balancing exploration with exploitation, similar to Silver et al. (2018). We add exploration noise as Dirichlet noise and select actions based on a temperature-controlled sampling procedure, whose temperature is reduced during training.
Given the limited high-quality available ground truth labels (Singh et al., 2018) and to accelerate training, we employ modifications introduced in EfficientZero (Ye et al., 2021). We investigate adding supervision to the environment model and better initialize Q-value predictions similar to the implementation of Elf OpenGo (Tian et al., 2019). We further scale values and rewards using an
invertible transform inspired by Pohlen et al. (2018). Here, we predict support, as fully connected networks are biased towards learning low-frequency representations (Jacot et al., 2018). Selecting new actions involves generating simulations that can be done expeditiously given the small dimension of the latent space and the modest size of the dynamics network. Finally, to generate key points, we skeletonize segmentation masks provided by any baseline segmentation model, by thresholding the respective segmentation masks produced and applying RDP-simplification (Douglas & Peucker, 1973; Ramer, 1972). Selecting an appropriate threshold and subdividing larger edges guarantees that the generated set V ′ adequately captures most of the ground truth road network, leaving the complexity of the problem for our model to handle.
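A sketch of the invertible scaling transform referred to above, in the form commonly used in the MuZero line of work; the epsilon value is an illustrative assumption.

```python
import torch

EPS = 1e-3  # assumed value for the linear term of the transform

def scale(x):
    """h(x) = sign(x) * (sqrt(|x| + 1) - 1) + eps * x, applied to values and
    rewards before predicting their categorical support."""
    return torch.sign(x) * (torch.sqrt(torch.abs(x) + 1) - 1) + EPS * x

def unscale(x):
    """Closed-form inverse of scale, as used in the MuZero line of work."""
    return torch.sign(x) * (
        ((torch.sqrt(1 + 4 * EPS * (torch.abs(x) + 1 + EPS)) - 1) / (2 * EPS)) ** 2 - 1
    )
```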
4.1 SYNTHETIC DATASET
We generate a dataset of overhead satellite images of a synthetic town using CityEngine1. We randomly specify vegetation of varying height and width along the side walks of the generated streets, leading inadvertently to occlusions of varying difficulty. The simulated environment allows specifying pixel-perfect masks regarding both roads and trees occluding the road surface based on the provided camera parameters (Kong et al., 2020). We can hence tune the complexity of the task and quantify the benefits of our approach for varying levels of difficulty. We defer more details regarding the generation process and dataset examples to the supplementary material.
As a baseline, we train on our dataset a LinkNet model (Chaurasia & Culurciello, 2017), a popular segmentation model that has been widely used in the remote sensing community (Li et al., 2019a). Even in this synthetic and thus less diverse scenario, the deficiency of segmentation models, which rely mostly on local information and have no explicit ability to model longer-range interactions, is evident. Fig. 6 illustrates examples of such over-segmented predictions and how our approach can improve on them. We also define a 'difficulty' attribute per synthetic satellite image, quantifying the occlusions as the percentage of the ground truth road mask that is covered. We observe a considerable absolute improvement in topological metric scores when training our model on this synthetic dataset, compared to the LinkNet baseline, across varying image difficulty.
4.2 REAL DATASETS
We evaluate our method on the SpaceNet and DeepGlobe datasets. We use the same train-test splits as in Batra et al. (2019) to promote reproducibility, while results are reported for the final combined graph on the original image scale. No pre-training on the synthetic dataset takes place. Further details regarding pre-processing are available in Appendix E.2.
1https://www.esri.com/en-us/arcgis/products/arcgis-cityengine/overview
4.2.1 COMPARISON TO BASELINES
We first verify the effectiveness of the proposed approach under an ideal scenario where the key points conditioned upon correspond to the ones from the ground truth. In the interest of space we point the reader to Appendix A and Table 3. Subsequently, we move to the primary task of predicting spatial graphs without the ground-truth graph information, extracting key points via the aforementioned process and training using the described topological metrics directly. The previous baselines are not applicable in this case, due to the lack of ground truth information, so we instead compare against the following baselines. We explore powerful CNN architectures by training a segmentation model with a ResNet backbone. We evaluate DeepRoadMapper (Máttyus et al., 2017), a model that refines previous segmentation by adding connections along possible identified paths. As done by Batra et al. (2019) we notice that in complex scenarios, the effect of this post-processing step is not always positive. We also evaluate against LinkNet (Chaurasia & Culurciello, 2017), and Orientation (Batra et al., 2019), which is trained to simultaneously predict road surfaces and orientation.
Quantitative results in Table 1 and visual inspection in Table 2 affirm that the global context and the gradual generation foster a better understanding of the scene, leading to topological metric results that consistently outperform the baselines. We remark that our predictions are more topologically consistent, with fewer shortcomings such as double roads, fragmented roads, and over-connections. This is further supplemented by comparing the statistics of the predicted spatial graphs in Fig. 7. We further showcase the transferability of our model by employing it with no fine-tuning (apart from dataset-specific image normalization) on the DeepGlobe dataset. We can refine previous predictions by adding missing edges, leading to more accurate spatial graph predictions, as shown in Table 1. This confirms our conjecture that road structures and geometric patterns are repeated across diverse cities' locations.
4.2.2 ABLATION STUDY
We experimented with attending to image features in the two transformer modules by extracting per-patch visual features from the conditioning image $H^{img} = [h^{img}_1, h^{img}_2, \dots]$, as done in the Vision Transformer (Dosovitskiy et al., 2020). This did not lead to significant improvements, which we attribute to over-fitting. In Fig. 8 we highlight the relative importance of some additional components for the final predictions. As efficiency is also of particular importance to us, we further visualize the effect of varying the simulation depth of the dynamics network during training. Surprisingly perhaps, our method performs consistently better than the baselines, even for a small overall simulation length, as this already enables better policy approximations.
In Appendix A we provide incremental results for the task of predicting road networks based on an optimal set of key points. In Appendix B we provide insights concerning interpretability and a further comparison to baselines based on the varying difficulty of the predicted underlying road networks. In Appendix C we give more information regarding the generation of the synthetic dataset, while in Appendix D we detail the model architecture. Finally, in Appendix E we describe further implementation decisions, including details on exactly how key points are generated and how individual patch-level predictions are fused together. More examples of full environment trajectories are given in Appendix F. We stress that our method can act on partially initialized predictions, registering it also as a practical refinement approach on top of any baseline. Initializing our model according to the ARM model allows a moderately quick fine-tuning phase. In combination with the learned environment model, which circumvents expensive calls to the edge embedding model for each simulation step in the MCTS, this allows us to train even on a single GPU.
5 CONCLUSIONS
We presented a novel reinforcement learning framework for generating a graph as a variable-length edge sequence, where a structured-aware decoder selects new edges by simulating action sequences into the future. Importantly, this allows the model to better capture the geometry of the targeted objects. This is confirmed by our experimental results, since our approach consistently produces more faithful graph statistics. One advantage of the proposed method is that the reward function is based on (non-continuous) metrics that are directly connected to the application in question. Our approach does not require significantly more computational resources compared to state-of-the-art supervised approaches, while in addition, it can be used to refine predictions from another given model. We also remark that the direct prediction of a graph enables the concurrent prediction of meta-information about the edges, including, for instance, the type of road (highway, primary or secondary street, biking lane, etc).
Our approach opens the door to several directions for future work. For example, we have assumed that a pre-defined model gives the location of key points, but one could instead augment the action space to propose new key points’ locations. Other promising directions include the direct prediction of input-dependent graph primitives, e.g. T-junctions or roundabouts. Finally, we emphasize that our approach is suitable to a wide variety of applications where autoregressive models are typically used, and it is of special interest when there is a need for complex interactions or constraints between different parts of an object.
6 REPRODUCIBILITY STATEMENT
We have taken multiple steps to ensure reproducibility of the experiments. We refer the reader to Appendix E for a complete description of the training protocol. We have also released the code as part of the supplementary material, including scripts on how to reproduce our results.
A MORE EXPERIMENTS
We first assess the performance of our proposed method in an ideal scenario where the key points, correspond to the ones from the ground truth. To hinder training and inference, we insert additional key points as (1) random intermediate points between known edges and (2) randomly sampled locations in the images. Here, our assumption in Section 3 that the set V ′ suffices to generate the ground truth graph, holds by construction. We compare our method against several baselines that learn to connect edges between key points, using the same feature extraction pipeline, described in Section 3.1, as our model. Cls is a classification network that predicts for all pairs of key points a value {0, 1} corresponding to the existence of an edge. GCN implements a graph neural network that predicts directly the adjacency matrix. We also present an autoregressive version of our model ARM, that is trained with cross-entropy loss to predict the pre-defined ordered sequence of key points. We use this model to initialize ours. Results are presented in Table 3.
As expected, the ARM model achieves a low perplexity score when evaluated against the corresponding sequence, ordered according to the autoregressive order, but suffers in predicting the edges when in random order. The ARM underperforms because of frequent early terminations and the implicit inability to revisit key points, as far as the desired final metric (here APLS) is concerned. Even though our model is developed upon this autoregressive model, it generates tokens in an arbitrary arrangement. Reward and value estimates enable a different training scheme that deeply correlates with the desired objective.
B INTERPRETABILITY
We visualize attention (of the Transformer II module), using the attention flow proposed in Abnar & Zuidema (2020), in Fig. 9. To create attention scores per edge, we aggregate scores for the pair of tokens that define each edge. New predictions pay increased attention to already generated junctions, parallel road segments, and other edges belonging to the same road segment.
We also compare APLS results achieved by varying the difficulty of the ground truth images in terms of the total number of junctions (vertices with a degree greater than 2) and in terms of the average length of road segments that are present, in Fig. 10. Our method explicitly captures information re-
garding the degree of the key points during the search, while it can encode better global information, even across larger distances. It is not a surprise perhaps then, that it outperforms the baselines more convincingly as the difficulty of the ground truth road network increases.
Finally, we visualize an example of an imagined rollout trajectory at a single step of our algorithm in Fig. 11. During a single inference step, our method uses tree search to look ahead into sequences of actions in the future. For our example, we have chosen a relatively smaller number of simulations (10) for better visual inspection. We also show the corresponding environment states reached, which are, however, not explicitly available to the model, as it is searching and planning using a learned model of the environment.
C DATASET CREATION
We use CityEngine, a 3D modelling software for creating immersive urban environments. We generate a simple road network and apply a rural city texture on the created city blocks, provided by Kong et al. (2020). We then uniformly generate trees of varying height and size along the side walks of the generated streets. We then iteratively scan the generated city by passing a camera of specific orientation and height. We repeat the same process after suitable modifications to the texture, for the generation of the street masks, as well as the vegetation masks, that correspond to only the plants along the side walks. Some examples of the generated images are provided in Fig. 12. We note that additional occlusion can be caused by the relation of the camera with the 3D meshes corresponding to buildings. These occlusions are, however, not captured by our generated masks, and we can expect them to contribute partially to the fragmented segmentation results.
We train a segmentation-based model, LinkNet, as our baseline. We rasterize the ground truth graph to create pixel-level labels and train by maximizing the intersection over union, which is commonly done in practice. We note that there is a tradeoff between the nature of the predictions and the choice of the line-width with which the ground truth graph is rasterized. A large width achieves better results in terms of connectivity of the predicted graph but results in poorer accuracy in the final key points’ locations. Furthermore, when providing a large width, areas in the image with more uncertainty, e.g. vegetation that is not above a road segment, are also predicted as road networks with high certainty, leading to spurious, disconnected road segments. To highlight the advantages of our method compared to this baseline and in order to promote more meaningful predictions, we select a relatively smaller width.
D ARCHITECTURE DETAILS
As an image backbone model, we use a ResNet-18 for the synthetic dataset and a ResNet-50 for the real dataset experiments. We extract features at four different scales, after each of the 4 ResNet layers. To extract features for each key point, we interpolate the backbone feature maps based on the key points’ locations. We use different learned embeddings based on the actual key points’ locations. For the key points embedding model, we use a transformer encoder with 16 self-attention layers and a dropout rate of 0.15. We use layer normalization and GELU activation functions.
For the edge-embeddings model, we use the respective key points embedding, along with learned position and type embeddings, which we all sum together. As aforementioned, we can initialize the current edge sequence based on previous predictions, allowing our model to refine any initial prediction provided. Again, we use the same transformer architecture with 16 self-attention layers, and a dropout rate of 0.15.
Finally, the architecture of the dynamics network and the value prediction network are shown in Fig. 13. For the value estimation, we also provide the current environment step, as we execute steps in an environment with a bounded time horizon.
E IMPLEMENTATION DETAILS
E.1 EVALUATION METRICS
APLS (Van Etten et al., 2018) constitutes a graph theoretic metric that faithfully describes routing properties. APLS is defined as
$$\text{APLS} = 1 - \frac{1}{N_p} \sum_{p_{v_1 v_2} < \infty} \min\left\{ 1, \frac{\left| p_{v_1 v_2} - p_{v_1' v_2'} \right|}{p_{v_1 v_2}} \right\}, \qquad (4)$$
where $v$ and $v'$ denote a source node and its closest point on the predicted graph if such exists within a buffer. $N_p$ denotes the number of paths sampled and $p_{v_1 v_2}$ the length of the shortest path between two nodes. Similarly, the Too Long Too Short (TLTS) metric (Wegner et al., 2013) compares lengths of the shortest paths between randomly chosen points of the two graphs, classifying them as infeasible, correct, or too-long or too-short (2l+2s) if the length of the path on the predicted graph does not differ by more than a threshold (5%) compared to the ground truth path. Since small perturbations to the predicted graph can have larger implications to pixel-level predictions, the definitions of precision, recall and intersection over union were relaxed in Wiedemann et al. (1998); Wang et al. (2016), leading to the metrics Correctness/Completeness/Quality (CCQ).
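A simplified sketch of the APLS computation in Eq. 4 is given below; the node matching within a buffer is abstracted into the hypothetical helper match_node, and edge lengths are assumed to be stored in a 'length' attribute.

```python
import networkx as nx

def apls(g_gt, g_pred, node_pairs, match_node, buffer=4.0):
    """Simplified APLS over a set of sampled ground-truth node pairs.

    g_gt, g_pred: weighted nx.Graph objects (edge attribute 'length')
    node_pairs:   iterable of (v1, v2) ground-truth node pairs
    match_node:   hypothetical helper mapping a gt node to its closest
                  predicted node within `buffer`, or None if no match exists.
    """
    total, n_paths = 0.0, 0
    for v1, v2 in node_pairs:
        if not nx.has_path(g_gt, v1, v2):
            continue                         # only finite ground-truth paths count
        n_paths += 1
        p_gt = nx.shortest_path_length(g_gt, v1, v2, weight="length")
        v1p, v2p = match_node(g_pred, v1, buffer), match_node(g_pred, v2, buffer)
        if v1p is None or v2p is None or not nx.has_path(g_pred, v1p, v2p):
            total += 1.0                     # a missing path counts as maximal error
            continue
        p_pred = nx.shortest_path_length(g_pred, v1p, v2p, weight="length")
        total += min(1.0, abs(p_gt - p_pred) / p_gt)
    return 1.0 - total / max(n_paths, 1)
```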
Still, some types of errors, such as double roads or over-connections, are not penalized by the above metrics (Citraro et al., 2020). We therefore additionally include new metrics introduced in Citraro et al. (2020) that compare Paths, Junctions and Sub-graphs of the graphs in question, producing precision, recall and f1 scores, respectively. For the final similarity score used in Eq. 3, we use a linear combination of the aforementioned metrics; more details are available in the supplementary material.
E.2 DATASET INFORMATION
We use the following datasets to train our models, i.e. baselines and our newly proposed RL agent.
SpaceNet (Van Etten et al., 2018) includes a road network of over 8000 km across four different cities: Vegas, Paris, Shanghai, and Khartoum, where the complexity, quality, and regularity of the road network depend on the city of origin. Satellite images are provided at a pixel resolution of 1300 × 1300, corresponding to a ground resolution of 30 cm per pixel. We split the 2780 total images into crops of size 400 × 400 with an overlap of 100 pixels for training. To better highlight the diversity of the satellite images from these four different locations, we have included some randomly sampled examples in Fig. 14.
DeepGlobe (Demir et al., 2018) contains satellite images from 3 different locations with pixel-level annotations. Images have a resolution of 1024 × 1024, with a ground resolution of 50 cm per pixel. We crop the 6226 images into tiles, leading to a similar ground truth resolution per pixel compared to SpaceNet.
E.3 TRAINING DETAILS
At each MCTS search step, we perform several simulations from the root state s0 for a number of steps k = 1, . . . and select an action that maximizes the upper confidence bound (Silver et al., 2018),
$$a^k = \arg\max_a \left[ Q(s, a) + P(s, a) \cdot \frac{\sqrt{\sum_b N(s, b)}}{1 + N(s, a)} \left( c_1 + \log\left( \frac{\sum_b N(s, b) + c_2 + 1}{c_2} \right) \right) \right],$$
where $N(s, a)$, $Q(s, a)$, $P(s, a)$ correspond to the visit counts, mean values and policies, as calculated by the current search statistics. Constants $c_1$, $c_2$ balance exploration and exploitation. Based on a state $s_{k-1}$ and a selected action $a_k$, a new state $s_k$ and reward $\hat{r}_k$ are estimated through the dynamics network. We update the mean values based on bootstrapped values of the estimated value functions and rewards. We experimented with training the reward and value support predictions with both mean squared error (MSE) and cross-entropy loss. We opted for MSE because of its stability. For a more in-depth description of the training scheme of MuZero we recommend Schrittwieser et al. (2020) and Ye et al. (2021).
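For concreteness, the selection rule above can be transcribed as follows; the default constants follow Schrittwieser et al. (2020), and the statistics container is an illustrative assumption.

```python
import math

def puct_score(q, p, n_sa, n_s_total, c1=1.25, c2=19652):
    """Upper-confidence score for one action: Q(s,a) + P(s,a) * U(s,a)."""
    exploration = (math.sqrt(n_s_total) / (1 + n_sa)) * (
        c1 + math.log((n_s_total + c2 + 1) / c2)
    )
    return q + p * exploration

def select_action(stats):
    """stats: dict action -> (Q, P, N); returns the argmax of the PUCT score."""
    n_total = sum(n for _, _, n in stats.values())
    return max(stats, key=lambda a: puct_score(stats[a][0], stats[a][1],
                                               stats[a][2], n_total))
```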
As hinted in the main text, we train using intermediate rewards, a linear combination of topological metrics. We experimented using a variety of different scores and metrics, but ended up using APLS, Path-based f1, Junction-based f1 and Sub-graph-based f1 at a relative scale of (0.35, 0.25, 0.25, 0.15). We found the Sub-graph-based f1 to be more sensitive to small perturbations and therefore weighted it less in the final combination. The metrics mentioned above are highly correlated, as examined in Batra et al. (2019). This correlation, though, holds when comparing the final predictions. Intermediate incremental rewards are more independent, so we still found it useful to use a mixture of them. Initially, to let our network learn basic stable rewards, we use the segmentation prediction mask as target. That means that we train our model to predict the graph that can be extracted after post-processing the segmentation model’s prediction.
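The similarity score sc(·, ·) of Eq. 3 with the stated relative weights could then be sketched as below; the individual metric functions are placeholders for the implementations described in Appendix E.1.

```python
# Relative weights stated above for APLS and the Path/Junction/Sub-graph f1 scores.
WEIGHTS = {"apls": 0.35, "path_f1": 0.25, "junction_f1": 0.25, "subgraph_f1": 0.15}

def similarity_score(g_gt, g_pred, metric_fns):
    """metric_fns: dict name -> callable(g_gt, g_pred) returning a score in [0, 1]."""
    return sum(w * metric_fns[name](g_gt, g_pred) for name, w in WEIGHTS.items())

def incremental_reward(g_gt, g_prev, g_curr, metric_fns):
    """Reward of Eq. 3: improvement of the similarity score after adding an edge."""
    return (similarity_score(g_gt, g_curr, metric_fns)
            - similarity_score(g_gt, g_prev, metric_fns))
```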
After pre-training the autoregressive model, we experimented with fine-tuning using RL with two different learning rates, where a rate slower by a factor in (0, 1] was chosen for the pre-trained modules. Here, we noticed that the model still performed better than the ARM baseline. However, since it has more trouble escaping the autoregressive order than the single-learning-rate model, results are less optimal.
We finally note that by avoiding type and position encoding in the Transformer II module, we can ensure the embedded graph is permutation invariant regarding the sequence of edges and the order of key points within an edge. Our search graph can then be formulated as a directed acyclic graph, circumventing unnecessary division of the search space (Browne et al., 2012; Childs et al., 2008) and enabling more efficient sampling (Saffidine et al., 2012). These updated search statistics are cumbersome to compute, though, and we found no significant efficiency improvement. They do, however, confirm our model's potential ability to handle the input graph as an unordered set, as the problem suggests.
E.4 PRODUCING KEY POINTS
We initially train a segmentation model for predicting pixel-level accurate masks of the road network. For this step, we can use any model from the literature. We extract the predicted graph by
skeletonizing the predicted mask and simplifying the graph with a smoothing threshold. We then sample intermediate vertices along the edges that are largest in terms of ground length, to enlarge the action space. We illustrate a toy example of such a process in Fig. 15. To accelerate inference, we can also initialize our prediction graph based on the provided segmentation mask. In such a case, our method more closely resembles previous refinement approaches. We additionally remove edges of connected components with small overall size and edges belonging to road segments leading to dead ends (that is, vertices of degree one), though keeping the corresponding key points in the environment state. Thus, if our model deems the existence of the respective edges necessary, it can add them once more. We plan to further investigate augmenting the action space with the ability to remove edges in future work, which would not require such a pre-processing strategy.
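A sketch of this key point extraction pipeline is shown below; skeleton_to_graph and simplify stand in for the skeleton-to-graph conversion and the RDP-style simplification, and the subdivision length is an illustrative value.

```python
import numpy as np
from skimage.morphology import skeletonize

def extract_keypoints(prob_mask, threshold=0.3, max_edge_len=30.0,
                      skeleton_to_graph=None, simplify=None):
    """Sketch of the key point extraction pipeline.

    prob_mask:          (H, W) road probability map from any segmentation model
    skeleton_to_graph:  hypothetical helper turning a binary skeleton into an
                        nx.Graph with vertices at pixel locations
    simplify:           hypothetical RDP-style simplification of the graph paths
    """
    skeleton = skeletonize(prob_mask > threshold)
    graph = skeleton_to_graph(skeleton)
    graph = simplify(graph)                 # smooth away near-collinear vertices

    # Subdivide long edges so intermediate key points enlarge the action space.
    for u, v in list(graph.edges()):
        length = np.linalg.norm(np.asarray(u) - np.asarray(v))
        if length > max_edge_len:
            n_new = int(length // max_edge_len)
            pts = [tuple(np.asarray(u) + (i + 1) / (n_new + 1)
                         * (np.asarray(v) - np.asarray(u))) for i in range(n_new)]
            graph.remove_edge(u, v)
            chain = [u] + pts + [v]
            graph.add_edges_from(zip(chain[:-1], chain[1:]))
    return list(graph.nodes()), graph
```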
E.5 COMBINING PREDICTIONS
When creating the final per-image prediction, we initially simply generated predictions on non-overlapping patches and fused them together. To overcome small pixel location differences in the predicted graphs, we fuse by rasterizing the individual graphs in the pixel domain with a line width larger than 1. What we found more successful was to perform inference on overlapping patches and to initialize the currently predicted graph based on the predictions made so far. This is particularly useful, as road segments are often close to the boundaries of our cropped image. Individual inference and simple fusion can often lead to over-connected predictions. We visualize a toy example of such a process in Fig. 16.
For the segmentation baselines, unless specified in their respective documentation, we perform inference by cropping images to overlapping patches and normalizing the final predicted mask based on the number of overlapping predictions per pixel location. We also pad images around their boundary, as done in Acuna et al. (2019). We note some small differences in the final scores for the Orientation model (Batra et al., 2019) and the SpaceNet dataset, compared to the ones in Citraro et al. (2020). We assume these are an outcome of different chosen parameters for the calculation of metrics. We keep these parameters fixed when calculating scores for all methods.
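A minimal sketch of this overlapping-patch inference with per-pixel normalization is given below; boundary padding is omitted, and the image is assumed to be at least one tile large.

```python
import numpy as np

def tiled_inference(image, predict_fn, tile=400, stride=300):
    """Run a segmentation model on overlapping crops and average the overlaps.

    image:      (H, W, C) array with H, W >= tile
    predict_fn: callable mapping a (tile, tile, C) crop to a (tile, tile) mask
    """
    H, W = image.shape[:2]
    prob = np.zeros((H, W), dtype=np.float32)
    count = np.zeros((H, W), dtype=np.float32)
    for y in range(0, max(H - tile, 0) + 1, stride):
        for x in range(0, max(W - tile, 0) + 1, stride):
            prob[y:y + tile, x:x + tile] += predict_fn(image[y:y + tile, x:x + tile])
            count[y:y + tile, x:x + tile] += 1.0
    # Normalize by the number of overlapping predictions per pixel location.
    return prob / np.maximum(count, 1.0)
```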
E.6 MORE COMPARISONS WITH BASELINES
We elaborate more on the evaluation method for Sat2Graph. The authors provided predictions corresponding only to a center crop of the original SpaceNet dataset images. For each 400 × 400 pixel image, predictions are made for the center 352 × 352 area of the image. One could expect slightly better results if trained under the same conditions, but the gap still seems large enough to show the merits of our approach.
Other baselines like Neural turtle graphics (Chu et al., 2019) and Topological Map Extraction (Li et al., 2019b) do not have an implementation available. We do not compare against VecRoad (Tan et al., 2020) or RoadTracer (Bastani et al., 2018), as different datasets were used for the current evaluations. These baselines, however, have already been shown in the literature to underperform methods that we compare against.
F MORE EXAMPLES
We showcase in Fig. 17 and Fig. 18 more examples of the environment state progression, for the synthetic dataset. | 1. What is the focus of the paper regarding road network prediction?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its usefulness and real-world application?
3. Are there any concerns or limitations regarding the method's ability to handle informal and variable roads in remote areas?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper presents a road network prediction method using spatial graphs. The approach uses reinforcement learning.
A number of graph measures are used to assess the method.
Strengths And Weaknesses
The paper is well-written and easy to follow. The literature is covered well. The method is very useful for road network extraction and presents a solution to a real-world problem. The results are strong with additional results provided in the appendix.
The figures are placed awkwardly, and I don't see all figures referred to in the text. The text doesn't often tell the story alongside the figures.
In remote areas, the target of this work, roads are often informal and exhibit a lot of variability e.g. width, surface, and use. Has this been considered? A centerline wouldn't work in such cases.
Clarity, Quality, Novelty And Reproducibility
The approach appears novel and the results are good. I view the novelty as high.
The code and data are referred to in the text. The work should be reproducible. |
ICLR | Title
Mastering Spatial Graph Prediction of Road Networks
Abstract
Accurately predicting road networks from satellite images requires a global understanding of the network topology. We propose to capture such high-level information by introducing a graph-based framework that simulates the addition of sequences of graph edges using a reinforcement learning (RL) approach. In particular, given a partially generated graph associated with a satellite image, an RL agent nominates modifications that maximize a cumulative reward. As opposed to standard supervised techniques that tend to be more restricted to commonly used surrogate losses, these rewards can be based on various complex, potentially noncontinuous, metrics of interest. This yields more power and flexibility to encode problem-dependent knowledge. Empirical results on several benchmark datasets demonstrate enhanced performance and increased high-level reasoning about the graph topology when using a tree-based search. We further highlight the superiority of our approach under substantial occlusions by introducing a new synthetic benchmark dataset for this task.
1 INTRODUCTION
Road layout modelling from satellite images constitutes an important task of remote sensing, with numerous applications in navigation. The vast amounts of data available from the commercialization of geospatial data, in addition to the need for accurately establishing the connectivity of roads in remote areas, have led to an increased interest in the precise representation of existing road networks. By nature, these applications require structured data types that provide efficient representations to encode geometry, in this case, graphs, a de facto choice in domains such as computer graphics, virtual reality, gaming, and the film industry. These structured-graph representations are also commonly used to label recent road network datasets (Van Etten et al., 2018) and map repositories (OpenStreetMap contributors, 2017). Based on these observations, we propose a new method for generating predictions directly as spatial graphs, allowing us to explicitly incorporate geometric constraints in the learning process, encouraging predictions that better capture higher-level dataset statistics.
In contrast, existing methods for road layout detection, mostly rely on pixel-based segmentation models that are trained on masks produced by rasterizing ground truth graphs. Performing pixelwise segmentation, though, ignores structural features and geometric constraints inherent to the
problem. As a result, minimum differences in the pixel-level output domain can have significant consequences in the proposed graph, in terms of connectivity and path distances, as manifested by the often fragmented outputs obtained after running inference on these models. In order to address these significant drawbacks, we propose a new paradigm where we: (i) directly generate outputs as spatial graphs and (ii) formalize the problem as a game where we sequentially construct the output by adding edges between key points. These key points can in principle come from any off-the-shelf detector that identifies road pieces with sufficient accuracy. Our generation process avoids having to resort to cumbersome post-processing steps (Batra et al., 2019; Montoya-Zegarra et al., 2015) or optimize some surrogate objectives (Máttyus & Urtasun, 2018; Mosinska et al., 2018) whose relation to the desired qualities of the final prediction is disputed. Concurrently, the sequential decision-making strategy we propose enables us to focus interactively on different parts of the image, introducing the notion of a current state and producing reward estimates for a succession of actions. In essence, our method can be considered as a generalization of previous refinement techniques (Batra et al., 2019; Li et al., 2019b) with three major advantages: (i) removal of the requirement for greedy decoding, (ii) ability to attend globally to the current prediction and selectively target parts of the image, and (iii) capacity to train based on demanding task-specific metrics.
More precisely, our contributions are the following:
• We propose a novel generic strategy for training and inference in autoregressive models that removes the requirement of decoding according to a pre-defined order and refines initial sampling probabilities via a tree search.
• We create a new synthetic benchmark dataset of pixel-level accurate labels of overhead satellite images for the task of road network extraction. This gives us the ability to simulate complex scenarios with occluded regions, allowing us to demonstrate the improved robustness of our approach. We plan to release this dataset publicly.
• We confirm the wide applicability of our approach by improving the performance of existing methods on the popular SpaceNet and DeepGlobe datasets.
2 RELATED WORK
Initial attempts to extract road networks mainly revolved around handcrafted features and stochastic geometric models of roads (Barzohar & Cooper, 1996). Road layouts have specific characteristics, regarding radiometry and topology e.g. particular junction distribution, certain general orientation, and curvature (see Fig. 2), that enable their detection even in cases with significant occlusion and uncertainty (Hinz & Baumgartner, 2003). Modern approaches mostly formulate the road extraction task as a segmentation prediction task (Lian et al., 2020; Mattyus et al., 2015; Audebert et al., 2017) by applying models such as Hourglass (Newell et al., 2016) or LinkNet (Chaurasia & Culurciello, 2017). This interpretation has significant drawbacks when evaluated against structural losses, because of discontinuities in the predicted masks. Such shortcomings have been addressed by applying some additional post-processing steps, such as high-order conditional random fields (Niemeyer et al., 2011; Wegner et al., 2013) or by training additional models that refine these initial predictions (Máttyus et al., 2017; Batra et al., 2019). Other common techniques include the optimization of an ensemble of losses. Chu et al. (2019) rely on a directional loss and use non-maximal suppression as a thinning layer, while Batra et al. (2019) calculate orientations of road segments. Although such auxiliary losses somewhat improve the output consistency, the fundamental issue of producing
predictions in the pixel space persists. It remains impossible to overcome naturally occurring road network structures, e.g. crossings of roads in different elevations, see Fig. 3.
Previous failure cases have led to more intuitive conceptualizations of the task. Roadtracer (Bastani et al., 2018) iteratively builds a road network, similar to a depth-first search approach, while Chu et al. (2019) learn a generative model for road layouts and then apply it as a prior on top of a segmentation prediction mask. Proposed graph-based approaches encode the road network directly as a graph, but either operate based on a constrained step-size (Tan et al., 2020) to generate new vertices or operate in a single step (He et al., 2020; Bandara et al., 2022), involving user-defined thresholding to post-process the final predictions. Most similar to our work, Li et al. (2019b) predict locations of key points and define a specific order traversing them, as does Xu et al. (2022). Such autoregressive models have been recently successfully applied with the use of transformers (Vaswani et al., 2017) in a range of applications (Nash et al., 2020; Para et al., 2021a;b; Xu et al., 2022) to model constraints between elements, while their supervised training explicitly requires tokens to be processed in a specific order. This specific order, combined with the fact that only a surrogate training objective is used, introduces limitations, discussed further in the next section. In order to eliminate this order requirement and to optimize based on the desired metric, while attending globally to the currently generated graph, we propose to use RL as a suitable alternative.
When generating discrete outputs, an unordered set of edges (Zaheer et al., 2017), it is challenging to adapt existing learning frameworks to train generative models (Para et al., 2021b). Instead of optimizing in the image space, however, we are interested in optimizing spatial structured losses by learning program heuristics, i.e. policies. RL has found success in the past in computer vision applications (Le et al., 2021), but mainly as an auxiliary unit with the goal of improving efficiency (Xu et al., 2021) or as a fine-tuning step (Qin et al., 2018). We instead rely on RL to produce the entire graph exploiting the ability of the framework for more high-level reasoning.
3 METHODOLOGY
We parametrize a road network as a graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$ with each vertex $v_i = [x_i, y_i]^\top \in \mathcal{V}$ representing a key point on the road surface. The set of edges $(v_i, v_j) \in \mathcal{E}$ corresponds to road segments connecting these key points. We can then generate a probability distribution over roads by following a two-step process: i) generation of a set of vertices and ii) generation of a set of edges connecting them. Formally, for an image $\mathcal{I}$, a road network $\mathcal{R}$ is derived as:
$$\mathcal{R} = \underset{\mathcal{V}, \mathcal{E}}{\arg\max}\; P(\mathcal{V}, \mathcal{E} \mid \mathcal{I}) = \underset{\mathcal{V}, \mathcal{E}}{\arg\max}\; P(\mathcal{E} \mid \mathcal{V}, \mathcal{I})\, P(\mathcal{V} \mid \mathcal{I}). \qquad (1)$$
The graph nodes typically correspond to local information in an image, and we therefore resort to a CNN-based model to extract key points, providing the set V ′, that sufficiently captures the information in the ground truth graph G. The construction of edges, however, requires higher-level reasoning that can cope with parallel roads, junctions, occlusions, or poor image resolution, among other difficulties.
Considering probabilistic models over sequences and using the chain rule, we can factorize the joint distribution as the product of a series of conditional distributions
$$P(\mathcal{E} \mid \mathcal{V}, \mathcal{I}; \sigma) = \prod_{n=1}^{N_E} P\big(e_{\sigma(n)} \mid e_{<\sigma(n)}, \mathcal{V}, \mathcal{I}\big), \qquad (2)$$
where $e_{<\sigma(n)}$ represents $e_{\sigma(1)}, e_{\sigma(2)}, \ldots, e_{\sigma(n-1)}$ and $\sigma \in S_{N_E}$ denotes the set of all permutations of the integers $1, 2, \ldots, N_E$, with $N_E$ the number of edges. For our work, we consider the setting where these sequences are upper bounded in length, i.e. $N_E \leq N_{\max}$, a reasonable assumption when dealing with satellite images of fixed size. Autoregressive models (ARMs) have been used to solve similar tasks in the past by defining a fixed order of decoding (Oord et al., 2016; van den Oord et al., 2016; Nash et al., 2020; Para et al., 2021a). In our case, this would correspond to sorting all key points by their x and y locations and generating edges for each of them consecutively. We call this the autoregressive order. There are, however, two major drawbacks.
First, the evaluation metrics used for this task define a buffer region in which nodes in the ground truth and the predicted graph are considered to be a match. Therefore, a newly generated edge can be only partially correct, when only partially overlapping with the ground truth graph. This nonsmooth feedback comes in clear contrast to the supervised training scheme of ARMs, minimization of the negative log-likelihood, that assumes perfect information regarding the key points’ locations, i.e. that the sets V and V ′ are the same. In practice, this condition is rarely met, as the exact spatial graph can be represented in arbitrarily many ways by subdividing long edges into smaller ones or due to small perturbation to key points’ locations. It is thus imperative that our model can estimate the expected improvement of adding selected edges, which implicitly can also signal when to appropriately end the generation process.
Second, the requirement to decode according to the autoregressive order introduces a bias and limits the expressiveness of the model (Uria et al., 2014). As a result, it can lead to failures in cases with blurry inputs or occlusions (Li et al., 2019b). Previous solutions include the use of beam search, either deterministic or stochastic (Meister et al., 2021). Beam search does not however eliminate the bias introduced in the selection order of the key points, while suffering from other deficiencies, such as degenerate repetitions (Holtzman et al., 2019; Fan et al., 2018). In order to address these shortcomings, we advocate for a permutation invariant strategy. We present a novel generic strategy, which improves autoregressive models without requiring significantly more computational cost.
3.1 AUTOREGRESSIVE MODEL
We start by introducing a base autoregressive model, illustrated in Fig. 4. Given an image and a set of key points, our model produces a graph by sequentially predicting a list of indices, corresponding to the graph’s flattened, unweighted edge-list. Each forward pass produces probabilities over the set of key points, which leads to a new action after sampling. A successive pair of indices defines an edge as its two endpoints. A special end-of-sequence token is reserved to designate the end of the generation process.
Following Wang et al. (2018); Smith et al. (2019), we begin by extracting visual features per key point, by interpolating intermediate layers of a ResNet backbone to the key points’ locations, which
are further augmented by position encodings of their locations. We then further process these features using two lightweight Transformer modules. The first transformer (Transformer I in Fig. 4) encodes the features of the key points as embeddings. The second transformer (Transformer II in Fig. 4) takes as input the currently generated edge list sequence, corresponding to the currently partially generated graph. Edges are directly mapped to the embeddings of their comprising key points, supplemented by position and type embeddings, to differentiate between them, as shown in Fig. 5 (a). An additional global image embedding, also extracted by the ResNet, is used to initialize the sequence. The Transformer II module produces a single hidden state, which is linked with the $N_{\mathcal{V}'} + 1$ (corresponding to the provided key points, supplemented by the special end-of-generation token) key points' embeddings by a pointer network (Vinyals et al., 2015), via a dot-product to generate the final distribution. This allows a variable number of actions that depends on the current environment state, instead of using a fixed action space.
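The pointer-network step can be sketched as follows; tensor shapes and the learned end-of-sequence embedding are assumptions for illustration.

```python
import torch

def pointer_logits(hidden_state, keypoint_emb, eos_emb):
    """Distribution over the N_V' key points plus the end-of-sequence token,
    obtained as dot products between the decoder hidden state and the embeddings.

    hidden_state: (dim,) output of the edge-sequence transformer
    keypoint_emb: (K, dim) key point embeddings; eos_emb: (dim,) learned token
    """
    candidates = torch.cat([keypoint_emb, eos_emb.unsqueeze(0)], dim=0)  # (K + 1, dim)
    logits = candidates @ hidden_state                                   # (K + 1,)
    return torch.log_softmax(logits, dim=-1)
```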
3.2 AUGMENTED SEARCH
In order to address the problems of greedy decoding (analysed in Section 3), we frame our road extraction task as a classical Markov decision process (MDP). The generation of a graph for every image defines an environment, where the length of the currently generated edge list determines the current step. Let $o_t$, $\alpha_t$ and $r_t$ correspond to the observation, the action and the observed reward respectively, at time step $t$. The aim is to search for a policy that maximizes the expected cumulative reward over a horizon $T$, i.e., $\max_\pi J(\pi) := \mathbb{E}_\pi\left[\sum_{t=0}^{T-1} \gamma^t r_t\right]$, where $\gamma \in (0, 1]$ indicates the discount factor and the expectation is with respect to the randomness in the policy and the transition dynamics. We set the discount factor to 1 due to the assumed bounded time horizon, and we note that although the dynamics of the environment are deterministic, optimizing the reward remains challenging.
Each action leads to the selection of a new key point, with new edges being added once every two actions. The addition of a new edge leads to a revision of the predicted graph and triggers an intermediate reward
$$r_t = sc\big(\mathcal{G}_{gt}, \mathcal{G}_{pred_t}\big) - sc\big(\mathcal{G}_{gt}, \mathcal{G}_{pred_{t-1}}\big), \qquad (3)$$
where $sc(\mathcal{G}_{gt}, \mathcal{G}_{pred_t})$ is a similarity score between the ground truth graph $\mathcal{G}_{gt}$ and the current estimate $\mathcal{G}_{pred_t}$. Discussion of the specific similarity scores used in practice is postponed to Section 3.3. A proper spatial graph generation entails (i) correct topology and (ii) accurate location prediction of individual roads. For the latter, intermediate vertices of degree 2 are essential. We call a road segment (RS) an ordered collection of edges between vertices of degree $d(\cdot)$ two (or a collection of edges forming a cycle):
$$\mathrm{RS} = \{(v_{rs_1}, v_{rs_2}), \ldots, (v_{rs_{k-1}}, v_{rs_k})\} \quad \text{s.t.} \quad (v_{rs_i}, v_{rs_{i+1}}) \in \mathcal{E} \ \text{for } i = 1, \ldots, k-1,$$
$$d(v_{rs_i}) = 2 \ \text{for } i = 2, \ldots, k-1, \qquad \big(d(v_{rs_1}) \neq 2 \ \text{and} \ d(v_{rs_k}) \neq 2\big) \ \text{or} \ v_{rs_1} = v_{rs_k}.$$
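The sketch below extracts such road segments from a predicted graph by walking through degree-2 vertices; isolated cycles (all vertices of degree 2) are not handled in this simplified version.

```python
import networkx as nx

def road_segments(graph):
    """Split a spatial nx.Graph into road segments: maximal edge chains whose
    interior vertices have degree exactly 2."""
    segments, visited = [], set()
    endpoints = [v for v in graph.nodes() if graph.degree(v) != 2]
    for start in endpoints:
        for nbr in graph.neighbors(start):
            if (start, nbr) in visited or (nbr, start) in visited:
                continue
            chain = [start, nbr]
            # Walk forward while the current vertex has exactly two neighbours.
            while graph.degree(chain[-1]) == 2:
                nxt = [u for u in graph.neighbors(chain[-1]) if u != chain[-2]][0]
                chain.append(nxt)
            visited.update(zip(chain[:-1], chain[1:]))
            visited.update(zip(chain[1:], chain[:-1]))
            segments.append(list(zip(chain[:-1], chain[1:])))
    return segments
```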
During the progression of an episode (i.e. the sequential generation of a graph), the topological nature of the similarity scores in Eq. 3 implies that the effect of each new edge to the reward will be reflected mostly once its whole corresponding road segment has been generated. To resolve the ambiguity in the credit assignment and allow our agent to look ahead into sequences of actions, we rely on Monte Carlo Tree Search (MCTS) to simulate entire sequences of actions. We use a state-of-the-art search-based agent, MuZero (Schrittwieser et al., 2020), that constructs a learnable model of the environment dynamics, simulating transitions in this latent representation and leading to significant computational benefits.
Specifically, MuZero requires three distinct parts (see also Fig. 5):
1. A representation function f that creates a latent vector of the current state ht = fθ(ot). For this step, we use the autoregressive model, as shown in Fig. 4. Our current latent representation ht contains the graph’s hidden state, along with the key points’ embeddings used to map actions to latent vectors. As key points remain the same throughout the episode, image-based features (Components (1) and (2) in Fig. 4) are only computed once.
2. A dynamics network $g$, for which we use a simple LSTM (Hochreiter & Schmidhuber, 1997), that predicts the effect of a new action by predicting the next hidden state and the expected reward: $(\hat{h}_t, \hat{r}_t) = g_\theta(\tilde{h}_{t-1}, \alpha_t)$. We can replace $\tilde{h}_{t-1}$ with the latent representation $h_{t-1}$, or its previously computed approximation $\hat{h}_{t-1}$, for tree search of depth larger than 1.
3. A prediction network ψ, that estimates the policy and the value for the current state (pt+1, vt) = ψθ(h̃t). We compute the policy via a pointer network, as described in Section 3.1. Value estimates are produced by a simple multi-layer network.
The dynamics network guides the search and evaluates the expected reward of actions. For every newly generated edge, we also explicitly inform the network regarding the creation of new intersections and the expected relative change in the overall road surface generated via embeddings (see Fig. 5). By using the dynamics network, we bypass the expensive call to the decoder module during the search, and can instead approximate small modifications in the latent representation directly. For our experiments, the dynamics network requires up to 90 times fewer floating-point operations to simulate trajectories, compared to using the edge embeddings' decoder. Effectively, our method does not involve a significantly larger computation budget compared to the base autoregressive model.
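A minimal sketch of one imagined rollout in latent space, using the three networks above, is given below; the exact interfaces are illustrative assumptions.

```python
import torch

@torch.no_grad()
def simulate_edges(representation, dynamics, prediction, observation, actions):
    """One imagined rollout in latent space (a sketch of the MuZero-style loop).

    representation, dynamics, prediction: the three networks described above.
    actions: list of key point indices to simulate, without calling the decoder.
    """
    hidden = representation(observation)          # encode current graph state once
    total_reward = 0.0
    for a in actions:
        hidden, reward = dynamics(hidden, a)      # cheap latent transition
        total_reward += float(reward)
    policy, value = prediction(hidden)            # priors and value at the leaf
    return policy, float(value), total_reward
```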
3.3 EVALUATION METRICS
We adopt the same evaluation metrics both for comparison between different methods and as the incremental rewards for our agent, via Eq. 3. We use the relaxed versions of precision, recall and intersection over union for pixel-level predictions, Correctness/Completeness/Quality (CCQ) (Wiedemann et al., 1998; Wang et al., 2016). As graph-theoretic metrics we use APLS (Van Etten et al., 2018) and additionally include new metrics introduced in Citraro et al. (2020) that compare Paths, Junctions and Sub-graphs of the graphs in question, producing precision, recall and f1 scores, respectively. More details can be found in Appendix E.1.
4 EXPERIMENTS
Implementation details We resize images to 300 × 300 pixels, standardizing according to the training set statistics. For exploration, we initialize workers using Ray (Moritz et al., 2017) that execute episodes in the environment. For training, we unroll the dynamics function for td = 5 steps and use priority weights for the episode according to the differences between predicted and target values. Our algorithm can be considered as an approximate on-policy TD(λ) (Sutton & Barto, 2018) due to the relatively small replay buffer. We reanalyse older games (Schrittwieser et al., 2020) to provide fresher target estimates. Unvisited graph nodes are selected based on an upper confidence score, balancing exploration with exploitation, similar to Silver et al. (2018). We add exploration noise as Dirichlet noise and select actions based on a temperature-controlled sampling procedure, whose temperature is reduced during training.
Given the limited high-quality available ground truth labels (Singh et al., 2018) and to accelerate training, we employ modifications introduced in EfficientZero (Ye et al., 2021). We investigate adding supervision to the environment model and better initialize Q-value predictions similar to the implementation of Elf OpenGo (Tian et al., 2019). We further scale values and rewards using an
invertible transform inspired by Pohlen et al. (2018). Here, we predict values and rewards as distributions over a discrete support, as fully connected networks are biased towards learning low-frequency representations (Jacot et al., 2018). Selecting new actions involves generating simulations, which can be done expeditiously given the small dimension of the latent space and the modest size of the dynamics network. Finally, to generate key points, we skeletonize segmentation masks provided by any baseline segmentation model, by thresholding the respective segmentation masks produced and applying RDP-simplification (Douglas & Peucker, 1973; Ramer, 1972). Selecting an appropriate threshold and subdividing larger edges guarantees that the generated set $\mathcal{V}'$ adequately captures most of the ground truth road network, leaving the complexity of the problem for our model to handle.
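The invertible transform of Pohlen et al. (2018), as commonly used in MuZero-style agents, can be sketched as follows; ε = 0.001 is the value typically used in that line of work and is an assumption here.

```python
import torch

def scale_target(x, eps=0.001):
    """h(x) = sign(x) * (sqrt(|x| + 1) - 1) + eps * x  (Pohlen et al., 2018)."""
    return torch.sign(x) * (torch.sqrt(torch.abs(x) + 1.0) - 1.0) + eps * x

def unscale_target(h, eps=0.001):
    """Inverse of scale_target, applied to predicted value/reward supports."""
    return torch.sign(h) * (
        ((torch.sqrt(1.0 + 4.0 * eps * (torch.abs(h) + 1.0 + eps)) - 1.0)
         / (2.0 * eps)) ** 2 - 1.0
    )
```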
4.1 SYNTHETIC DATASET
We generate a dataset of overhead satellite images of a synthetic town using CityEngine1. We randomly specify vegetation of varying height and width along the sidewalks of the generated streets, leading inadvertently to occlusions of varying difficulty. The simulated environment allows specifying pixel-perfect masks regarding both roads and trees occluding the road surface based on the provided camera parameters (Kong et al., 2020). We can hence tune the complexity of the task and quantify the benefits of our approach for varying levels of difficulty. We defer more details regarding the generation process and dataset examples to the supplementary material.
We compare our method by training on our dataset a LinkNet model (Chaurasia & Culurciello, 2017), a popular segmentation model that has been widely used in the remote sensing community (Li et al., 2019a). Even in this synthetic and thus less diverse scenario, the deficiency of segmentation models to rely mostly on local information, with no explicit ability for longer-range interactions, is evident. Fig. 6, illustrates examples of such over-segmented predictions and how our approach can improve on them. We also define a ’difficulty’ attribute per synthetic satellite image, quantifying the occlusions as a percentage of the ground truth road mask covered. We observe a considerable absolute improvement in topological metric scores when training our model on this synthetic dataset, compared to the LinkNet baseline, for varying image difficulty.
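A sketch of how such a difficulty attribute can be computed from the generated masks is given below; the exact definition used for the figures may differ slightly.

```python
import numpy as np

def image_difficulty(road_mask, vegetation_mask):
    """Occlusion difficulty: fraction of ground-truth road pixels covered by
    vegetation (both masks are boolean arrays of the same shape)."""
    road_pixels = road_mask.sum()
    if road_pixels == 0:
        return 0.0
    return float(np.logical_and(road_mask, vegetation_mask).sum() / road_pixels)
```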
4.2 REAL DATASETS
We evaluate our method on the SpaceNet and DeepGlobe datasets. We use the same train-test splits as in Batra et al. (2019) to promote reproducibility, while results are reported for the final combined graph on the original image scale. No pre-training on the synthetic dataset takes place. Further details regarding pre-processing are available in Appendix E.2.
1https://www.esri.com/en-us/arcgis/products/arcgis-cityengine/overview
4.2.1 COMPARISON TO BASELINES
We first verify the effectiveness of the proposed approach under an ideal scenario where the key points conditioned upon correspond to the ones from the ground truth. In the interest of space we point the reader to Appendix A and Table 3. Subsequently, we move to the primary task of predicting spatial graphs without the ground-truth graph information but extract key points via the aforementioned process and train using the described topological metrics directly. The previous baselines are not applicable in this case, due to lack of ground truth information, so we instead compare against the following: we explore powerful CNN architectures, by training a Segmentation model with a ResNet backbone. We evaluate DeepRoadMapper (Máttyus et al., 2017), a model that refines previous segmentation by adding connections along possible identified paths. As done by Batra et al. (2019) we notice that in complex scenarios, the effect of this post-processing step is not always positive. We also evaluate against LinkNet (Chaurasia & Culurciello, 2017), and Orientation (Batra et al., 2019), which is trained to simultaneously predict road surfaces and orientations.
Quantitative results in Table 1 and visual inspection in Table 2, affirm that the global context and the gradual generation incite a better understanding of the scene, leading to consistently outperforming topological metric results compared to the baselines. We remark that our predictions are more topologically consistent with fewer shortcomings, such as double roads, fragmented roads, and overconnections. This is further supplemented by comparing the statistics of the predicted spatial graphs in Fig. 7. We further showcase the transferability of our model by employing it with no fine-tuning (apart from dataset-specific image normalization) on the DeepGlobe dataset. We can refine previous predictions by adding missing edges, leading to more accurate spatial graph predictions, as shown in Table 1. This confirms our conjecture that road structures and geometric patterns are repeated across diverse cities’ locations.
4.2.2 ABLATION STUDY
We experimented with attending to image features for the two transformer modules by extracting per-patch visual features from the conditioning image $H^{img} = [h^{img}_1, h^{img}_2, \ldots]$, as done in the Vision Transformer (Dosovitskiy et al., 2020). This did not lead to significant improvements, which we attribute to over-fitting. In Fig. 8 we highlight the relative importance of some additional components for the final predictions. As efficiency is also of particular importance to us, we further visualize the effect of varying the simulation depth of the dynamics network during training. Surprisingly
perhaps, our method performs consistently better than baselines, even for a small overall simulation length, as this already enables better policy approximations.
In Appendix A we provide incremental results for the task of predicting road networks based on an optimal set of key points. In Appendix B we provide insights concerning interpretability and further comparison to baselines based on the varying difficulty of the predicted underlying road networks. In Appendix C we give more information regarding the generation of the synthetic dataset, while in Appendix D we give more information regarding the model architecture. Finally, in Appendix E we provide more implementation details, including exactly how key points are generated and how individual patch-level predictions are fused together. More examples of full environment trajectories are given in Appendix F. We stress that our method can act on partially initialized predictions, registering it also as a practical refinement approach on top of any baseline. Initializing our model according to the ARM model allows a moderately quick fine-tuning phase. This, in combination with the learned environment model, which circumvents expensive calls to the edge embedding model for each simulation step in the MCTS, allows us to train even on a single GPU.
5 CONCLUSIONS
We presented a novel reinforcement learning framework for generating a graph as a variable-length edge sequence, where a structured-aware decoder selects new edges by simulating action sequences into the future. Importantly, this allows the model to better capture the geometry of the targeted objects. This is confirmed by our experimental results, since our approach consistently produces more faithful graph statistics. One advantage of the proposed method is that the reward function is based on (non-continuous) metrics that are directly connected to the application in question. Our approach does not require significantly more computational resources compared to state-of-the-art supervised approaches, while in addition, it can be used to refine predictions from another given model. We also remark that the direct prediction of a graph enables the concurrent prediction of meta-information about the edges, including, for instance, the type of road (highway, primary or secondary street, biking lane, etc).
Our approach opens the door to several directions for future work. For example, we have assumed that a pre-defined model gives the location of key points, but one could instead augment the action space to propose new key points’ locations. Other promising directions include the direct prediction of input-dependent graph primitives, e.g. T-junctions or roundabouts. Finally, we emphasize that our approach is suitable to a wide variety of applications where autoregressive models are typically used, and it is of special interest when there is a need for complex interactions or constraints between different parts of an object.
6 REPRODUCIBILITY STATEMENT
We have taken multiple steps to ensure reproducibility of the experiments. We refer the reader to Appendix E for a complete description of the training protocol. We have also released the code as part of the supplementary material, including scripts on how to reproduce our results.
A MORE EXPERIMENTS
We first assess the performance of our proposed method in an ideal scenario where the key points, correspond to the ones from the ground truth. To hinder training and inference, we insert additional key points as (1) random intermediate points between known edges and (2) randomly sampled locations in the images. Here, our assumption in Section 3 that the set V ′ suffices to generate the ground truth graph, holds by construction. We compare our method against several baselines that learn to connect edges between key points, using the same feature extraction pipeline, described in Section 3.1, as our model. Cls is a classification network that predicts for all pairs of key points a value {0, 1} corresponding to the existence of an edge. GCN implements a graph neural network that predicts directly the adjacency matrix. We also present an autoregressive version of our model ARM, that is trained with cross-entropy loss to predict the pre-defined ordered sequence of key points. We use this model to initialize ours. Results are presented in Table 3.
As expected, the ARM model achieves a low perplexity score when evaluated against the corresponding sequence, ordered according to the autoregressive order, but suffers in predicting the edges when in random order. The ARM underperforms because of frequent early terminations and the implicit inability to revisit key points, as far as the desired final metric (here APLS) is concerned. Even though our model is developed upon this autoregressive model, it generates tokens in an arbitrary arrangement. Reward and value estimates enable a different training scheme that correlates strongly with the desired objective.
B INTERPRETABILITY
We visualize attention (of the Transformer II module), using the attention flow proposed in Abnar & Zuidema (2020), in Fig. 9. To create attention scores per edge, we aggregate scores for the pair of tokens that define each edge. New predictions lay increased attention to already generated junctions, parallel road segments, and other edges belonging to the same road segment.
We also compare APLS results achieved by varying the difficulty of the ground truth images in terms of the total number of junctions (vertices with a degree greater than 2) and in terms of the average length of road segments that are present, in Fig. 10. Our method explicitly captures information re-
garding the degree of the key points during the search, while it can encode better global information, even across larger distances. It is perhaps not a surprise, then, that it outperforms the baselines more convincingly as the difficulty of the ground truth road network increases.
Finally, we visualize an example of an imagined rollout trajectory at a single step of our algorithm in Fig. 11. During a single inference step, our method uses tree search to look ahead into sequences of actions in the future. For our example, we have chosen a relatively small number of simulations (10) for better visual inspection. We also show the corresponding environment states reached, which are, however, not explicitly available to the model, as it is searching and planning using a learned model of the environment.
C DATASET CREATION
We use CityEngine, a 3D modelling software for creating immersive urban environments. We generate a simple road network and apply a rural city texture, provided by Kong et al. (2020), to the created city blocks. We then uniformly generate trees of varying height and size along the sidewalks of the generated streets, and iteratively scan the generated city by passing a camera of specific orientation and height. We repeat the same process, after suitable modifications to the texture, to generate the street masks as well as the vegetation masks, which correspond only to the plants along the sidewalks. Some examples of the generated images are provided in Fig. 12. We note that additional occlusion can be caused by the relation of the camera to the 3D meshes corresponding to buildings. These occlusions are, however, not captured by our generated masks, and we can expect them to contribute partially to the fragmented segmentation results.
We train a segmentation-based model, LinkNet, as our baseline. We rasterize the ground truth graph to create pixel-level labels and train by maximizing the intersection over union, which is commonly done in practice. We note that there is a tradeoff between the nature of the predictions and the choice of the line-width with which the ground truth graph is rasterized. A large width achieves better results in terms of connectivity of the predicted graph but results in poorer accuracy in the final key points’ locations. Furthermore, when providing a large width, areas in the image with more uncertainty, e.g. vegetation that is not above a road segment, are also predicted as road networks with high certainty, leading to spurious, disconnected road segments. To highlight the advantages of our method compared to this baseline and in order to promote more meaningful predictions, we select a relatively smaller width.
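A sketch of the graph rasterization used to produce these pixel-level labels is shown below; node coordinates are assumed to be in pixels, and the default line width is illustrative.

```python
import numpy as np
import cv2

def rasterize_graph(graph, shape, line_width=3):
    """Rasterize a spatial graph (nodes are (x, y) pixel coordinates) into a
    binary mask, used both for training labels and for fusing patch predictions."""
    mask = np.zeros(shape, dtype=np.uint8)
    for (x1, y1), (x2, y2) in graph.edges():
        cv2.line(mask, (int(round(x1)), int(round(y1))),
                 (int(round(x2)), int(round(y2))), color=1, thickness=line_width)
    return mask.astype(bool)
```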
D ARCHITECTURE DETAILS
As an image backbone model, we use a ResNet-18 for the synthetic dataset and a ResNet-50 for the real dataset experiments. We extract features at four different scales, after each of the 4 ResNet layers. To extract features for each key point, we interpolate the backbone feature maps based on the key points’ locations. We use different learned embeddings based on the actual key points’ locations. For the key points embedding model, we use a transformer encoder with 16 self-attention layers and a dropout rate of 0.15. We use layer normalization and GELU activation functions.
For the edge-embeddings model, we use the respective key points embedding, along with learned position and type embeddings, which we all sum together. As aforementioned, we can initialize the current edge sequence based on previous predictions, allowing our model to refine any initial prediction provided. Again, we use the same transformer architecture with 16 self-attention layers, and a dropout rate of 0.15.
Finally, the architecture of the dynamics network and the value prediction network are shown in Fig. 13. For the value estimation, we also provide the current environment step, as we execute steps in an environment with a bounded time horizon.
E IMPLEMENTATION DETAILS
E.1 EVALUATION METRICS
APLS (Van Etten et al., 2018) constitutes a graph theoretic metric that faithfully describes routing properties. APLS is defined as
$$\text{APLS} = 1 - \frac{1}{N_p} \sum_{p_{v_1 v_2} < \infty} \min\left\{ 1, \frac{\left| p_{v_1 v_2} - p_{v_1' v_2'} \right|}{p_{v_1 v_2}} \right\}, \qquad (4)$$
where $v$ and $v'$ denote a source node and its closest point on the predicted graph if such exists within a buffer. $N_p$ denotes the number of paths sampled and $p_{v_1 v_2}$ the length of the shortest path between two nodes. Similarly, the Too Long Too Short (TLTS) metric (Wegner et al., 2013) compares lengths of the shortest paths between randomly chosen points of the two graphs, classifying them as infeasible, correct, or too-long or too-short (2l+2s) if the length of the path on the predicted graph does not differ by more than a threshold (5%) compared to the ground truth path. Since small perturbations to the predicted graph can have larger implications to pixel-level predictions, the definitions of precision, recall and intersection over union were relaxed in Wiedemann et al. (1998); Wang et al. (2016), leading to the metrics Correctness/Completeness/Quality (CCQ).
Still, some types of errors, such as double roads or over-connections, are not penalized by the above metrics (Citraro et al., 2020). We therefore additionally include new metrics introduced in Citraro et al. (2020) that compare Paths, Junctions and Sub-graphs of the graphs in question, producing precision, recall and f1 scores, respectively. For the final similarity score used in Eq. 3, we use a linear combination of the aforementioned metrics; more details are available in the supplementary material.
E.2 DATASET INFORMATION
We use the following datasets to train our models, i.e. baselines and our newly proposed RL agent.
SpaceNet (Van Etten et al., 2018) includes a road network of over 8000 km across four different cities: Vegas, Paris, Shanghai, and Khartoum, where the complexity, quality, and regularity of the road network depend on the city of origin. Satellite images are provided at a pixel resolution of 1300 × 1300, corresponding to a ground resolution of 30 cm per pixel. We split the 2780 total images into crops of size 400 × 400 with an overlap of 100 pixels for training. To better highlight the diversity of the satellite images from these four different locations, we have included some randomly sampled examples in Fig. 14.
DeepGlobe (Demir et al., 2018) contains satellite images from 3 different locations with pixel-level annotations. Images have a resolution of 1024 × 1024, with a ground resolution of 50 cm per pixel. We crop the 6226 images into tiles, leading to a similar ground truth resolution per pixel compared to SpaceNet.
E.3 TRAINING DETAILS
At each MCTS search step, we perform several simulations from the root state s0 for a number of steps k = 1, . . . and select an action that maximizes the upper confidence bound (Silver et al., 2018),
$$a^k = \arg\max_a \left[ Q(s, a) + P(s, a) \cdot \frac{\sqrt{\sum_b N(s, b)}}{1 + N(s, a)} \left( c_1 + \log\left( \frac{\sum_b N(s, b) + c_2 + 1}{c_2} \right) \right) \right],$$
where $N(s, a)$, $Q(s, a)$, $P(s, a)$ correspond to the visit counts, mean values and policies, as calculated by the current search statistics. Constants $c_1$, $c_2$ balance exploration and exploitation. Based on a state $s_{k-1}$ and a selected action $a_k$, a new state $s_k$ and reward $\hat{r}_k$ are estimated through the dynamics network. We update the mean values based on bootstrapped values of the estimated value functions and rewards. We experimented with training the reward and value support predictions with both mean squared error (MSE) and cross-entropy loss. We opted for MSE because of its stability. For a more in-depth description of the training scheme of MuZero we recommend Schrittwieser et al. (2020) and Ye et al. (2021).
As hinted in the main text, we train using intermediate rewards, a linear combination of topological metrics. We experimented using a variety of different scores and metrics, but ended up using APLS, Path-based f1, Junction-based f1 and Sub-graph-based f1 at a relative scale of (0.35, 0.25, 0.25, 0.15). We found the Sub-graph-based f1 to be more sensitive to small perturbations and therefore weighted it less in the final combination. The metrics mentioned above are highly correlated, as examined in Batra et al. (2019). This correlation, though, holds when comparing the final predictions. Intermediate incremental rewards are more independent, so we still found it useful to use a mixture of them. Initially, to let our network learn basic stable rewards, we use the segmentation prediction mask as target. That means that we train our model to predict the graph that can be extracted after post-processing the segmentation model’s prediction.
After pre-training the autoregressive model, we experimented with fine-tuning using RL with two different learning rates, where a rate slower by a factor in (0, 1] was chosen for the pre-trained modules. Here, we noticed that the model still performed better than the ARM baseline. However, since it has more trouble escaping the autoregressive order than the single-learning-rate model, results are less optimal.
We finally note that by avoiding type and position encoding in the Transformer II module, we can ensure the embedded graph is permutation invariant regarding the sequence of edges and the order of key points within an edge. Our search graph can then be formulated as a directed acyclic graph, circumventing unnecessary division of the search space (Browne et al., 2012; Childs et al., 2008) and enabling more efficient sampling (Saffidine et al., 2012). These updated search statistics are cumbersome to compute, though, and we found no significant efficiency improvement. They do, however, confirm our model's potential ability to handle the input graph as an unordered set, as the problem suggests.
E.4 PRODUCING KEY POINTS
We initially train a segmentation model for predicting pixel-level accurate masks of the road network. For this step, we can use any model from the literature. We extract the predicted graph by
skeletonizing the predicted mask and simplifying the graph with a smoothing threshold. We then sample intermediate vertices along the edges that are largest in terms of ground length, to enlarge the action space. We illustrate a toy example of such a process in Fig. 15. To accelerate inference, we can also initialize our prediction graph based on the provided segmentation mask. In such a case, our method more closely resembles previous refinement approaches. We additionally remove edges of connected components with small overall size and edges belonging to road segments leading to dead ends (that is, vertices of degree one), though keeping the corresponding key points in the environment state. Thus, if our model deems the existence of the respective edges necessary, it can add them once more. We plan to further investigate augmenting the action space with the ability to remove edges in future work, which would not require such a pre-processing strategy.
E.5 COMBINING PREDICTIONS
When creating the final per-image prediction, we initially simply generated predictions on non-overlapping patches and fused them together. To overcome small pixel location differences in the predicted graphs, we fuse by rasterizing the individual graphs in the pixel domain with a line width larger than 1. What we found more successful was to perform inference on overlapping patches and to initialize the currently predicted graph based on the predictions made so far. This is particularly useful, as road segments are often close to the boundaries of our cropped image. Individual inference and simple fusion can often lead to over-connected predictions. We visualize a toy example of such a process in Fig. 16.
For the segmentation baselines, unless specified in their respective documentation, we perform inference by cropping images to overlapping patches and normalizing the final predicted mask based on the number of overlapping predictions per pixel location. We also pad images around their boundary, as done in Acuna et al. (2019). We note some small differences in the final scores for the Orientation model (Batra et al., 2019) and the SpaceNet dataset, compared to the ones in Citraro et al. (2020). We assume these are an outcome of different chosen parameters for the calculation of metrics. We keep these parameters fixed when calculating scores for all methods.
E.6 MORE COMPARISONS WITH BASELINES
We elaborate more on the evaluation method for Sat2Graph. The authors provided predictions corresponding only to a center crop of the original SpaceNet dataset images. For each 400 × 400 pixel image, predictions are made for the center 352 × 352 area of the image. One could expect slightly better results if trained under the same conditions, but the gap still seems large enough to show the merits of our approach.
Other baselines like Neural turtle graphics (Chu et al., 2019) and Topological Map Extraction (Li et al., 2019b) do not have an implementation available. We do not compare against VecRoad (Tan et al., 2020) or RoadTracer (Bastani et al., 2018), as different datasets were used for the current evaluations. These baselines, however, have already been shown in the literature to underperform methods that we compare against.
F MORE EXAMPLES
We showcase in Fig. 17 and Fig. 18 more examples of the environment state progression, for the synthetic dataset. | 1. What is the focus of the paper regarding spatial networks of roads on satellite images?
2. What are the strengths and weaknesses of the proposed solution based on MCTS and MuZero?
3. Do you have any concerns regarding the generalizability of the method to different environments?
4. How does the reward system work in the proposed method, and what is the role of MuZero?
5. Can you explain how the RL system is trained, and how the training and evaluation datasets are related?
6. What are the trade-offs of the proposed method in terms of computational complexity compared to existing methods?
7. How does the system handle input and output information in a real-world scenario?
8. Can you clarify the concept of "dreamt tree search" presented in Figure 11?
9. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper describes a solution for learning how to "draw" spatial networks of roads on top of satellite images. The solution is based on MCTS and MuZero. There are limited details in terms of the dataset used for training and how the system is actually deployed after training.
The system is evaluated with synthetic data and real-world satellite images. It is unclear if the proposed solution generalizes to images that are "different" from those used in the training set.
Strengths And Weaknesses
Strengths
The authors consider an interesting problem with clear practical applications.
The method used to generate synthetic data is original and interesting.
It appears that the solution proposed by the authors offers better performance compared to other state-of-the-art methods.
Weaknesses:
As described in the paper, it seems that the proposed method needs the ground truth to actually receive the reward. It is unclear if the method is generalizable to different environments. This is not really evaluated in the paper (or at least it is not explicitly described in the paper).
The reference to MuZero and the description of its application are quite hard to understand, especially in relation to the definition of the rewards. The same applies to the use of MCTS, which is also only sketched in my opinion.
The reviewer was not able to understand how the RL system is actually trained. In fact, it is unclear if the RL system is evaluated using the same dataset used for training.
The authors refer to a learnable dynamics network G, but it is difficult to map it to the actual inputs (and outputs) of the model.
The metrics used in the evaluation have very similar values. I believe that the authors should indicate the confidence intervals of these values. The confidence intervals might be overlapping.
The authors should present the trade-offs of the proposed method in terms of computational complexity. Is the proposed method computationally expensive compared to the existing ones?
There is limited information about the potential actual deployment post-training in terms of information needed as input. The authors should also clarify the outputs that are provided by the system in a clearer way in my opinion.
The reviewer was not able to understand the concept of "dreamt tree search" presented in Figure 11.
Clarity, Quality, Novelty And Reproducibility
The idea of this paper is original and interesting; however, the authors do not clearly show if and how it generalizes to images with different characteristics. The reviewer was not able to understand the training process (and dataset) used by the authors. It is also unclear if there is any overlap between the training and evaluation datasets.
The presentation style of this work can be improved: in fact, some aspects are described in very generic terms and not in relation to the specific problem (in particular the use of MCTS and the application of MuZero in this situation).
In general, the reproducibility of the work is somewhat limited, since the description of the use of MuZero and MCTS is only sketched. The steps needed for the training and deployment of the system are also described in very abstract terms.
ICLR | Title
DynaMS: Dynamic Margin Selection for Efficient Deep Learning
Abstract
The great success of deep learning is largely driven by training over-parameterized models on massive datasets. To avoid excessive computation, extracting and training only on the most informative subset is drawing increasing attention. Nevertheless, it is still an open question how to select such a subset on which the model trained generalizes on par with the full data. In this paper, we propose dynamic margin selection (DynaMS). DynaMS leverages the distance from candidate samples to the classification boundary to construct the subset, and the subset is dynamically updated during model training. We show that DynaMS converges with large probability, and for the first time show both in theory and practice that dynamically updating the subset can result in better generalization. To reduce the additional computation incurred by the selection, a light parameter sharing proxy (PSP) is designed. PSP is able to faithfully evaluate instances following the underlying model, which is necessary for dynamic selection. Extensive analysis and experiments demonstrate the superiority of the proposed approach in data selection against many state-of-the-art counterparts on benchmark datasets.
1 INTRODUCTION
Deep learning has achieved great success owing in part to the availability of huge amounts of data. Learning with such massive data, however, requires clusters of GPUs, special accelerators, and excessive training time. Recent works suggest that eliminating non-essential data presents promising opportunities for efficiency. It is found that a small portion of training samples 1 contributes a majority of the loss (Katharopoulos & Fleuret, 2018; Jiang et al., 2019), so redundant samples can be left out without sacrificing much performance. Besides, the power law nature (Hestness et al., 2017; Kaplan et al., 2020) of model performance with respect to the data volume indicates that loss incurred by data selection can be tiny when the dataset is sufficiently large. In this sense, selecting only the most informative samples can result in better trade-off between efficiency and accuracy.
The first and foremost question for data selection is about the selection strategy. That is, how to efficiently pick training instances that benefit model training most. Various principles have been proposed, including picking samples that incur larger loss or gradient norm (Paul et al., 2021; Coleman et al., 2020), selecting those most likely to be forgotten during training, as well as utilizing subsets that best approximate the full loss (Feldman, 2020) or gradient (Mirzasoleiman et al., 2020; Killamsetty et al., 2021). Aside from selection strategies, existing approaches vary in the training schemes, which can be divided roughly into two categories: static ones and dynamic (or adaptive) ones. Static methods (Paul et al., 2021; Coleman et al., 2020; Toneva et al., 2019) decouple the subset selection and the model training, where the subset is constructed ahead of time and the model is trained on such a fixed subset. Dynamic methods (Mindermann et al., 2022; Killamsetty et al., 2021), however, update the subset in conjunction with the training process. Though these approaches effectively eliminate large numbers of samples, it is still not well understood how the different training schemes influence the final model.
∗Corresponding author 1We use the terms data, sample, and instance interchangeably
In this paper, we propose dynamic margin selection (DynaMS). For the selection strategy, we adopt the classification margin, namely, the distance to the decision boundary. Intuitively, samples close to the decision boundary influence the model more and are thus selected. The classification margin explicitly utilizes the observation that the decision boundary is mainly determined by a subset of the data. For the training scheme, we show that the subset that benefits training most varies as the model evolves during training, so the static selection paradigm may be sub-optimal and dynamic selection is a better choice. Synergistically integrating classification margin selection and dynamic training, DynaMS is able to converge to the optimal solution with large probability. Moreover, DynaMS admits theoretical generalization analysis. Through the lens of generalization analysis, we show that by catching the training dynamics and progressively improving the subset selected, DynaMS enjoys better generalization compared to its static counterpart.
Though training on subsets greatly reduces the training computation, the overhead introduced by data evaluation undermines its significance. Previous works resort to a lighter proxy model. Utilizing a separate proxy (Coleman et al., 2020), however, is insufficient for dynamic selection, where the proxy is supposed to be able to agilely adapt to model changes. We thus propose parameter sharing proxy (PSP), where the proxy is constructed by multiplexing part of the underlying model parameters. As parameters are shared all along training, the proxy can acutely keep up with the underlying model. To train the shared network, we utilize slimmable training (Yu et al., 2019), with which a well-performing PSP and the underlying model can be obtained in just a single training run. PSP is especially advantageous for extremely large-scale, hard problems. For massive training data, screening the informative subset with a light proxy can be much more efficient. For hard problems where the model evolves rapidly, PSP timely updates the informative subset, maximally retaining the model utility.
Extensive experiments are conducted on benchmarks CIFAR-10 and ImageNet. The results show that our proposed DynaMS effectively pick informative subsets, outperforming a number of competitive baselines. Note that though primarily designed for supervised learning tasks, DynaMS is widely applicable as classifiers have become an integral part of many applications including foundation model training (Devlin et al., 2019; Brown et al., 2020; Dosovitskiy et al., 2021; Chen et al., 2020), where hundreds of millions of data are consumed.
In summary, the contributions of this paper are three-folds:
• We establish dynamic margin selection (DynaMS), which dynamically selects an informative subset according to the classification margin to accelerate the training process. DynaMS converges to its optimal solution with large probability and enjoys better generalization.
• We explore constructing a proxy by multiplexing the underlying model parameters. The resulting efficient PSP is able to agilely keep up with the model all along the training, thus fulfilling the requirement of dynamic selection.
• Extensive experiments and ablation studies demonstrate the effectiveness of DynaMS and its superiority over a set of competitive data selection methods.
2 METHODOLOGY
To accelerate training, we propose dynamic margin selection (DynaMS), whose framework is presented in Figure 1. Instances closest to the classification decision boundary are selected for training, and the resulting strategy is named margin selection (MS). We show that the most informative subset changes as the learning proceeds, so that a dynamic selection scheme that progressively improves the subset can result in better generalization. Considering the computational overhead incurred by selection, we then explore parameter sharing proxy (PSP), which utilizes a much lighter proxy model to evaluate samples. PSP is able to faithfully keep up with the underlying model in the dynamic selection scheme. The notations used in this paper are summarized in Appendix H.
2.1 SELECTION WITH CLASSIFICATION MARGIN
Given a large training set T = {x_i, y_i}_{i=1}^{|T|}, data selection extracts the most informative subset S ⊂ T such that the model f(x) trained on it yields minimal performance degradation. Towards this end, we utilize the classification margin, that is, the distance to the decision boundary, to evaluate the informativeness of each sample. The |S| examples with the smallest classification margin are selected.
Intuitively, these samples should be the most influential to the model decision. Following (Mickisch et al., 2020; Emam et al., 2021), the decision boundary between two classes c1 and c2 ∈ {1, . . . , C} is B := {x | f_{c1}(x) = f_{c2}(x)}, where f_c(x) is the c-th entry of the model output, indicating the probability of x belonging to class c. The classification margin is then:

    M(x, c1, c2) = min_δ ‖δ‖_2   s.t.  x + δ ∈ B,        (1)

which is the minimal perturbation required to move x from c1 to c2. Directly computing the margin is infeasible for deep neural networks, so scoring is conducted in the feature space instead, as in (Emam et al., 2021). Typically, neural networks apply a linear classifier on top of the features (Goodfellow et al., 2016), so the classification margin M(x, c1, c2) can be easily obtained as M(x, c1, c2) = (W_{c1} − W_{c2})^T h(x) / ‖W_{c1} − W_{c2}‖_2, where W ∈ R^{d×C} is the weight of the linear classifier 2 and h(x) is the feature of x. In this way, the classification margin of a labeled sample (x, y) along class c is M(x, y, c) if y ≠ c, or min_{c̃≠y} M(x, y, c̃) if y = c. The former indicates the distance moving (x, y) to class c, while the latter is the distance moving (x, y) to the nearest class other than y. To keep the subset balanced, we evenly pick |S|/C samples with the smallest classification margin along each class. The resulting strategy is named margin selection (MS), denoted as MS(w, T, |S|). The procedure is detailed in Algorithm 1 in Appendix A.
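To make the scoring concrete, here is a minimal PyTorch-style sketch of margin selection for a model whose final layer is linear. The per-class balanced top-k selection mirrors the description above, while the variable names and the feature-extraction interface are our own illustrative choices.

import torch

@torch.no_grad()
def margin_select(features, labels, W, budget_per_class):
    """features: (N, d) penultimate features; labels: (N,) class ids; W: (d, C) linear classifier weight."""
    N = features.shape[0]
    C = W.shape[1]
    margins = torch.full((N,), float("inf"))
    for c in range(C):
        # Signed distance of each sample to the boundary between its own class y and class c.
        diff = W[:, labels] - W[:, c].unsqueeze(1)                       # (d, N)
        dist = (diff * features.t()).sum(0) / diff.norm(dim=0).clamp_min(1e-12)
        # Skip c == y (degenerate boundary); otherwise keep the minimum over all other classes.
        margins = torch.where(labels.eq(c), margins, torch.minimum(margins, dist))
    selected = []
    for c in range(C):
        idx = (labels == c).nonzero(as_tuple=True)[0]
        k = min(budget_per_class, idx.numel())
        selected.append(idx[margins[idx].topk(k, largest=False).indices])
    return torch.cat(selected)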
2.2 DYNAMIC SELECTION
Given the selected subset, the model is subsequently trained on S. The conventional static training scheme assumes that the optimal subset converges and is not related to the model training dynamics (Paul et al., 2021; Coleman et al., 2020). Though it effectively eliminates instances, the "converged optimal
2Without loss of generality, we omit the bias term for notation clarity
subset" assumption may be too strong. To investigate whether the most informative samples vary during training, we plot the overlap ratio of samples selected in two consecutive selections during the training of ResNet models, shown in Figure 2(a). We train for 200 epochs and 120 epochs on CIFAR-10 and ImageNet respectively, and conduct selection every 10 epochs. It can be observed that the overlap ratio is on average 0.83 for CIFAR-10 and 0.73 for ImageNet rather than 1.0, meaning that samples that most benefit model training vary as the model evolves. A fixed subset may be outdated after parameter updates, thus yielding sub-optimal results.
We thus resort to a dynamic scheme where data selection is performed after every Q epochs of training 3. By selecting in conjunction with training, the informative subset gets updated according to the current model status. For the kth selection, the informative subset S_k is constructed by picking a portion γ_k of the samples, so that |S_k| = γ_k|T|. The selection ratio γ_k determines the critical margin κ_k, where only samples with classification margin smaller than κ_k are kept. S_k will then be used for training for Q epochs. In the following, we provide a convergence analysis of DynaMS and show that DynaMS achieves better generalization by constantly improving the selected subset.
Convergence Analysis We now study the conditions for the convergence of the training loss achieved by DynaMS. We use logistic regression (LR) to demonstrate, and then show that the conditions are well satisfied when LR is used on top of deep feature extractors. We have the following theorem:

Theorem. Consider logistic regression f(x) = 1/(1 + e^{-w^T x}) with N Gaussian training samples x ~ N(0, Σ), x ∈ R^d. Assume ‖w‖_2 ≤ D and N/d < α. Let w* be the optimal parameters and λ the largest eigenvalue of the covariance Σ. For t ∈ {1, . . . , T} and constants ε > D√(λ/2) − 1, ζ > 1, µ ≫ α, select the subset with critical margin κ_t = (1 + ε) log(ζT − t) and update parameters with learning rate η = DN/(E√T). Then with probability at least 1 − α/µ,

    min_t L(w_t) − L(w*) ≤ DE ( 1/T^{1/4} + c_{ε,ζ}/T^{3/4+ε} + c_{ε,ζ,λ}/T^{β} ),        (2)

where E = √(dλ)(1 + (2µ)^{1/4}), β = (1+ε)^2/(2D^2 λ) − 1/4, and c_{ε,ζ}, c_{ε,ζ,λ} are constants depending on ε, ζ and λ.
The proof is given in Appendix B. Theorem 2.2 indicates that dynamically selecting data based on the classification margin is able to converge and achieve the optimum w* with large probability. The Gaussian input assumption is overly strong in general, but when the linear classifier is adopted on top of a wide enough feature extractor, the condition is well satisfied because an infinitely wide neural network resembles a Gaussian process (Lee et al., 2019; Xiao et al., 2018; de G. Matthews et al., 2018).
Generalization Analysis Recently, (Sorscher et al., 2022) developed an analytic theory for data selection. Assume training data x_i ~ N(0, I) and that there exists an oracle model w_o ∈ R^d which generates the labels such that y_i = sign(w_o^T x_i). Following static selection, when an estimator w is used to pick samples that have a small classification margin, the generalization error takes the form E(α, γ, θ) in the high-dimensional limit. α = |T|/d indicates the abundance of training samples before selection; γ determines the selection budget; and θ = arccos( w^T w_o / (‖w‖_2 · ‖w_o‖_2) ) shows the closeness of the estimator to the oracle. The full set of self-consistent equations characterizing E(α, γ, θ) is given in Appendix C. By solving these equations the generalization error E(α, γ, θ) can be obtained. We then extend it to the dynamic scheme. For the kth selection, we use the model trained on S_{k−1} as the estimator w_k, which deviates from the oracle by angle θ_k = arccos( w_{k−1}^T w_o / (‖w_{k−1}‖_2 · ‖w_o‖_2) ), to evaluate and select samples. The resulting subset S_k will be used for the subsequent training of model w_{k+1}, which will later be used as the estimator at step k + 1 to produce S_{k+1}. In this way, the generalization of the dynamic scheme can be obtained by recurrently solving the equations characterizing E(α, γ_k, θ_k) with updated keeping ratio γ_k and estimator deviation θ_k. Note that in each round of selection, samples are picked with replacement, so the abundance of training samples α is kept fixed. The keeping ratio γ_k, determining the subset size, can be scheduled freely to meet various requirements.
3For extremely large dataset case where training can be accomplished within just one or a few epochs, the selection can be performed every Q iterations
We compare the generalization of dynamic selection and its static counterpart in Figure 2(b). We show the landscape of E(α, γ, θ) with different γ and θ by solving the generalization equations numerically. α = 3.2 is kept fixed, which means the initial training data is abundant; we use static training with θ_s = 40° and γ_s = 0.6 as the control group. To make the comparison fair, we make sure (1/K) Σ_{k=1}^{K} γ_k = γ_s, so that the average number of samples used in the dynamic scheme equals the subset size used in the static scheme. From Figure 2(b), we see that in dynamic selection the estimator gets constantly improved (θ_k decreases), so that the subsets get refined and the model achieves better generalization. A discussion on selecting with different α, γ and θ is given in Appendix D.
2.3 PARAMETER SHARING PROXY
With dynamic selection, the number of updates is reduced. However, the computational overhead incurred by data selection undermines its significance, especially when the model is complex and samples are evaluated frequently. Aside from designing efficient selection strategies, previous works explored utilizing a lighter model as a proxy to evaluate the instances so that the problem can be ameliorated. Pretraining a separate proxy and evaluating instances prior to model training (Coleman et al., 2020), however, is insufficient for dynamic selection, where the proxy is supposed to be able to agilely adapt to model changes. A proxy that fulfills the requirements of dynamic selection is still absent.
We thus propose the parameter sharing proxy (PSP), where part of the model is used as the proxy. Taking a convolutional neural network as an example, for a layer with kernel W ∈ R^{c_i×c_o×u×u}, where c_i, c_o and u are the number of input filters, the number of output filters and the kernel size respectively, the corresponding proxy kernel is W_proxy = W_{1:pc_i, 1:pc_o, :, :}, where p ∈ [0, 1] is a slimming factor. As shown in Figure 3, the proxy kernel is constructed from the first pc_i input channels and the first pc_o output channels. A proxy that is p times thinner can be obtained by applying p to each layer.
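A minimal PyTorch sketch of this slicing is shown below. Note that PyTorch stores convolution weights as (out_channels, in_channels, u, u), so the slicing order differs from the (c_i × c_o × u × u) notation above, and the layer sizes in the example are arbitrary.

import torch
import torch.nn.functional as F

def proxy_forward(conv: torch.nn.Conv2d, x: torch.Tensor, p: float) -> torch.Tensor:
    """Run a p-times-thinner proxy version of conv by reusing a slice of its weight."""
    c_out, c_in = conv.weight.shape[:2]
    k_out, k_in = max(1, int(p * c_out)), max(1, int(p * c_in))
    w = conv.weight[:k_out, :k_in]                       # shared parameters, no copy
    b = conv.bias[:k_out] if conv.bias is not None else None
    return F.conv2d(x[:, :k_in], w, b, stride=conv.stride, padding=conv.padding)

conv = torch.nn.Conv2d(64, 128, kernel_size=3, padding=1)
x = torch.randn(2, 64, 32, 32)
y_full = conv(x)                          # (2, 128, 32, 32)
y_proxy = proxy_forward(conv, x, p=0.5)   # (2, 64, 32, 32), using half of the channels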
With separate batch normalization for the proxy and the model, PSP forms a slimmable network (Yu et al., 2019), where multiple models of different widths are jointly trained and all yield good performance. As the parameters are shared, the proxy can acutely keep up with model changes and is thus applicable to dynamic selection. We further investigate the gradient alignment of the proxy and the original model through their cosine similarity:

    cos(g, g_proxy) = g^T g_proxy / (‖g‖_2 · ‖g_proxy‖_2), where g = ∇_W L(W), g_proxy = ∇_W L(W_proxy).        (3)
A positive cosine value indicates that g_proxy lies on the same side as g, so updates on the proxy and the model benefit each other. We compare the gradient alignment of PSP and a stand-alone proxy in Figure 2(c) on ResNet-50. With p = 0.5, we see that cos(g, g_proxy) for PSP is much larger than for the stand-alone proxy. Given the well-aligned gradients, PSP requires fewer training epochs. Overall workflows of DynaMS and DynaMS+PSP are shown in Algorithm 2 and Algorithm 3 of Appendix A. PSP is especially advantageous for large and hard problems. When the data is extremely large, training PSP on a small subset is cheaper than evaluating the extremely large training set with the original model, making it much more efficient. When the task is hard and the model changes rapidly during training, PSP can timely update the informative subset, maximally retaining the model utility.
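The alignment in Equation 3 can be measured with a short sketch like the one below; how the proxy loss is formed and which parameters are treated as shared are illustrative assumptions rather than the exact slimmable-training setup.

import torch

def grad_cosine(loss_full, loss_proxy, shared_params):
    """Cosine similarity (Eq. 3) between gradients of the two losses w.r.t. the shared parameters."""
    g = torch.autograd.grad(loss_full, shared_params, retain_graph=True)
    gp = torch.autograd.grad(loss_proxy, shared_params, retain_graph=True, allow_unused=True)
    gp = [t if t is not None else torch.zeros_like(s) for t, s in zip(gp, shared_params)]
    g = torch.cat([t.reshape(-1) for t in g])
    gp = torch.cat([t.reshape(-1) for t in gp])
    return torch.dot(g, gp) / (g.norm() * gp.norm() + 1e-12)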
3 RELATED WORK
Accelerating training by eliminating redundant training instances has long been a research focus in academia. This is accomplished by adopting an effective selection strategy and an appropriate training scheme. We summarize the related literature from these two strands of research in the following.
Selection Strategy Sample selection can be accomplished with various principles. (Loshchilov & Hutter, 2015; Jiang et al., 2019; Paul et al., 2021) tend to pick samples that incur large loss or gradient norm (CE-loss, EL2N, GraNd). (Toneva et al., 2019) inspects the “unforgettable”
examples that are rarely misclassified once learned, and believes these samples can be omitted without much performance degradation. Other works adopt uncertainty. Samples with the least prediction confidence are preferred (Settles, 2010). Recently, (Mirzasoleiman et al., 2020; Killamsetty et al., 2021) select subset that best covers or approximates the full gradient (Craig, GradMatch). However, these requires per-sample gradient as well as an additional optimization which is expensive both in run-time and in memory. Our work utilizes the classification margin to identify informative samples,
which is efficient and can synergistically adapt to various training schemes. A comparison of these strategies is given in Table 1, where d is the dimension of the data feature. MS is slightly slower than selection via loss (CE-loss and EL2N), but much more efficient than Craig and GradMatch. Here we consider only the complexity of the selection strategy itself; the time spent on feature extraction is not included. The classification margin has been previously explored in the active learning literature (Ducoffe & Precioso, 2018; Emam et al., 2021); here we utilize it for training acceleration.
Training Schemes Data selection brings more options to training. Under the conventional static training scheme (Paul et al., 2021; Toneva et al., 2019; Coleman et al., 2020), data selection is conducted prior to model updates, and the informative subset is kept fixed. In contrast, online batch selection picks a batch of data at each iteration (Loshchilov & Hutter, 2015; Alain et al., 2015; Zhang et al., 2019; Mindermann et al., 2022). Though it sufficiently considers the training dynamics, the overly frequent sample evaluation incurs prohibitive computational overhead. Recently, (Killamsetty et al., 2021) tried selecting after several epochs of training, which is similar to our dynamic scheme. However, the dynamic training scheme there is utilized merely as a compromise to avoid overly frequent selection, and a formal analysis of its advantage over the static scheme is absent.
By systematically considering the selection strategy, the model training, as well as the proxy design, our proposed DynaMS forms an effective data selection framework for efficient training.
4 EXPERIMENTS
In this section, we first analyse the effectiveness of each design ingredient in Section 4.2. Then we compare to state-of-the-art algorithms in Section 4.3. Code is available at https://github.com/ylfzr/DynaMS-subset-selection.
4.1 EXPERIMENTAL SETUP
We conduct experiments on CIFAR-10 Krizhevsky & Hinton (2009) and ImageNet Jia et al. (2009), following standard data pre-processing in He et al. (2016). A brief summarization of the experimental setup is introduced below, while complete hyper-parameter settings and implementation details can be found in Appendix F.
CIFAR-10 Experiments For CIFAR-10, we train ResNet-18 (He et al., 2016) for 200 epochs. Selection is conducted every 10 epochs, so overall there are 19 selections (K = 19). For the subset size, we adopt a simple linear schedule: γ_k = 1 − k · a for k = 1, . . . , K, where a determines the reduction ratio. We make sure γ_avg = (1/K) Σ_{k=1}^{K} γ_k = γ_s. In this way, the average amount of data used in the dynamic scheme (γ_avg) is kept equal to that of static training (γ_s) for fair comparison. For 0.6× acceleration, a = 0.042. We conduct experiments on an NVIDIA Ampere A-100.
ImageNet Experiments For ImageNet, we choose ResNet-18 and ResNet-50 as base models. Following common conventions, the total number of training epochs is 120. Selection is also conducted every 10 epochs, so altogether K = 11. For the subset size, aside from the linear schedule, we also explore a power schedule where γ_k decays following a power law: γ_k = m · k^{−r} + b for k = 1, 2, . . . , K. For 0.6× acceleration, we set m = 0.398, r = 0.237 and b = 0.290. Please see Appendix F for
more details. The power schedule reserves more samples in late training, preventing performance degradation caused by overly aggressive data pruning. We conduct experiments on four NVIDIA Ampere A-100s.
4.2 ABLATION STUDIES
We use ResNet-50 on ImageNet to illustrate the effect of each ingredient in DynaMS, that is, the classification margin criteria, the dynamic training scheme as well as the parameter sharing proxy.
The effect of classification margin selection To inspect the effect of classification margin selection (MS), we compare MS against two widely applied selection strategies, CE-loss (Loshchilov & Hutter, 2015; Jiang et al., 2019) and EL2N (Paul et al., 2021). CE-loss selects samples explicitly through the cross-entropy loss they incur, while EL2N picks samples that incur a large L2 error. We compare the three under the conventional static scheme so that any factor other than the selection strategy is excluded. Samples are evaluated after 20 epochs of pretraining. The model is then reinitialized and trained on the selected subset, which contains 60% of the original samples. As shown in Table 2, MS achieves the best accuracy among the three, validating its effectiveness.
The effect of dynamic training We then apply dynamic selection on top of MS, where the average subset size is also kept at 60% of the original dataset. From Table 2 we see that DynaMS outperforms MS by 1.67%, which is significant on a large-scale dataset like ImageNet. The superiority of DynaMS validates that, by constantly improving the model and updating the subset, the dynamic selection scheme can result in better performance. Note that DynaMS can be more practical since it does not require the 20 epochs of training prior to selection required by the static scheme.
The effect of parameter sharing proxy We now study the parameter sharing proxy (PSP). An effective proxy is supposed to be faithful and able to agilely adapt to model updates. In Figure 4, we plot the Spearman rank correlation as well as the overlap ratio of samples selected with the proxy and with the model. We see that all along the training, the rank correlation is around 0.68, and over 78% of the selected samples are the same, indicating that the proxy and the model are fairly consistent. We then investigate how the complexity of the proxy, measured by floating point operations (FLOPs), affects performance. We enumerate over the slimming factor p ∈ {0.25, 0.5, 0.75, 1.0} to construct proxies of different widths; the corresponding FLOPs are 6.25%, 25.00%, 56.25% and 100% respectively. In Table 3, we see that a significant computation reduction can be achieved with moderate performance degradation.
4.3 COMPARISONS WITH STATE-OF-THE-ARTS
Finally, we compare DynaMS against various state-of-the-art methods. Aside from CE-loss and EL2N, Random picks samples uniformly at random. GraNd (Paul et al., 2021) selects samples that incur a large gradient norm. Forget (Toneva et al., 2019) counts how many times a sample is mis-classified (forgotten) after it has been learned; samples that are more frequently forgotten are preferred. We evaluate the forget score after 60 epochs of training. To avoid noisy evaluation, many of these static selection approaches ensemble networks before selection. The number of ensembled models is given by the subscript. Auto-assist (Zhang et al., 2019) selects samples that incur a large loss value on a small proxy. Selection is conducted at each iteration, thus forming an online batch selection (OLBS) scheme. DynaCE and DynaRandom apply the corresponding selection strategy, but are trained in a dynamic way. CRAIG and GradMatch propose to reweight and select subsets so that they best cover or approximate the full gradient. In the experiments, we use the per-batch variant of CRAIG and
[Figure 4: Correlation of proxy and model. Two panels plot the Spearman rank correlation (roughly 0.66-0.72) and the overlap ratio (roughly 0.76-0.80) of samples selected by the proxy and by the model against training epochs 20-80.]
GradMatch proposed in (Killamsetty et al., 2021) with a 10-epoch warm start 4. The two approaches utilize a dynamic selection scheme; all training settings are kept the same as for our DynaMS.
In Table 4, the average accuracy over 5 runs on CIFAR-10 as well as the running times are reported. Due to limited space, the standard deviations are given in Appendix E. We see that DynaMS achieves comparable performance against the strongest baselines (EL2N10, GraNd10, Forget10) while being more efficient. Note that the static methods require pretraining one or several models for 20 epochs before selection. Considering this cost (subscript of the reported running time), the acceleration of these methods is less significant. We also compare two online batch selection methods, OnlineMS and Auto-assist (Zhang et al., 2019). OnlineMS picks samples with MS, but the selection is conducted at each iteration. OnlineMS did not outperform DynaMS, meaning that more frequent selection is not necessary. Rather, selecting at each optimization step incurs prohibitive computational overhead. Auto-assist did not achieve good performance in this experiment. This may result from the overly simple proxy: the logistic regression proxy adopted may not sufficiently evaluate the candidate samples.
4For CIFAR-10, we use the published implementation from https://github.com/decile-team/cords. For ImageNet, we adapt the implementation to the distributed setting.
For ImageNet, we also report the average accuracy over 5 runs as well as the running times. The standard deviations are given in Appendix E. DynaMS outperforms all the baselines. For instance, it achieves 68.65% and 74.56% top-1 accuracy given on average 60% of the samples for ResNet-18 and ResNet-50 respectively, surpassing the most competitive counterpart, Forget, by 0.81% and 1.06%. Compared to the static methods, which require additional pretraining (60 epochs for Forget and 20 for the others), DynaMS is much more efficient. CRAIG and GradMatch did not achieve good performance on ImageNet. This might be because we use the per-batch variant in (Killamsetty et al., 2021) and set the batch size to 512 in order to fit the per-sample gradients into memory. The per-batch variant treats each mini-batch as one sample and selects mini-batches during the gradient matching process, so a larger batch size means coarser-grained selection, which may lead to inferior performance. We also compare a variant, DynaRandom. DynaRandom adopts the dynamic selection scheme, but a random subset is constructed at each selection. DynaMS outperforms DynaRandom by 1.06% and 1.93% for ResNet-18 and ResNet-50 respectively, indicating that the superiority of DynaMS over static methods comes from effectively identifying informative samples instead of witnessing more data.
ResNet-50 is rather complex, so the data evaluation time is non-negligible. We thus apply the parameter sharing proxy to reduce the evaluation time. The proxy is of 0.5× width, so the evaluation requires around 0.25× the computation of the original model. As the gradients of the proxy and the underlying model are well aligned, we train DynaMS+PSP for only 90 epochs. From Table 5, although utilizing a proxy harms performance compared to DynaMS, it still outperforms all the other baselines. Specifically, SVP also uses a proxy for sample evaluation. That proxy, however, is a statically, fully trained ResNet-18. The superiority of DynaMS+PSP over SVP shows the necessity of a dynamic proxy that agilely keeps up with the changes of the underlying model. The efficiency advantage of DynaMS+PSP over DynaMS can be significant for extremely large-scale problems where massive data is available but only a small fraction of it is sufficient for training. To further demonstrate DynaMS, we draw the accuracy curve of ResNet-50 against different (on average) sample budgets from 60% to 100% in Figure 5. It can be seen that our DynaMS consistently outperforms all the other data selection strategies on different budgets. Finally, to get a better understanding of what the selected samples look like and how they change over time, we visualize samples picked at different selection steps along the training. See Appendix G for more details.
5 CONCLUSION
In this paper, we propose DynaMS, a general dynamic data selection framework for efficient deep neural network training. DynaMS prefers samples that are close to the classification boundary, and the selected "informative" subset is dynamically updated during model training. DynaMS converges with high probability, and we are the first to show, both in practice and in theory, that dynamic selection improves generalization over previous approaches. Considering the additional computation incurred by selection, we further design a proxy suitable for dynamic selection. Extensive experiments and analyses are conducted to demonstrate the effectiveness of our strategy.
A APPENDIX
A ALGORITHM PROCEDURE
Algorithm 1 outlines the procedure of margin selection (MS). In MS, the distances of each sample (x, y) to every other class c are computed. If y ≠ c, the classification margin of (x, y) with respect to class c is M(x, y, c), which is the distance of moving x from class y to class c. If y = c, the classification margin is min_{c̃≠y} M(x, y, c̃), which corresponds to the distance of moving (x, y) to the class closest to x other than y. For the whole candidate set T, this generates a |T| × C score matrix. After the classification margins are obtained, the |S|/C samples with the smallest classification margin along each class are picked. This keeps the samples collected in the subset balanced.
Algorithm 1 Margin selection: MS(w, T, γ)
Input: Candidate set T, keeping ratio γ, number of classes C; network with weights w, including the weight W of the final classification layer.
Output: Subset S selected according to the classification margin.
1: Compute the keeping budget |S| = γ · |T|, initialize the subset S = {}
   // Evaluating: compute the classification margin.
2: for (x, y) ∈ T do
3:   for c = 1 : C do
4:     Compute the classification margin of the sample to the (y, c) boundary:
         M(x, y, c) = min_{c̃≠y} M(x, y, c̃)  if y = c;   M(x, y, c)  if y ≠ c        (4)
5:   end for
6: end for
   // Selecting: pick the samples according to the classification margin (Equation 4).
7: for c = 1 : C do
8:   Pick the |S|/C samples which have the smallest classification margins M(·): Top_{|S|/C}(c)
9:   S = S ∪ Top_{|S|/C}(c)
10:  Remove the already selected samples from the candidate set: T = T − Top_{|S|/C}(c)
11: end for
Algorithm 2 Dynamic margin selection (DynaMS)
Input: Training data T; base network with weights W, learning rate η; keep ratio of each selection γ_k, k = 1, ..., K; selection interval Q.
Output: Model efficiently trained on the selected subsets.
1: k = 1; γ_k = 1, thus S_k = T
2: for epochs t = 1, ..., T do
3:   if t % Q == 0 then
4:     Select subset: S_k = MS(W_t, T, γ_k)
5:     k = k + 1
6:   else
7:     Keep subset S_k
8:   end if
9:   Update W via stochastic gradient descent on S_k
10: end for
Algorithm 3 Dynamic margin selection (DynaMS) with parameter sharing proxy (PSP)
Input: Training data T; base network with weights W, learning rate η; keep ratio of each selection γ_k, k = 1, ..., K; selection interval Q; slimming factor of the proxy r, which determines the proxy weights W_proxy.
Output: Model efficiently trained on the selected subsets.
1: k = 1; γ_k = 1, thus S_k = T
2: for epochs t = 1, ..., T do
3:   if t % Q == 0 then
4:     Select subset: S_k = MS(W_proxy^t, T, γ_k)
5:     k = k + 1
6:   else
7:     Keep subset S_k
8:   end if
9:   Update W by optimizing L(W) + L(W_proxy) on S_k (slimmable training)
10: end for
A full workflow of efficient training with the proposed dynamic margin selection (DynaMS) is shown in Algorithm 2. The model is first trained on the full dataset T for Q epochs to warm up. Subset selection kicks in every Q epochs: samples are evaluated with the current model, so the informative subset gets updated according to the distance of the samples to the classification boundary. After selection, the model is trained on the selected subset until the next selection. The workflow incorporating the parameter sharing proxy is shown in Algorithm 3. Different from plain DynaMS, samples are evaluated and selected with the proxy instead of the underlying model. During the Q epochs of training, the proxy and the original model are updated simultaneously with slimmable training (Yu et al., 2019).
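A condensed sketch of this workflow is given below; the dataloader construction, the optimizer, and the joint full/proxy loss are illustrative placeholders rather than the exact training code.

import torch
from torch.utils.data import DataLoader, Subset

def train_dynams(model, loss_fn, proxy_loss_fn, dataset, gammas, Q, optimizer, device="cuda"):
    """Sketch of Algorithms 2/3: re-select the subset every Q epochs and train on it in between."""
    indices = list(range(len(dataset)))          # warm up on the full set (gamma = 1)
    k = 0
    num_epochs = Q * (len(gammas) + 1)
    for epoch in range(num_epochs):
        if epoch > 0 and epoch % Q == 0 and k < len(gammas):
            budget = int(gammas[k] * len(dataset))
            # A margin-based scorer in the spirit of the MS sketch above,
            # adapted here to return dataset indices (assumed helper).
            indices = select_by_margin(model, dataset, budget)
            k += 1
        loader = DataLoader(Subset(dataset, indices), batch_size=256, shuffle=True)
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y) + proxy_loss_fn(model, x, y)   # slimmable-style joint loss
            loss.backward()
            optimizer.step()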
B PROOF FOR THEOREM 2.2
To prove Theorem 2.2, we first inspect the norm of x. We get the following lemma.
Lemma 1. For Gaussian data x ∼ N (0,Σ), let µ > 0, T > 1 be constants, d the dimension of x and λ the largest eigenvalue of the covariance Σ, then with probability at least 1 − 1µTd , ∥x∥2 < √ dλ(1 + (2µ) 1 4 )T 1 4 .
Proof of Lemma 1. For x ∼ N (0,Σ), ∥x∥22 follows a generalized chi-squared distribution. The mean and variance can be computed explicitly as E[∥x∥22] = trΣ = ∑ j λj and Var(∥x∥ 2 2) =
2trΣ2 = 2 ∑
j λ 2 j . By Chebyshev’s inequality, we have
Pr ∥x∥22 <∑λj +√µTd√2∑ j λ2j > 1− 1 µTd
where µ > 0 and T > 1 are constants and d is the dimension of x. Then as ∑ λj + √ µTd √ 2 ∑ j λ 2 j ≤ (1 + √ 2µT )dλ where λ = maxj λj is the largest eigenvalue of the
covariance Σ, we have: Pr ( ∥x∥2 < √ dλ(1 + (2µ) 1 4 )T 1 4 ) > 1− 1
µTd (5)
Then we can start proving Theorem 2.2.
Theorem. Consider logistic regression f(x) = 1/(1 + e^{-w^T x}) with N Gaussian training samples x ~ N(0, Σ), x ∈ R^d. Assume ‖w‖_2 ≤ D and N/d < α. Let w* be the optimal parameters and λ the largest eigenvalue of the covariance Σ. For t ∈ {1, . . . , T} and constants ε > D√(λ/2) − 1, ζ > 1, µ ≫ α, select the subset with critical margin κ_t = (1 + ε) log(ζT − t) and update parameters with learning rate η = DN/(E√T). Then with probability at least 1 − α/µ,

    min_t L(w_t) − L(w*) ≤ DE ( 1/T^{1/4} + c_{ε,ζ}/T^{3/4+ε} + c_{ε,ζ,λ}/T^{β} ),        (6)

where E = √(dλ)(1 + (2µ)^{1/4}), β = (1+ε)^2/(2D^2 λ) − 1/4, and c_{ε,ζ}, c_{ε,ζ,λ} are constants depending on ε, ζ and λ.
Proof of Theorem 2.2. For logistic regression f(x) = 1/(1 + e^{-w^T x}) with loss function

    L = (1/N) Σ_{i=1}^{N} ℓ_i = (1/N) Σ_{i=1}^{N} [ −y_i log ŷ_i − (1 − y_i) log(1 − ŷ_i) ],        (7)

where ŷ_i is the predicted value, the gradient incurred by training on the selected subset is

    ∂L_κ/∂w = (1/N) Σ_{i=1}^{N} (ŷ_i − y_i) x_i · I(|w^T x_i| < κ).

For samples with |w^T x_i| ≥ κ, i.e. the "easy" samples, we have |sgn(y_i − 1/2) · w^T x_i| ≥ κ, and with probability at least 1 − 1/(µTd)

    ‖∂ℓ_i/∂w‖_2 ≤ E T^{1/4} / (1 + e^κ)   if sgn(y_i − 1/2) · w^T x_i ≥ κ,
    ‖∂ℓ_i/∂w‖_2 ≤ E T^{1/4}               if sgn(y_i − 1/2) · w^T x_i ≤ −κ,        (8)

where E = √(dλ)(1 + (2µ)^{1/4}). Note that the condition sgn(y_i − 1/2) · w^T x_i ≤ −κ means x_i is misclassified by w with margin at least κ. Denoting by r the portion of such misclassified samples in the whole training set, we have the following estimate of the gradient gap:

    Err_t = ‖∂L_κ/∂w − ∂L/∂w‖_2 = (1/N) ‖ Σ_{|w^T x| ≥ κ} ∂ℓ(x)/∂w ‖_2
          ≤ E T^{1/4} (1 − γ_t) / (1 + e^{κ_t}) + E T^{1/4} (1 − γ_t) r_t,        (9)

where γ_t is the fraction of data kept by selecting with margin κ_t. The inequality holds with probability at least (1 − 1/(µTd))^N > 1 − α/(µT) because of Equation 8.

Note that Lemma 1 also suggests ‖∂ℓ/∂w‖_2 ≤ E T^{1/4} with large probability; therefore L is, with high probability, Lipschitz continuous with parameter E T^{1/4}. Setting a constant learning rate η = DN/(E√T) and critical margin κ_t = (1 + ε) log(ζT − t), ζ > 1, we have, with probability at least (1 − α/(µT))^T ≥ 1 − α/µ,

    min_t L(w_t) − L(w*) ≤ DE/(N T^{1/4}) + (D/T) Σ_{t=1}^{T−1} Err_t
        ≤ DE/(N T^{1/4}) + (DE/T^{3/4}) Σ_{t=1}^{T−1} 1/(ζT − t)^{1+ε} + (DE/T^{3/4}) Σ_{t=1}^{T−1} r_t
        ≤ (DE/T^{1/4}) ( 1/N + c_{ε,ζ}/(T^ε √T) ) + (DE/T^{3/4}) Σ_{t=1}^{T−1} r_t.        (10)

The first inequality follows Theorem 1 in (Killamsetty et al., 2021). The last inequality holds because Σ_{t=1}^{T−1} 1/(ζT − t)^{1+ε} ≤ ∫_{(ζ−1)T}^{ζT} s^{−(1+ε)} ds ≤ c_{ε,ζ}/T^ε with c_{ε,ζ} = 1/(ε(ζ − 1)^ε), for all ε > 0 and ζ > 1.
To bound the sum of classification errors (the last term of Equation 10), we again utilize the data distribution prior. Note that the data points contributing to r are quantified by the following set:

    E = {w_o^T x > 0 ∧ w^T x < −κ} ∪ {w_o^T x < 0 ∧ w^T x > κ} := E_1 ∪ E_2,

where w_o is the oracle classifier such that the true label is generated according to y = sgn(w_o^T x). Let φ denote the probability density function of the standard Gaussian; we see that

    r = ∫_E φ(x|Σ) dx = 2 ∫_{E_1} φ(x|Σ) dx ≤ 2 ∫_{{w^T x < −κ}} φ(x|Σ) dx = 2 Φ( −κ / √(w^T Σ w) ) ≤ 2 Φ( −κ / (D√λ) ),

where λ is the largest eigenvalue of Σ. Therefore, we have the following estimate:

    (1/T^{3/4}) Σ_{t=1}^{T−1} r_t ≤ (1/T^{3/4}) Σ_{t=1}^{T−1} 2 Φ( −κ_t / (D√λ) )
        ≤ (2/T^{3/4}) Σ_{t=1}^{T−1} φ(κ_t/(D√λ)) / (κ_t/(D√λ))        (Gaussian upper tail bound)
        = ( 2D√λ / (√(2π)(1 + ε)) ) (1/T^{3/4}) Σ_{t=1}^{T−1} (1/log(ζT − t)) e^{ −((1+ε)^2/(2D^2 λ)) log^2(ζT − t) }
        ≤ ( 2D√λ T^{1/4} / (√(2π)(1 + ε)) ) (1/log((ζ − 1)T + 1)) · ((ζ − 1)T + 1)^{ −((1+ε)^2/(2D^2 λ)) log((ζ−1)T+1) }
        ≤ c_{ε,ζ,λ} T^{−β},        (11)

where β = (1+ε)^2/(2D^2 λ) − 1/4 and we assume log((ζ − 1)T + 1) = Ω(1) with respect to T. Together, this proves Theorem 2.2.
C GENERALIZATION
Sorscher et al. (2022) analysed the generalization of the static training scheme in the teacher-student perceptron setting, where the teacher is an "oracle" generating labels. For the training set T = {x_i, y_i}_{i=1}^{|T|}, assume x_i ~ N(0, I) and that there exists an oracle model w_o ∈ R^d which generates the labels such that y_i = sign(w_o^T x_i) for all i. Without loss of generality, the oracle is assumed to be drawn from a sphere. Sorscher et al. (2022) work in a high-dimensional limit where |T|, d → ∞ but the ratio α = |T|/d remains O(1). Following the static training scheme, a lower-fidelity estimator w_estimate, which has angle θ relative to the oracle w_o, is used to evaluate the candidate instances, and those with a smaller classification margin |w_estimate^T x_i| along the estimator w_estimate are picked. The selection results in a subset S. S follows p(z), a truncated Gaussian distribution along w_estimate, while the other directions are kept isotropic. More specifically, given a keeping ratio γ, the corresponding selection margin is κ = H^{−1}((1 − γ)/2), and thus the subset distribution along w_estimate is p(z) = ( e^{−z^2/2} / (√(2π) γ) ) Θ(κ − |z|), where Θ(x) is the Heaviside function and H(x) = 1 − Φ(x), with Φ(x) the cumulative distribution function (CDF) of the standard Gaussian.
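As a quick numerical check of this margin, the following sketch computes κ for a given keeping ratio γ and verifies that thresholding standard-Gaussian projections at κ indeed keeps roughly a fraction γ of the samples; it only illustrates the formula above.

import numpy as np
from scipy.stats import norm

def selection_margin(gamma):
    """kappa = H^{-1}((1 - gamma) / 2), with H the standard-Gaussian survival function."""
    return norm.isf((1.0 - gamma) / 2.0)

gamma = 0.6
kappa = selection_margin(gamma)
z = np.random.randn(1_000_000)
print(kappa, np.mean(np.abs(z) < kappa))   # empirical keep fraction close to 0.6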
The generalization error of the model trained on the subset S takes the form E(α, γ, θ). That is, the error is determined by γ the keeping ratio, α which indicates the abundance of training samples before selection, and θ which shows the closeness of the estimator to the oracle model. The full set of self-consistent equations characterizing E(α, γ, θ) is given as
    (R − ρ cos θ)/sin^2 θ = (α/(πΛ)) ⟨ ∫_{−∞}^{ν} dτ exp( −Δ(τ, z)/(2Λ^2) ) (ν − τ) ⟩_z

    (1 − ρ^2 + R^2 − 2ρR cos θ)/sin^2 θ = 2α ⟨ ∫_{−∞}^{ν} dτ ( e^{−(τ−ρz)^2/(2(1−ρ^2))} / (√(2π) √(1−ρ^2)) ) H( Γ(τ, z)/(√(1−ρ^2) Λ) ) (ν − τ)^2 ⟩_z

    (ρ − R cos θ)/sin^2 θ = 2α ⟨ ∫_{−∞}^{ν} dτ [ ( e^{−(τ−ρz)^2/(2(1−ρ^2))} / (√(2π) √(1−ρ^2)) ) H( Γ(τ, z)/(√(1−ρ^2) Λ) ) ( (z − ρτ)/(1−ρ^2) ) (ν − τ)
        + (1/(2πΛ)) exp( −Δ(τ, z)/(2Λ^2) ) ( (ρR − cos θ)/(1−ρ^2) ) (ν − τ) ] ⟩_z        (12)

where

    Λ = √( sin^2 θ − R^2 − ρ^2 + 2ρR cos θ ),
    Γ(τ, z) = z(ρR − cos θ) − τ(R − ρ cos θ),
    Δ(τ, z) = z^2 ( ρ^2 + cos^2 θ − 2ρR cos θ ) + 2τz (R cos θ − ρ) + τ^2 sin^2 θ,

τ is an auxiliary field introduced by the Hubbard-Stratonovich transformation, and ⟨·⟩_z denotes expectation over p(z). By solving these equations the generalization error can be easily read off as E = cos^{−1}(R)/π, where R = w^T w_o / (‖w‖_2 · ‖w_o‖_2).
D MORE RESULTS ON GENERALIZATION
[Figure 6: Generalization error E(α, γ, θ) landscapes when individually varying the average keep ratio γ_avg, the data abundance α, and the estimator angle θ.]
To better understand the generalization under classification margin selection, E(α, γ, θ), we provide more results to individually inspect the effect of the (on average) selection ratio γ_avg, the initial data abundance α, and the closeness of the estimator to the oracle model θ. As shown in Figure 6(a), we changed γ_avg from 60% to 50%, thus constructing a smaller selection budget. In Figure 6(b), we use α = 2.1 instead of α = 3.2 to construct a less abundant data case, where the data before selection is insufficient. In Figure 6(c), we start selecting samples using a better estimator, θ = 30° instead of θ = 40°. All hyper-parameters other than the inspected one are kept consistent with those used in Figure 2(b), that is, γ_avg = 0.6, α = 3.2 and θ = 40°. We see that with various γ_avg and θ, DynaMS outperforms its static counterpart. The abundance of the initial data, however, matters significantly. When data is insufficient, data selection, both static and dynamic, causes obvious performance degradation. Figure 7 shows an even more severe case with α = 1.7: the generalization landscape changes significantly and data selection is not recommended in this case.
E COMPARISON WITH STANDARD DEVIATION
We test each method in Table 4 and Table 5 five times. The average accuracy and standard deviation are reported below in Table 6 and Table 7.
F IMPLEMENTATION DETAILS AND HYPER-PARAMETERS
Subset size schedule Dynamic selection admits more freedom in the subset size schedule. In the experiments we consider the linear schedule and the power schedule. For the linear schedule, the keeping ratio is determined by γ_k = 1 − k · a for k = 1, 2, . . . , K, where a determines the sample reduction ratio. γ is supposed to satisfy γ_avg = (1/K) Σ_{k=1}^{K} γ_k = γ_s, where γ_s is the selection ratio when a static training scheme is applied. Thus (1/K) Σ_{k=1}^{K} |T_k| = |S|, meaning that the average number of samples used in the dynamic scheme is kept equal to that of static training.
Aside from the linear schedule, we also explore a power schedule where γ_k = m · k^{−r} + b for k = 1, 2, . . . , K. The power schedule reserves more samples in late training, preventing performance degradation caused by overly aggressive data pruning. Determining the hyper-parameters m, r, b is a bit tricky; we simply require γ_1 = 1.0 to warm start and γ_avg = (1/K) Σ_{k=1}^{K} γ_k = γ_s for fair comparison. γ_K should not be overly small; we empirically find that γ_K ≈ γ_s − 0.1 yields good results. For different budgets γ_s ∈ {0.6, 0.7, 0.8, 0.9}, the hyper-parameters are given in Appendix F, Table 8. Post-processing is carried out to make sure the resulting subset size sequence satisfies the above requirements.
(Killamsetty et al., 2021) utilize a constant schedule, where in each selection the subset size is kept constant at γ_s · |T|. This schedule, however, does not admit selection without replacement. The linear and power schedules are both monotonically decreasing and are thus natural choices in this regard. Figure 8 plots the three schedules for a γ_s = 0.6 budget. In this paper we only provide a preliminary exploration of the subset size schedule; an in-depth study of the relationship between subset size and model performance, as well as an automatic way of determining the optimal schedule, is left for future work.
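For illustration, a minimal sketch of the two schedules is given below; the exact post-processing that enforces γ_1 = 1.0 and the target average is not included here.

import numpy as np

def linear_schedule(K, a):
    """gamma_k = 1 - k * a for k = 1..K."""
    k = np.arange(1, K + 1)
    return 1.0 - k * a

def power_schedule(K, m, r, b):
    """gamma_k = m * k^(-r) + b for k = 1..K, clipped to (0, 1]."""
    k = np.arange(1, K + 1)
    return np.clip(m * k ** (-r) + b, 0.0, 1.0)

# ImageNet power schedule for the (on average) 60% budget, using the values from Table 8.
gammas = power_schedule(K=11, m=0.398, r=0.237, b=0.290)
print(gammas.mean())   # roughly 0.57 here; the paper's post-processing adjusts it to match gamma_s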
Hyper-parameters Finally, the detailed hyper-parameters for DynaMS on both CIFAR-10 and ImageNet datasets are shown in Table 8. Note that for DynaMS+PSP, the Max Epochs is set to be 90 on ImageNet.
Table 8: Hyper-parameters of DynaMS for different models on CIFAR-10 and ImageNet.

Hyper-parameter              CIFAR-10 ResNet-18    ImageNet ResNet-18    ImageNet ResNet-50
Batch Size                   128                   512                   512
Init. Learning Rate of W     0.1                   0.1                   0.1
Learning Rate Decay          Stepwise 0.2          Stepwise 0.1          Stepwise 0.1
Lr Decay Milestones          {60, 120, 160}        {40, 80}              {40, 80}
Optimizer                    SGD                   SGD                   SGD
Momentum                     0.9                   0.9                   0.9
Nesterov                     True                  True                  True
Weight Decay                 5e-4                  1e-4                  1e-4
Max Epochs                   200                   120                   120
Selection Interval           10                    10                    10

Power Scheduler (ImageNet): 60%: m = 0.3984, r = 0.2371, b = 0.2895; 70%: m = 0.3476, r = 0.2300, b = 0.4275; 80%: m = 0.3532, r = 0.1349, b = 0.4978; 90%: m = 0.2176, r = 0.1035, b = 0.7078.
Linear Scheduler: CIFAR-10: a = 0.041; ImageNet: 60%: a = 0.073; 70%: a = 0.055; 80%: a = 0.036; 90%: a = 0.018.
G VISUALIZATION OF DYNAMICALLY SELECTED IMAGES
To get a better understanding of what the selected samples look like and how they change over time, we visualize samples picked at different selection steps along the training. For k = 1, k = 4, k = 7 and k = 10, which correspond to the 1st, 4th, 7th and 10th selections, we randomly visualize selected samples that are absent in the later visualized selection. E.g., the k = 4 row shows images picked in the 4th selection but not in the 7th. From Figure 9, we see that in the early selections, many easy-to-recognize samples are kept. As training proceeds, these simple images are screened out and the model focuses more on harder samples that are atypical, blurred, or contain interfering objects, validating our hypothesis that the most informative samples change as the model evolves. Dynamic selection is thus indispensable.
H SUMMARY OF NOTATIONS
Models and Parameters
  f(·): the model used for classification
  w: parameters of the model
  w*: optimal model parameters
  w_o: oracle model parameters
  W: weight of the linear classifier
  W: kernel of a convolutional layer
  g: gradient incurred by the model
  g_proxy: gradient incurred by the proxy
  d: dimension of the data feature
  h(·): feature extractor part of the model f(·)
  p: slimming factor, deciding the width of the proxy model

Selection schedule
  a: sample reduction ratio in the linear schedule
  m, r, b: hyper-parameters controlling the power schedule

Loss Functions
  L: generic reference to the loss function

Data Selection
  B: decision boundary of linear classifiers
  Q: selection interval
  M: classification margin, i.e. the distance of a sample to the decision boundary
  γ_k: selection budget, keep ratio of samples for the kth selection
  γ_avg: averaged keep ratio of dynamic selection
  γ_s: selection budget in static selection
  k: selection step
  K: total number of selections along training
  E: generalization error of the model trained on the selected subset
  θ: relative angle of a model to the oracle model
  α: abundance of data before selection
  κ: selection margin

Train
  t: training epoch
  T: total number of training epochs, T = Q · (K + 1)

Data Distribution
  Σ: covariance of a Gaussian distribution
  λ: largest eigenvalue of the covariance matrix

Hyper-parameters
  D: upper bound of the model parameter norm
  ε, ζ, µ: constants appearing in the convergence bound
2. What are the strengths of the proposed approach, particularly in terms of its ability to construct a training subset?
3. What are the weaknesses of the paper, especially regarding its comparisons with other methods and the performance of its proposed approach?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper introduces a dynamic margin selection (DynaMS) method to dynamically construct the training subset by utilizing the distance from candidate samples to the classification boundary. In addition, a light parameter sharing proxy is designed to reduce the additional computation incurred by the selection. Extensive analysis and experiments demonstrate the superiority of the proposed approach in data selection.
Strengths And Weaknesses
Strength:
A dynamic margin selection (DynaMS) method is proposed to dynamically construct the training subset by utilizing the distance from candidate samples to the classification boundary.
Extensive analysis and experiments are conducted to show the performance of the proposed method.
Weaknesses:
The paper claims that the existing sample selection methods are expensive both in run-time and in memory; however, the paper lacks comparisons with different methods in terms of run-time and memory.
In Table 3, the proposed DynaMS does not perform better than the static-based methods, and performs only slightly better than dynamic-based methods such as CRAIG and GradMatch.
In Table 4, the results of DynaMS+PSP are not competitive, so the contribution of PSP is limited.
Clarity, Quality, Novelty And Reproducibility
The quality and clarity are good, and the originality of the work is fair.
ICLR | Title
DynaMS: Dynamic Margin Selection for Efficient Deep Learning
Abstract
The great success of deep learning is largely driven by training over-parameterized models on massive datasets. To avoid excessive computation, extracting and training only on the most informative subset is drawing increasing attention. Nevertheless, it is still an open question how to select such a subset on which the model trained generalizes on par with the full data. In this paper, we propose dynamic margin selection (DynaMS). DynaMS leverages the distance from candidate samples to the classification boundary to construct the subset, and the subset is dynamically updated during model training. We show that DynaMS converges with large probability, and for the first time show both in theory and practice that dynamically updating the subset can result in better generalization. To reduce the additional computation incurred by the selection, a light parameter sharing proxy (PSP) is designed. PSP is able to faithfully evaluate instances following the underlying model, which is necessary for dynamic selection. Extensive analysis and experiments demonstrate the superiority of the proposed approach in data selection against many state-of-the-art counterparts on benchmark datasets.
1 INTRODUCTION
Deep learning has achieved great success owing in part to the availability of huge amounts of data. Learning with such massive data, however, requires clusters of GPUs, special accelerators, and excessive training time. Recent works suggest that eliminating non-essential data presents promising opportunities for efficiency. It is found that a small portion of training samples 1 contributes a majority of the loss (Katharopoulos & Fleuret, 2018; Jiang et al., 2019), so redundant samples can be left out without sacrificing much performance. Besides, the power law nature (Hestness et al., 2017; Kaplan et al., 2020) of model performance with respect to the data volume indicates that loss incurred by data selection can be tiny when the dataset is sufficiently large. In this sense, selecting only the most informative samples can result in better trade-off between efficiency and accuracy.
The first and foremost question for data selection is about the selection strategy. That is, how to efficiently pick training instances that benefit model training most. Various principles have been proposed, including picking samples that incur larger loss or gradient norm (Paul et al., 2021; Coleman et al., 2020), selecting those most likely to be forgotten during training, as well as utilizing subsets that best approximate the full loss (Feldman, 2020) or gradient (Mirzasoleiman et al., 2020; Killamsetty et al., 2021). Aside from selection strategies, existing approaches vary in the training schemes, which can be divided roughly into two categories: static ones and dynamic (or adaptive) ones. Static methods (Paul et al., 2021; Coleman et al., 2020; Toneva et al., 2019) decouple the subset selection and the model training, where the subset is constructed ahead of time and the model is trained on such a fixed subset. Dynamic methods (Mindermann et al., 2022; Killamsetty et al., 2021), however, update the subset in conjunction with the training process. Though these approaches effectively eliminate large numbers of samples, it is still not well understood how the different training schemes influence the final model.
∗Corresponding author 1We use the terms data, sample, and instance interchangeably
In this paper, we propose dynamic margin selection (DynaMS). For the selection strategy, we inquire the classification margin, namely, the distance to the decision boundary. Intuitively, samples close to the decision boundary influence the model more and are thus selected. Classification margin explicitly utilizes the observation that the decision boundary is mainly determined by a subset of the data. For the training scheme, we show that the subset that benefits training most varies as the model evolves during training; the static selection paradigm may therefore be sub-optimal, and dynamic selection is a better choice. Synergistically integrating classification margin selection and dynamic training, DynaMS is able to converge to the optimal solution with large probability. Moreover, DynaMS admits theoretical generalization analysis. Through the lens of generalization analysis, we show that by catching the training dynamics and progressively improving the subset selected, DynaMS enjoys better generalization compared to its static counterpart.
Though training on subsets greatly reduces the training computation, the overhead introduced by data evaluation undermines its significance. Previous works resort to a lighter proxy model. Utilizing a separate proxy (Coleman et al., 2020), however, is insufficient for dynamic selection, where the proxy is supposed to be able to agilely adapt to model changes. We thus propose the parameter sharing proxy (PSP), where the proxy is constructed by multiplexing part of the underlying model parameters. As parameters are shared all along training, the proxy can acutely keep up with the underlying model. To train the shared network, we utilize slimmable training (Yu et al., 2019), with which a well-performing PSP and the underlying model can be obtained in a single training run. PSP is especially valuable for extremely large-scale, hard problems. For massive training data, screening the informative subset with a light proxy can be much more efficient. For hard problems where the model evolves rapidly, PSP timely updates the informative subset, maximally retaining the model utility.
Extensive experiments are conducted on benchmarks CIFAR-10 and ImageNet. The results show that our proposed DynaMS effectively pick informative subsets, outperforming a number of competitive baselines. Note that though primarily designed for supervised learning tasks, DynaMS is widely applicable as classifiers have become an integral part of many applications including foundation model training (Devlin et al., 2019; Brown et al., 2020; Dosovitskiy et al., 2021; Chen et al., 2020), where hundreds of millions of data are consumed.
In summary, the contributions of this paper are three-folds:
• We establish dynamic margin selection (DynaMS), which selects an informative subset dynamically according to the classification margin to accelerate the training process. DynaMS converges to its optimal solution with large probability and enjoys better generalization.
• We explore constructing a proxy by multiplexing the underlying model parameters. The resulting efficient PSP is able to agilely keep up with the model all along the training, thus fulfilling the requirement of dynamic selection.
• Extensive experiments and ablation studies demonstrate the effectiveness of DynaMS and its superiority over a set of competitive data selection methods.
2 METHODOLOGY
To accelerate training, we propose dynamic margin selection (DynaMS) whose framework is presented in Figure 1. Instances closest to the classification decision boundary are selected for training, and the resulting strategy is named margin selection (MS). We show that the most informative subset changes as the learning proceeds, so that a dynamic selection scheme that progressively improves the subset can result in better generalization. Considering the computational overhead incurred by selection, we then explore parameter sharing proxy (PSP), which utilizes a much lighter proxy model to evaluate samples. PSP is able to faithfully keep up with the underlying model in the dynamics selection scheme. The notations used in this paper are summarized in Appendix H
2.1 SELECTION WITH CLASSIFICATION MARGIN
Given a large training set T = {xi, yi}, i = 1, . . . , |T |, data selection extracts the most informative subset S ⊂ T such that the model f(x) trained on it suffers minimal performance degradation. Towards this end, we utilize the classification margin, that is, the distance to the decision boundary, to evaluate the informativeness of each sample. The |S| examples with the smallest classification margin are selected.
Intuitively, these samples should be the most influential to the model decision. Following (Mickisch et al., 2020; Emam et al., 2021), the decision boundary between two classes c1 and c2 ∈ {1, . . . , C} is B := {x | fc1(x) = fc2(x)}, where fc(x) is the c-th entry of the model output, indicating the probability of x belonging to class c. The classification margin is then:
M(x, c1, c2) = min_δ ∥δ∥2   s.t.   x + δ ∈ B   (1)
which is the minimal perturbation required to move x from c1 to c2. Directly computing the margin is infeasible for deep neural networks, so scoring is conducted in the feature space instead, as in (Emam et al., 2021). Typically, neural networks apply a linear classifier on top of the features (Goodfellow et al., 2016), so the classification margin M(x, c1, c2) can be easily obtained as: M(x, c1, c2) = (Wc1 − Wc2)⊤h(x) / ∥Wc1 − Wc2∥2, where W ∈ R^{d×C} is the weight of the linear classifier 2 and h(x) is the feature of x. In this way, the classification margin of a labeled sample (x, y) along class c is M(x, y, c) if y ≠ c, or min_{c̃≠y} M(x, y, c̃) if y = c. The former indicates the distance moving (x, y) to class c, while the latter is the distance moving (x, y) to the nearest class other than y. To keep the subset balanced, we evenly pick |S|/C samples with the smallest classification margin along each class. The resulting strategy is named margin selection (MS), denoted as MS(w, T , |S|). The procedure is detailed in Algorithm 1 in Appendix A.
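The per-class, feature-space margin selection described above can be sketched as follows. This is only an illustrative NumPy sketch under the stated definitions; the function and variable names are assumptions, not the authors' released code.

```python
import numpy as np

def margin_selection(H, y, W, budget):
    """Pick `budget` samples with the smallest feature-space classification margin.

    H: (n, d) features h(x); y: (n,) integer labels; W: (d, C) linear-classifier weights.
    """
    n, _ = H.shape
    C = W.shape[1]
    scores = H @ W                                                       # logits W_c^T h(x)
    norms = np.linalg.norm(W[:, :, None] - W[:, None, :], axis=0) + 1e-12  # ||W_c1 - W_c2||
    margins = np.full(n, np.inf)
    for c1 in range(C):
        for c2 in range(C):
            if c1 == c2:
                continue
            m = (scores[:, c1] - scores[:, c2]) / norms[c1, c2]          # signed distance
            mask = (y == c1)
            margins[mask] = np.minimum(margins[mask], m[mask])           # nearest other class
    # keep the subset class-balanced: budget // C smallest-margin samples per class
    selected, per_class = [], budget // C
    for c in range(C):
        idx = np.where(y == c)[0]
        selected.extend(idx[np.argsort(margins[idx])[:per_class]])
    return np.array(selected)
```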
2.2 DYNAMIC SELECTION
Given the selected subset, the model is subsequently trained on S. The conventional static training scheme assumes that the optimal subset converges and is not related to the model training dynamics (Paul et al., 2021; Coleman et al., 2020). Though it effectively eliminates instances, the "converged optimal
2Without loss of generality, we omit the bias term for notation clarity
subset" assumption may be too strong. To investigate whether the most informative samples vary during training, we plot the overlap ratio of samples selected in two consecutive selections during the training of ResNet models, shown in Figure 2(a). We train for 200 epochs and 120 epochs on CIFAR-10 and ImageNet respectively, and conduct selection every 10 epochs. It can be observed that the overlap ratio is on average 0.83 for CIFAR-10 and 0.73 for ImageNet rather than 1.0, meaning that samples that most benefit model training vary as the model evolves. A fixed subset may be outdated after parameter updates, thus yielding sub-optimal results.
We thus resort to a dynamic scheme where data selection is performed after each Q epochs training 3. By selecting in conjunction with training, the informative subset gets updated according to the current model status. For the kth selection, the informative subset Sk is constructed by picking portion γk samples so that |Sk| = γk|T |. The selection ratio γk determines the critical margin κk, where only samples with classification margin smaller than κk are kept. Sk will then be used for training Q epochs. In the following, we provide a convergence analysis of DynaMS and show that DynaMS achieves better generalization by constantly improving the selected subset.
Convergence Analysis We now study the conditions for the convergence of training loss achieved by DynaMS. We use logistic regression (LR) to demonstrate and then show the conditions are well satisfied when LR is used on top of deep feature extractors. We have the following theorem:

Theorem. Consider logistic regression f(x) = 1/(1 + e^{−w⊤x}) with N Gaussian training samples x ∼ N(0, Σ), x ∈ R^d. Assume ∥w∥2 ≤ D and N/d < α. Let w∗ be the optimal parameters and λ be the largest eigenvalue of the covariance Σ. For t ∈ {1, . . . , T} and constants ε > D√(λ/2) − 1, ζ > 1, µ ≫ α, select the subset with critical margin κt = (1 + ε) log(ζT − t) and update parameters with learning rate η = DN/(E√T). Then with probability at least 1 − α/µ,

min_t L(wt) − L(w∗) ≤ DE ( 1/T^{1/4} + c_{ε,ζ}/T^{3/4+ε} + c_{ε,ζ,λ}/T^{β} )   (2)

where E = √(dλ)(1 + (2µ)^{1/4}), β = (1+ε)²/(2D²λ) − 1/4, and c_{ε,ζ}, c_{ε,ζ,λ} are constants depending on ε, ζ and λ.
The proof is left in Appendix B. Theorem 2.2 indicates that dynamically selecting data based on the classification margin is able to converge and achieve the optimum w∗ with large probability. The Gaussian input assumption is overly strong in general, but when the linear classifier is adopted on top of a wide enough feature extractor, the condition is well satisfied because an infinitely wide neural network resembles a Gaussian process (Lee et al., 2019; Xiao et al., 2018; de G. Matthews et al., 2018).
Generalization Analysis Recently, (Sorscher et al., 2022) developed an analytic theory for data selection. Assume training data xi ∼ N(0, I) and that there exists an oracle model wo ∈ R^d which generates the labels such that yi = sign(wo⊤xi). Following static selection, when an estimator w is used to pick samples that have a small classification margin, the generalization error takes the form E(α, γ, θ) in the high-dimensional limit. α = |T |/d indicates the abundance of training samples before selection; γ determines the selection budget; and θ = arccos( w⊤wo / (∥w∥2 · ∥wo∥2) ) shows the closeness of the estimator to the oracle. The full set of self-consistent equations characterizing E(α, γ, θ) is given in Appendix C. By solving these equations the generalization error E(α, γ, θ) can be obtained. We then extend it to the dynamic scheme. For the kth selection, we use the model trained on Sk−1 as the estimator wk, which deviates from the oracle by angle θk = arccos( wk−1⊤wo / (∥wk−1∥2 · ∥wo∥2) ), to evaluate and select samples. The resulting subset Sk will be used for subsequent training of model wk+1, which will later be used as an estimator at step k + 1 to produce Sk+1. In this way, the generalization of the dynamic scheme can be obtained by recurrently solving the equations characterizing E(α, γk, θk) with updated keeping ratio γk and estimator deviation θk. Note that in each round of selection, samples are picked with replacement, so the abundance of training samples α is kept fixed. The keeping ratio γk, determining the subset size, can be scheduled freely to meet various requirements.
3For extremely large dataset case where training can be accomplished within just one or a few epochs, the selection can be performed every Q iterations
We compare the generalization of dynamic selection and its static counterpart in Figure 2(b). We show the landscape of E(α, γ, θ) with different γ and θ by solving the generalization equations numerically. α = 3.2 is kept fixed, which means the initial training data is abundant; we use static training with θs = 40◦ and γs = 0.6 as the control group. To make the comparison fair, we make sure (1/K) Σ_{k=1}^{K} γk = γs, so that the averaged number of samples used in the dynamic scheme equals the subset size used in the static scheme. From Figure 2(b), we see that in dynamic selection, the estimator gets constantly improved (θk decreases), so that the subsets get refined and the model achieves better generalization. Discussion on selecting with different α, γ and θ is given in Appendix D.
2.3 PARAMETER SHARING PROXY
With dynamic selection, the number of updates is reduced. However, the computational overhead incurred by data selection undermines its significance, especially when the model is complex and samples are evaluated frequently. Aside from designing efficient selection strategies, previous works explored utilizing a lighter model as a proxy to evaluate the instances so that the problem can be ameliorated. Pretraining a separate proxy and evaluating instances prior to model training (Coleman et al., 2020), however, is insufficient for dynamic selection, as a static proxy cannot catch the dynamics of the underlying model. A proxy that fulfills the requirements of dynamic selection is still absent.
We thus propose the parameter sharing proxy (PSP), where part of the model is used as the proxy. Taking a convolutional neural network as an example, for a layer with kernel W ∈ R^{ci×co×u×u}, where ci, co and u are the number of input filters, the number of output filters and the kernel size respectively, the corresponding proxy kernel is Wproxy = W_{1:p·ci, 1:p·co, :, :}, where p ∈ [0, 1] is a slimming factor. As shown in Figure 3, the proxy kernel is constructed with the first p·ci input channels and the first p·co output channels. A p-times thinner proxy can be obtained by applying p to each layer.
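A minimal PyTorch sketch of this kernel slicing is given below. Note that PyTorch stores convolution weights as (c_out, c_in, u, u); the helper name is an assumption of this sketch, and a real slimmable network additionally keeps separate BatchNorm statistics per width.

```python
import torch
import torch.nn.functional as F

def proxy_conv2d(x, weight, p=0.5, bias=None):
    """Convolve using only the first p*c_in input and p*c_out output channels of `weight`."""
    c_out, c_in = weight.shape[0], weight.shape[1]
    po, pi = max(1, int(p * c_out)), max(1, int(p * c_in))
    w_proxy = weight[:po, :pi, :, :]                 # shared slice of the full kernel
    b_proxy = bias[:po] if bias is not None else None
    return F.conv2d(x[:, :pi], w_proxy, b_proxy, padding=weight.shape[-1] // 2)

# Usage: the proxy reuses a slice of the same tensor, so its gradients flow into the
# shared parameters of the underlying model.
x = torch.randn(2, 16, 8, 8)
weight = torch.randn(32, 16, 3, 3, requires_grad=True)
out_full = F.conv2d(x, weight, padding=1)            # (2, 32, 8, 8)
out_proxy = proxy_conv2d(x, weight, p=0.5)           # (2, 16, 8, 8)
```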
With separate batch normalization for proxy and model, PSP forms a slimmable network (Yu et al., 2019), where multiple models of different widths are jointly trained and they all yield good performance. As the parameters are shared, the proxy can acutely keep up with the model change, thus applicable for dynamic selection. We further investigate the gradients alignment of the proxy and the original model through their cosine similarity:
cos(g, gproxy) = g⊤gproxy / (∥g∥2 · ∥gproxy∥2),  where g = ∇W L(W), gproxy = ∇W L(Wproxy)   (3)
A positive cosine value indicates that gproxy stands on the same side as g, so updates on the proxy and the model benefit each other. We compare the gradient alignment of PSP and a stand-alone proxy in Figure 2(c) on ResNet-50. With p = 0.5, we see that cos(g, gproxy) for PSP is much larger than for the stand-alone proxy. Given the well-aligned gradients, PSP requires fewer training epochs. Overall workflows of DynaMS and DynaMS+PSP are shown in Algorithm 2 and Algorithm 3 of Appendix A. PSP is especially advantageous for large and hard problems. When the data is extremely large, training PSP on a small subset is cheaper than evaluating the extremely large training set with the original model, making it much more efficient. When the task is hard and the model changes rapidly during training, PSP can timely update the informative subset, maximally retaining the model utility.
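The gradient-alignment measure in Equation 3 can be checked on a toy shared kernel as follows. This is only a sketch: the tiny random model and the width-slicing forward are assumptions made for illustration, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(4, 16, 8, 8)
y = torch.randint(0, 10, (4,))
weight = torch.randn(32, 16, 3, 3, requires_grad=True)   # shared conv kernel
head = torch.randn(10, 32, requires_grad=True)

def loss_at_width(p):
    po, pi = int(p * 32), int(p * 16)
    feat = F.conv2d(x[:, :pi], weight[:po, :pi], padding=1).mean(dim=(2, 3))
    return F.cross_entropy(feat @ head[:, :po].t(), y)

g = torch.autograd.grad(loss_at_width(1.0), weight)[0]        # full-model gradient
g_proxy = torch.autograd.grad(loss_at_width(0.5), weight)[0]  # zero outside the shared slice
cos = F.cosine_similarity(g.flatten(), g_proxy.flatten(), dim=0)
print(float(cos))
```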
3 RELATED WORK
Accelerating training by eliminating redundant training instances has long been a research focus in academia. This is accomplished by adopting an effective selection strategy and an appropriate training scheme. We summarize the related literature from these two strands of research in the following.
Selection Strategy Sample selection can be accomplished with various principles. (Loshchilov & Hutter, 2015; Jiang et al., 2019; Paul et al., 2021) tend to pick samples that incur large loss or gradient norm (CE-loss, EL2N, GraNd). (Toneva et al., 2019) inspects the “unforgettable”
examples that are rarely misclassified once learned, and believes these samples can be omitted without much performance degradation. Other works adopt uncertainty: samples with the least prediction confidence are preferred (Settles, 2010). Recently, (Mirzasoleiman et al., 2020; Killamsetty et al., 2021) select subsets that best cover or approximate the full gradient (Craig, GradMatch). However, these require per-sample gradients as well as an additional optimization, which is expensive both in run-time and in memory. Our work utilizes the classification margin to identify informative samples,
which is efficient and can synergistically adapt to various training schemes. Comparison of these strategies is given in Table 1, where d is the dimension of data feature. MS is slightly slower than selection via loss (CE-loss and EL2N), but much more efficient than Craig and GradMatch. Here we consider only the complexity of the selection strategy itself, time spent for feature extraction is not included. Classification margin has been previously explored in the active learning literature (Ducoffe & Precioso, 2018; Emam et al., 2021), here we utilize it for training acceleration.
Training Schemes Data selection brings more options to training. Under the conventional static training scheme (Paul et al., 2021; Toneva et al., 2019; Coleman et al., 2020), data selection is conducted prior to model update, and the informative subset is kept fixed. Contrastively, online batch selection picks batch data each iteration (Loshchilov & Hutter, 2015; Alain et al., 2015; Zhang et al., 2019; Mindermann et al., 2022). Though sufficiently considered the training dynamics, the overly frequent sample evaluation incurs prohibitive computational overhead. Recently, (Killamsetty et al., 2021) tried selecting after several epochs’ training, which is similar to our dynamic scheme. However, the dynamic training scheme is just utilized as a compromise to avoid overly frequent selection. A formal analysis of its advantage over the static scheme is absent.
By systematically considering the selection strategy, the model training, as well as the proxy design, our proposed DynaMS forms an effective data selection framework for efficient training.
4 EXPERIMENTS
In this section, we first analyse the effectiveness of each design ingredient in Section 4.2. Then we compare to state-of-the-art algorithms in Section 4.3. Code is available at https://github.com/ylfzr/DynaMS-subset-selection.
4.1 EXPERIMENTAL SETUP
We conduct experiments on CIFAR-10 Krizhevsky & Hinton (2009) and ImageNet Jia et al. (2009), following standard data pre-processing in He et al. (2016). A brief summarization of the experimental setup is introduced below, while complete hyper-parameter settings and implementation details can be found in Appendix F.
CIFAR-10 Experiments For CIFAR-10, we train ResNet-18 (He et al., 2016) for 200 epochs. Selection is conducted every 10 epochs, so overall there are 19 selections (K = 19). For the subset size, we adopt a simple linear schedule: γk = 1 − k · a for k = 1, . . . ,K, where a determines the reduction ratio. We make sure γavg = (1/K) Σ_{k=1}^{K} γk = γs. In this way, the averaged number of data used in the dynamic scheme (γavg) is kept equal to that of static training (γs) for fair comparison. For 0.6× acceleration, a = 0.042. We conduct experiments on an NVIDIA Ampere A-100.
ImageNet Experiments For ImageNet, we choose ResNet-18 and ResNet-50 as base models. Following the conventions, the total training epoch is 120. Selection is also conducted every 10 epochs, so altogether K = 11. For subset size, aside from the linear schedule, we also explore a power schedule where γk decays following a power law: γk = m · k−r + b for k = 1, 2, . . . ,K. For 0.6× acceleration, we set m = 0.398, r = 0.237 and b = 0.290. Please see Appendix F for
more details. The power schedule reserves more samples in late training, preventing performance degradation caused by over data pruning. We conduct experiments on four NVIDIA Ampere A-100s.
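The two subset-size schedules above can be sketched in a few lines. The constants are taken from the paper (a = 0.042 for CIFAR-10 and the 60%-budget power-schedule values for ImageNet); the paper additionally post-processes the sequence so that γ1 = 1.0 and the average exactly matches the static budget, which this sketch omits.

```python
K_cifar, a = 19, 0.042
linear = [1.0 - k * a for k in range(1, K_cifar + 1)]        # linear schedule

K_imgnet, m, r, b = 11, 0.398, 0.237, 0.290
power = [m * k ** (-r) + b for k in range(1, K_imgnet + 1)]  # power schedule

# both averages should land near the 0.6 budget before post-processing
print(round(sum(linear) / K_cifar, 3), round(sum(power) / K_imgnet, 3))
```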
4.2 ABLATION STUDIES
We use ResNet-50 on ImageNet to illustrate the effect of each ingredient in DynaMS, that is, the classification margin criteria, the dynamic training scheme as well as the parameter sharing proxy.
The effect of classification margin selection To inspect the effect of classification margin selection (MS), we compare MS against two widely applied selection strategies CE-loss (Loshchilov & Hutter, 2015; Jiang et al., 2019) and EL2N (Paul et al., 2021). CE-loss selects samples explicitly through the cross-entropy loss they incur while EL2N picks samples that incur large L2 error. We compare the three under the conventional static scheme so any other factors aside from the selection strategy is excluded. Samples are evaluated after 20 epochs of pretraining. The model is then reinitialized and trained on the selected subset, which contains 60% original samples. As shown in Table 2, MS achieves the best accuracy among the three, validating its effectiveness.
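For reference, the two baseline scoring rules in this ablation can be sketched from a batch of logits as below; samples with the largest scores are kept. This is an illustration of the criteria only, not the exact evaluation pipeline used in the paper.

```python
import numpy as np

def baseline_scores(logits, labels):
    n, C = logits.shape
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                     # softmax probabilities
    ce_loss = -np.log(p[np.arange(n), labels] + 1e-12)    # CE-loss score
    el2n = np.linalg.norm(p - np.eye(C)[labels], axis=1)  # EL2N: L2 error of predictions
    return ce_loss, el2n
```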
The effect of dynamic training We then apply dynamic selection on MS, where the average subset size is also kept to be 60% of the original dataset. From Table 2 we see that DynaMS outperforms MS by 1.67%, which is significant on large scale dataset like ImageNet. The superiority of DynaMS validates that by constantly improving the model and updating the subset, dynamic selection scheme can result in better performance. Note that DynaMS can be more practical since it does not require the 20 epochs training prior to selection as required in the static scheme.
The effect of parameter sharing proxy We now study the parameter sharing proxy (PSP). An effective proxy is supposed to be faithful, and should agilely adapt to model updates. In Figure 4, we plot the Spearman rank correlation as well as the overlap ratio of samples selected with the proxy and with the model. We see that all along the training, the rank correlation is around 0.68, and over 78% of the selected samples are the same, indicating that the proxy and the model are fairly consistent. We then investigate how the complexity of the proxy, measured by floating point operations (FLOPs), affects performance. We enumerate over the slimming factor p ∈ {0.25, 0.5, 0.75, 1.0} to construct proxies of different widths; the corresponding FLOPs are 6.25%, 25.00%, 56.25%, 100% respectively. In Table 3, we see that significant computation reduction can be achieved with moderate performance degradation.
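The two consistency measures reported in Figure 4 can be computed as sketched below (assuming both proxy and model assign a margin score to every sample); the helper name is illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def proxy_consistency(scores_model, scores_proxy, budget):
    rho, _ = spearmanr(scores_model, scores_proxy)        # rank correlation of the scores
    top_m = set(np.argsort(scores_model)[:budget])        # smallest margins are selected
    top_p = set(np.argsort(scores_proxy)[:budget])
    overlap = len(top_m & top_p) / budget                 # overlap ratio of selected sets
    return rho, overlap
```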
4.3 COMPARISONS WITH STATE-OF-THE-ARTS
Finally, we compare DynaMS against various state-of-the-art methods. Aside from CE-loss and EL2N, Random picks samples uniformly at random. GraNd (Paul et al., 2021) select samples that incur large gradient norm. Forget (Toneva et al., 2019) counts how many times a sample is mis-classified (forget) after it is learned. Samples more frequently forgotten are preferred. We evaluate the forget score after 60 epochs training. To avoid noisy evaluation, many of these static selection approaches ensembles networks before selection. The number of ensambled models is given by the subscription. Auto-assist (Zhang et al., 2019) select samples that incur large loss value on a small proxy. Selection is conducted in each iteration thus forming an online batch selection (OLBS) scheme. DynaCE and DynaRandom apply the corresponding selection strategy, but are trained in a dynamic way. CRAIG and GradMatch propose to reweight and select subsets so that they best cover or approximate the full gradient. In the experiments, we use the per-batch variant of CRAIG and
Figure 4: Correlation of proxy and model. (The two panels plot, over training epochs, the Spearman rank correlation and the overlap ratio of the samples selected by the proxy and by the underlying model.)
GradMatch proposed in (Killamsetty et al., 2021) with 10 epoch warm start 4. The two approaches utilize dynamic selection scheme, all the training settings are kept the same as our DynaMS.
In table 4, average accuracy from 5 runs on CIFAR-10 as well as their running time are reported. Due to limited space, the standard deviation is given in Appendix E. We see that DynaMS achieves comparable performance against the strongest baselines (EL2N10, GraNd10, Forget10) while being more efficient. Note that the static methods require pretraining one or several models for 20 epochs before selection. Considering this cost (subscript of the reported running time), the acceleration of these methods is less significant. We also compare two online batch selection methods, OnlineMS and Auto-assist (Zhang et al., 2019). OnlineMS picks samples with MS, but the selection is conducted each iteration. OnlineMS didn’t outperform DynaMS, meaning more frequent selection is not necessary. Rather, selecting at each optimization step incurs prohibitive computational overhead. Auto-assist didn’t get good performance in this experiment. This may results from the overly simple proxy. The logistic regression proxy adopted may not sufficiently evaluate the candidate samples.
4 For CIFAR-10, we use the published implementation from https://github.com/decile-team/cords. For ImageNet, we modify the implementation to the distributed setting.
For ImageNet, we also report the average accuracy from 5 runs as well as their running time. The standard deviation is given in Appendix E DynaMS outperforms all the baselines. For instance, it achieves 68.65% and 74.56% top-1 accuracy given on average 60% samples for ResNet-18 and ResNet-50 respectively, surpassing the most competitive counterpart Forget by 0.81% and 1.06%. Compared to the static methods which require additional pretraining, 60 epochs for Forget and 20 for the others, DynaMS is much more efficient. CRAIG and GradMatch didn’t get good performance on ImageNet. This might because we use the per-batch variant in (Killamsetty et al., 2021), and set batch size 512 in order to fit the per-sample gradients into memory. The per-batch variant treats each mini-batch as one sample and selects mini-batches during the gradient matching process. So a larger batch size means more coarse grain selection which may lead to inferior performance. We also compare a variant DynaRandom. DynaRandom adopts the dynamic selection scheme but a random subset is constructed at each selection. DynaMS outperforms DynaRandom by 1.06% and 1.93% for ResNet-18 and ResNet-50 respectively, indicating that the superiority of DynaMS over static methods comes from effectively identifying informative samples instead of witnessing more data.
ResNet-50 is rather complex and the data evaluation time is non-negligible. We thus apply parameter sharing proxy to reduce the evaluation time. The proxy is 0.5× width so the evaluation requires around 0.25× computation compared to the original model. As the gradients of the proxy and the underlying model are well aligned, we only train DynaMS+PSP for 90 epochs. From table 5, though utilizing a proxy harms performance compared to DynaMS, it still outperforms all the other baselines. Specifically, SVP also uses a proxy for sample evaluation. The proxy, however, is a statically fully trained ResNet-18. The superiority of DynaMS+PSP over SVP shows the necessity of a dynamic proxy that agilely keeps up with the change of underlying model. The advantages of DynaMS+PSP over DynaMS on efficiency can be significant for extremely large scale problems where massive data is available while only a small fraction of data is sufficient for training. To further demonstrate DynaMS, we draw the accuracy curvature of ResNet-50 against different (on average) sample budgets from 60% to 100% in Figure 5. It can be found that our DynaMS consistently outperforms all the other data selection strategies on different budgets. Finally, To get a better understanding of how the selected samples look like and how they change over time, we visualize samples picked in different selection steps along the training. See G for more details.
5 CONCLUSION
In this paper, we propose DynaMS, a general dynamic data selection framework for efficient deep neural network training. DynaMS prefers samples that are close to the classification boundary and the selected "informative" subset is dynamically updated during the model training. DynaMS has a high probability to converge and we pioneer to show both in practice and theory that dynamic selection improves the generalization over previous approaches. Considering the additional computation incurred by selection, we further design a proxy available for dynamic selection. Extensive experiments and analysis are conducted to demonstrate the effectiveness of our strategy.
A APPENDIX
A ALGORITHM PROCEDURE
Algorithm 1 outlines the procedure of margin selection (MS). In MS, distances of the current sample (x, y) to each other class c are computed. If y ̸= c, the classification margin of (x, y) and class c is M(x, y, c), which is the distance of moving x from class y to class c. If y = c, the classification margin is minc̃ ̸=y M(x, y, c̃), which corresponds to the distance moving (x, y) to another class that is the most close to x. For the whole candidate set T , this generates a |T | × C score matrix. After the classification margins are obtained, |S|/C samples with the smallest classification margin along each class are picked. This keeps samples collected in the subset balanced.
Algorithm 1 Margin selection: MS(w, T , γ) Input:
Candidate set T , keeping ratio γ, number of classes C; Network with weights w, including weights of the final classification layer W ;
Output: Selected subset according to the classification margin S.
1: Compute the keeping budget |S| = γ · |T |, initialize the subset S = {}
   // Evaluating: compute the classification margin.
2: for (x, y) ∈ T do
3:   for c = 1 : C do
4:     Compute the classification margin of the sample to the (y, c) boundary:
         M(x, y, c) = min_{c̃≠y} M(x, y, c̃)  if y = c;   M(x, y, c)  if y ≠ c   (4)
5:   end for
6: end for
   // Selecting: pick the samples according to classification margin (Equation 4).
7: for c = 1 : C do
8:   Pick the |S|/C samples with the smallest classification margins M(·): Top_{|S|/C}(c).
9:   S = S ∪ Top_{|S|/C}(c)
10:  Remove the already selected samples from the candidate set: T = T − Top_{|S|/C}(c)
11: end for
Algorithm 2 Dynamic margin selection (DynaMS) Input:
Training data T ; Base network with weights W , learning rate η Keep ratio of each selection γk where k = 1, ...,K, selection interval Q
Output: Model efficiently trained on selected subsets.
1: k = 1; γk = 1, thus Sk = T
2: for epochs t = 1, . . . , T do
3:   if t % Q == 0 then
4:     Select subset: Sk = MS(Wt, T , γk).
5:     k = k + 1
6:   else
7:     Keep subset Sk.
8:   end if
9:   Update W via stochastic gradient descent on Sk.
10: end for
Algorithm 3 Dynamic margin selection (DynaMS) with parameter sharing proxy (PSP) Input:
Training data T ; Base network with weights W , learning rate η Keep ratio of each selection γk where k = 1, ...,K, selection interval Q Slimming factor of the proxy r, thus the proxy weights Wproxy is determined.
Output: Model efficiently trained on selected subsets.
1: k = 1; γk = 1, thus Sk = T
2: for epochs t = 1, . . . , T do
3:   if t % Q == 0 then
4:     Select subset: Sk = MS(Wproxy_t, T , γk).
5:     k = k + 1
6:   else
7:     Keep subset Sk.
8:   end if
9:   Update W via optimizing L(W) + L(Wproxy) on Sk. (Slimmable training)
10: end for
A full workflow of efficient training with the proposed dynamic margin selection (DynaMS) is shown in Algorithm 2. The model is first trained on the full dataset T for Q epochs to warm up. Subset selection kicks in each Q epochs, samples are evaluated with the current model so the informative subset gets updated according to the distance of samples to the classification boundary. After selection, the model is trained on the selected subset until the next selection. The workflow incorporating parameter sharing proxy is shown in Algorithm 3. Different from naive DynaMS, samples are evaluated and selected with the proxy instead of the underlying model. During the Q epochs’ training, the proxy and the original model are updated simultaneously with slimmable training (Yu et al., 2019).
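A self-contained toy sketch of this workflow (Algorithm 3) on random data is given below. The tiny slimmable linear model, the simplified (unnormalized) margin, and all names are assumptions of this sketch and only illustrate the control flow, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, d, C = 512, 32, 4
X, y = torch.randn(N, d), torch.randint(0, C, (N,))
W = torch.randn(d, C, requires_grad=True)                 # shared classifier weight
opt = torch.optim.SGD([W], lr=0.1)

def logits(x, p=1.0):                                     # slimmable forward: first p*d features
    k = max(1, int(p * d))
    return x[:, :k] @ W[:k]

def margin_select(p, keep):                               # smallest (simplified) margin first
    with torch.no_grad():
        s = logits(X, p)
        top2 = s.topk(2, dim=1).values
        margin = torch.where(s.argmax(1) == y, top2[:, 0] - top2[:, 1],
                             s.gather(1, y[:, None]).squeeze(1) - top2[:, 0])
    return margin.argsort()[:keep]

T, Q, subset = 30, 10, torch.arange(N)
for epoch in range(1, T + 1):
    if epoch % Q == 0:
        subset = margin_select(p=0.5, keep=int(0.6 * N))  # proxy (p=0.5) scores the full set
    for i in range(0, len(subset), 64):
        idx = subset[i:i + 64]
        loss = F.cross_entropy(logits(X[idx], 1.0), y[idx]) \
             + F.cross_entropy(logits(X[idx], 0.5), y[idx])   # slimmable training objective
        opt.zero_grad(); loss.backward(); opt.step()
```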
B PROOF FOR THEOREM 2.2
To prove Theorem 2.2, we first inspect the norm of x. We get the following lemma.
Lemma 1. For Gaussian data x ∼ N(0, Σ), let µ > 0, T > 1 be constants, d the dimension of x and λ the largest eigenvalue of the covariance Σ. Then with probability at least 1 − 1/(µTd), ∥x∥2 < √(dλ)(1 + (2µ)^{1/4}) T^{1/4}.
Proof of Lemma 1. For x ∼ N(0, Σ), ∥x∥2² follows a generalized chi-squared distribution. The mean and variance can be computed explicitly as E[∥x∥2²] = tr Σ = Σ_j λj and Var(∥x∥2²) = 2 tr Σ² = 2 Σ_j λj². By Chebyshev's inequality, we have

Pr( ∥x∥2² < Σ_j λj + √(µTd) · √(2 Σ_j λj²) ) > 1 − 1/(µTd)

where µ > 0 and T > 1 are constants and d is the dimension of x. Then, as Σ_j λj + √(µTd) · √(2 Σ_j λj²) ≤ (1 + √(2µT)) dλ, where λ = max_j λj is the largest eigenvalue of the covariance Σ, we have:

Pr( ∥x∥2 < √(dλ)(1 + (2µ)^{1/4}) T^{1/4} ) > 1 − 1/(µTd)   (5)
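As a quick, purely illustrative numerical sanity check of the norm bound in Lemma 1 (all constants below are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d, mu, T, n = 16, 4.0, 10.0, 200000
eigs = rng.uniform(0.5, 2.0, size=d)                 # spectrum of a diagonal Sigma
lam = eigs.max()
x = rng.normal(size=(n, d)) * np.sqrt(eigs)          # samples from N(0, diag(eigs))
bound = np.sqrt(d * lam) * (1 + (2 * mu) ** 0.25) * T ** 0.25
violations = np.mean(np.linalg.norm(x, axis=1) >= bound)
print(violations, 1 / (mu * T * d))                  # empirical rate vs. Lemma 1's 1/(mu*T*d)
```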
Then we can start proving Theorem 2.2.
Theorem. Consider logistic regression f(x) = 1/(1 + e^{−w⊤x}) with N Gaussian training samples x ∼ N(0, Σ), x ∈ R^d. Assume ∥w∥2 ≤ D and N/d < α. Let w∗ be the optimal parameters and λ be the largest eigenvalue of the covariance Σ. For t ∈ {1, . . . , T} and constants ε > D√(λ/2) − 1, ζ > 1, µ ≫ α, select the subset with critical margin κt = (1 + ε) log(ζT − t) and update parameters with learning rate η = DN/(E√T). Then with probability at least 1 − α/µ,

min_t L(wt) − L(w∗) ≤ DE ( 1/T^{1/4} + c_{ε,ζ}/T^{3/4+ε} + c_{ε,ζ,λ}/T^{β} )   (6)

where E = √(dλ)(1 + (2µ)^{1/4}), β = (1+ε)²/(2D²λ) − 1/4, and c_{ε,ζ}, c_{ε,ζ,λ} are constants depending on ε, ζ and λ.
Proof of Theorem 2.2. For logistic regression f(x) = 1/(1 + e^{−w⊤x}) with loss function

L = (1/N) Σ_{i=1}^{N} ℓi = (1/N) Σ_{i=1}^{N} [ −yi log ŷi − (1 − yi) log(1 − ŷi) ]   (7)
where ŷi is the predicted value. The gradient incurred by training on the selected subset is then:

∂Lκ/∂w = (1/N) Σ_{i=1}^{N} (ŷi − yi) xi · I(|w⊤xi| < κ)
For those samples with |w⊤xi| ≥ κ, i.e. the "easy" samples, we have |sgn(yi − 1/2) · w⊤xi| ≥ κ and, with probability at least 1 − 1/(µTd),

∥∂ℓi/∂w∥2 ≤ E·T^{1/4}/(1 + e^{κ})  if sgn(yi − 1/2) · w⊤xi ≥ κ;   ∥∂ℓi/∂w∥2 ≤ E·T^{1/4}  if sgn(yi − 1/2) · w⊤xi ≤ −κ   (8)

where E = √(dλ)(1 + (2µ)^{1/4}). Note that the condition sgn(yi − 1/2) · w⊤xi ≤ −κ means xi is misclassified by w and its margin is at least κ. Denoting the portion of such misclassified samples in the whole training set by r, we have the following estimate of the gradient gap
Errt = ∥ ∂Lκ/∂w − ∂L/∂w ∥2 = (1/N) ∥ Σ_{|w⊤x| ≥ κ} ∂ℓ/∂w(x) ∥2 ≤ E T^{1/4}(1 − γt)/(1 + e^{κt}) + E T^{1/4}(1 − γt) rt   (9)
where γt is the fraction of data kept by selecting with margin κt. The inequality holds with probability at least (1 − 1/(µTd))^N > 1 − α/(µT) because of Equation 8.
Note that Lemma 1 also suggests ∥∂ℓ/∂w∥2 ≤ E · T^{1/4} with large probability, therefore L is highly likely to be Lipschitz continuous with parameter E T^{1/4}. By setting a constant learning rate η = DN/(E√T) and critical margin κt = (1 + ε) log(ζT − t), ζ > 1, we have, with probability at least (1 − α/(µT))^T ≥ 1 − α/µ,

min_t L(wt) − L(w∗) ≤ DE/(N T^{1/4}) + (D/T) Σ_{t=1}^{T−1} Errt
 ≤ DE/(N T^{1/4}) + (DE/T^{3/4}) Σ_{t=1}^{T−1} 1/(ζT − t)^{1+ε} + (DE/T^{3/4}) Σ_{t=1}^{T−1} rt
 ≤ (DE/T^{1/4}) ( 1/N + c_{ε,ζ}/(T^{ε}√T) ) + (DE/T^{3/4}) Σ_{t=1}^{T−1} rt   (10)
The first inequality follows Theorem 1 in (Killamsetty et al., 2021). The last inequality holds because Σ_{t=1}^{T−1} 1/(ζT − t)^{1+ε} ≤ ∫_{(ζ−1)T}^{ζT} 1/s^{1+ε} ds ≤ c_{ε,ζ}/T^{ε} with c_{ε,ζ} = 1/(ε(ζ − 1)^{ε}), for all ε > 0 and ζ > 1.
To bound the sum of classification errors (the last term of Equation 10), we again utilize the data distribution prior. Note that the data points contributing to r are quantified by the following set:
E = {wo⊤x > 0 ∧ w⊤x < −κ} ∪ {wo⊤x < 0 ∧ w⊤x > κ} := E1 ∪ E2

where wo is the oracle classifier such that the true label is generated according to y = sgn(wo⊤x). Letting ϕ denote the probability density function of the standard Gaussian, we see that

r = ∫_E ϕ(x|Σ) dx = 2 ∫_{E1} ϕ(x|Σ) dx ≤ 2 ∫_{w⊤x < −κ} ϕ(x|Σ) dx = 2Φ( −κ/√(w⊤Σw) ) ≤ 2Φ( −κ/(D√λ) )
where λ is the largest eigenvalue of Σ. Therefore, we have the following estimation:
(1/T^{3/4}) Σ_{t=1}^{T−1} rt ≤ (1/T^{3/4}) Σ_{t=1}^{T−1} 2Φ( −κt/(D√λ) )
 ≤ (2/T^{3/4}) Σ_{t=1}^{T−1} ϕ(κt/(D√λ)) / (κt/(D√λ))   (Gaussian upper tail bound)
 = (2D√λ / (√(2π)(1 + ε))) · (1/T^{3/4}) Σ_{t=1}^{T−1} (1/log(ζT − t)) · e^{−((1+ε)²/(2D²λ)) log²(ζT − t)}
 ≤ (2D√λ T^{1/4} / (√(2π)(1 + ε))) · (1/log((ζ − 1)T + 1)) · 1/((ζ − 1)T + 1)^{((1+ε)²/(2D²λ)) log((ζ−1)T+1)}
 ≤ c_{ε,ζ,λ} T^{−β}   (11)
where β = (1+ε)²/(2D²λ) − 1/4 and we assume log((ζ − 1)T + 1) = Ω(1) with respect to T. Together, we prove Theorem 2.2.
C GENERALIZATION
Sorscher et al. (2022) analysed the generalization of the static training scheme in the teacher-student perceptron setting, where the teacher is an "oracle" generating labels. For the training set T = {xi, yi}, i = 1, . . . , |T |, assume xi ∼ N(0, I) and that there exists an oracle model wo ∈ R^d which generates the labels such that yi = sign(wo⊤xi) for all i. Without loss of generality, the oracle is assumed to be drawn from a sphere. Sorscher et al. (2022) work in the high-dimensional limit where |T |, d → ∞ but the ratio α = |T |/d remains O(1). Following the static training scheme, a lower-fidelity estimator westimate, which has angle θ relative to the oracle wo, is used to evaluate the candidate instances, and those with smaller classification margin |westimate⊤xi| along the estimator westimate are picked. The selection results in a subset S. S follows p(z), a truncated Gaussian distribution along westimate, while the other directions are still kept isotropic. More specifically, given a keeping ratio γ, the corresponding selection margin is κ = H⁻¹((1 − γ)/2), and thus the subset distribution along westimate is p(z) = e^{−z²/2}/(√(2π) γ) · Θ(κ − |z|), where Θ(x) is the Heaviside function and H(x) = 1 − Φ(x), with Φ(x) the cumulative distribution function (CDF) of the standard Gaussian.
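The selection margin κ corresponding to a keeping ratio γ in this truncated-Gaussian picture can be computed numerically as sketched below (illustrative only):

```python
from scipy.stats import norm

def kappa_from_gamma(gamma):
    # kappa = H^{-1}((1 - gamma)/2), with H(x) = 1 - Phi(x); isf is exactly that inverse
    return norm.isf((1.0 - gamma) / 2.0)

print(kappa_from_gamma(0.6))   # ~0.84: keeping |z| < 0.84 retains 60% of the Gaussian mass
```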
The generalization error of the model trained on the subset S takes the form E(α, γ, θ). That is, the error is determined by γ the keeping ratio, α which indicates the abundance of training samples before selection, and θ which shows the closeness of the estimator to the oracle model. The full set of self-consistent equations characterizing E(α, γ, θ) is given as
(R − ρ cos θ)/sin²θ = (α/(πΛ)) ⟨ ∫_{−∞}^{ν} dτ exp( −∆(τ, z)/(2Λ²) ) (ν − τ) ⟩_z

(1 − ρ² + R² − 2ρR cos θ)/sin²θ = 2α ⟨ ∫_{−∞}^{ν} dτ [ e^{−(τ−ρz)²/(2(1−ρ²))} / (√(2π)√(1−ρ²)) ] H( Γ(τ, z)/(√(1−ρ²)Λ) ) (ν − τ)² ⟩_z

(ρ − R cos θ)/sin²θ = 2α ⟨ ∫_{−∞}^{ν} dτ [ e^{−(τ−ρz)²/(2(1−ρ²))} / (√(2π)√(1−ρ²)) ] H( Γ(τ, z)/(√(1−ρ²)Λ) ) ((z − ρτ)/(1−ρ²)) (ν − τ) + (1/(2πΛ)) exp( −∆(τ, z)/(2Λ²) ) ((ρR − cos θ)/(1−ρ²)) (ν − τ) ⟩_z
Λ = √ sin2 θ −R2 − ρ2 + 2ρR cos θ
Γ(t, z) = z(ρR− cos θ)− τ(R− ρ cos θ) ∆(t, z) = z2 ( ρ2 + cos2 θ − 2ρR cos θ ) + 2τz(R cos θ − ρ) + τ2 sin2 θ
τ is an auxiliary field introduced by Hubbard-Stratonovich transformation. ⟨·⟩z denotes expectation on p(z). By solving these equations the generalization error can be easily read off as E = cos−1(R)/π, where R = w
⊤wo ∥w∥2·∥wo∥ .
D MORE RESULTS ON GENERALIZATION
Er ro
r
Er ro
r
To better understand the generalization under classification margin selection E(α, γ, θ), we provide more results to individually inspect the effect of (on average) select ratio γavg, initial data abundance α and the closeness of the estimator to the oracle mode θ. As shown in Figure 6(a), we changed γavg from 60% to 50%, thus constructing a smaller selection budget case. In Figure 6(b), we use α = 2.1 instead of α = 3.2 to construct a less abundant data case, where the data before selection is insufficient. In Figure 6(c), we start selecting samples using a better estimator θ = 30◦ instead of θ = 40◦. All the other hyper-parameters aside from the inspected one are kept consistent to those used Figure 2(b), that is, γavg = 0.6, α = 3.2 and θ = 40◦. We see that with various γavg and θ,
DynaMS outperforms its static counterpart. The abundance of initial data, however, significantly
affects. When data is insufficient, data selection, both static as well as dynamic cause obvious performance degradation. Figure 7 shows a even more serious α = 1.7, the generalization landscape is significantly changed and data selection is not recommended in this case.
E COMPARISON WITH STANDARD DEVIATION
We test each method in Table 4 and Table 5 5 times. The averaged accuracy and standard deviation are reported below in Table 6 and Table 7.
F IMPLEMENTATION DETAILS AND HYPER-PARAMETERS
Subset size schedule Dynamic Selection admits more freedom in subset size schedule. In the experiments we consider the linear schedule and the power schedule. For linear schedule, the keeping ratio is determined by γk = 1− k · a for k = 1, 2, . . . ,K, where a determines the sample reduction
ratio. γ is supposed to satisfy γavg = 1K ∑K
k=1 γk = γs where γs is the selection ratio when a static training scheme is applied. Thus 1K ∑K k=1 |Tk| = |S|, meaning the averaged number of data used in the dynamic scheme is kept equal to that of static training.
Aside from the linear scheduler, we also explore a power schedule where γk = m · k−r + b for k = 1, 2, . . . ,K. Power schedule reserves more samples in late training, preventing performance degradation caused by over data pruning. Determining these hyper-parameters m, r, b is a bit tricky, we just require γ1 = 1.0 to warm start and γavg = 1K ∑K k=1 γk = γs for fair comparison. γK should not be overly small, we empirically find γK ≈ γ − 0.1 yield good results. For different budget γs = {0.6, 0.7, 0.8, 0.9} the hyper-parameters are given in Appendix F, Table 8. Post process is carried out to make sure the resulting subset size sequence satisfy the above requirements.
(Killamsetty et al., 2021) utilize a constant schedule, where in each selection the subset size is kept constant as γs · |T |. This schedule however, do not admit selection without replacement. Linear and power schedule are all monotonically decreasing, thus are natural choices considering this. Figure 8 plots the three schedules on γs = 0.6 budget. In this paper we just provide a primary exploration on the subset size schedule, in depth study on the relationship between the subset size and the model performance as well as an automatic way determining the optimal subset size schedule is left for future work.
Hyper-parameters Finally, the detailed hyper-parameters for DynaMS on both CIFAR-10 and ImageNet datasets are shown in Table 8. Note that for DynaMS+PSP, the Max Epochs is set to be 90 on ImageNet.
Table 8: Hyper-parameters of DynaMS for different models on CIFAR-10 and ImageNet.
Hyper-parameters CIFAR-10 ImageNet
ResNet-18 ResNet-18 ResNet-50
Batch Size: 128 | 512 | 512. Init. Learning Rate of W: 0.1 | 0.1 | 0.1. Learning Rate Decay: Stepwise 0.2 | Stepwise 0.1 | Stepwise 0.1. Lr Decay milestones: {60,120,160} | {40,80} | {40,80}. Optimizer: SGD | SGD | SGD. Momentum: 0.9 | 0.9 | 0.9. Nesterov: True | True | True. Weight Decay: 5e-4 | 1e-4 | 1e-4. Max Epochs: 200 | 120 | 120. Selection interval: 10 | 10 | 10.
Power Scheduler -
60%: m = 0.3984, r = 0.2371, b = 0.2895 70%: m = 0.3476, r = 0.2300, b = 0.4275 80%: m = 0.3532, r = 0.1349, b = 0.4978 90%: m = 0.2176, r = 0.1035, b = 0.7078
Linear Scheduler
a = 0.041 60%: a = 0.073 - 70%: a = 0.055 - 80%: a = 0.036 - 90%: a = 0.018
G VISUALIZATION OF DYNAMICALLY SELECTED IMAGES
To get a better understanding of how the selected samples look like and how they change over time, we visualize samples picked in different selection steps along the training. For k = 1, k = 4, k = 7 and k = 10, which corresponds to the 1,4,7 and 10th selection, we randomly visualize selected samples that are absent in the latter selection. E.g. the k = 4 row shows images picked in the 4th selection but not in the 7th selection. From Figure 9, we see that in the early selections, amounts of easy-to-recognize samples are kept. As the training proceeds, these simple images are screened out and the model focuses more on harder samples that are atypical, blurred, or with interfering objects, validating our hypothesis that samples most informative change as the model evolves. Dynamic selection is thus indispensable.
H SUMMARY OF NOTATIONS
Models and Parameters f(·) The model used for classification w Parameters of the model w∗ Optimal model parameter wo Oracle model parameter W Parameter of the linear classifier W Kernel of a convolutional layers g gradient incurred by the model gproxy gradient incurred by the proxy d The dimension of data feature h(·) Feature extractor part of the model f(·) p Slimming factor, deciding the width of the proxy model
Selection schedule a Sample reduction ratio in the linear schedule m, r, b Hyper-parameters controlling the power schedule
Loss Functions L Generic reference to the loss function Data Selection B Decision boundary of linear classifiers
Q Selection interval M The classification margin aka. distance of a sample to decision boundary γk Selection budget, keep ratio of samples for the kth selection γavg The averaged keep ratio of dynamic selection γs Selection budget in static selection. k Selection step K The total number of selections along training E The generalization error of model trained on selected subset θ Relative angle of a model to the oracle model. α Aboundance of data before selection κ Selection margin.
Train t Training epoch T The total number of training epochs, T = Q · (K + 1)
Data Distribution Σ Covariance of a Gaussian distribution λ The largest eigenvalue of the covariance matrix
Hyper-parameters D Upper bound of model parameter norm ε, ζ, µ Constants appear in the convergence bound. | 1. What is the main contribution of the paper regarding data selection strategies?
2. What are the strengths of the proposed method, particularly in terms of efficiency and performance?
3. What are the weaknesses of the paper, especially regarding the need for PSP and training slimming networks?
4. Do you have any concerns about the reporting of results in Figure 2(a)?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes to dynamically select partial training data for efficient learning. The main idea is to update the informative subset according to the samples' margins to the class decision boundary. A parameter sharing proxy strategy is further devised to evaluate instances efficiently. As a result, the proposed method achieves superior performance compared with other SOTA data selection strategies on different budgets.
Strengths And Weaknesses
Strength:
Selecting informative subset by class margin is reasonable, and the generalization of DynaMS has been well supported both in practice and theory.
The performance is superior compared with previous SOTA methods.
This paper is well written and the implementation is described in detail which is easy to follow.
Weaknesses:
The need for PSP is not significant. As mentioned in this paper, the selection procedure is efficient and its overhead is negligible since it is conducted only 19 times during training (200 epochs in total).
Training slimming networks is not efficient because multiple sub-networks are trained separately. For example, compared to the original training time, with a 60% budget, the overall training cost of DynaMS+PSP is 0.6 × (1 + 9/16 + 1/4 + 1/16) > 1, while other methods almost strictly cost 0.6 of the original one.
For the results shown in Figure 2(a), only 10 epoch training on one dataset is not convincing enough. Why not report overlap ratio of all evaluated epochs on both CIFAR-10 and ImageNet?
Clarity, Quality, Novelty And Reproducibility
This paper is presented well and the idea of dynamic selection based on class margin is natural and reasonable. The whole pipeline is novel and easy to reproduce with the detailed implementation. |
ICLR | Title
DynaMS: Dynamic Margin Selection for Efficient Deep Learning
Abstract
The great success of deep learning is largely driven by training over-parameterized models on massive datasets. To avoid excessive computation, extracting and training only on the most informative subset is drawing increasing attention. Nevertheless, it is still an open question how to select such a subset on which the model trained generalizes on par with the full data. In this paper, we propose dynamic margin selection (DynaMS). DynaMS leverages the distance from candidate samples to the classification boundary to construct the subset, and the subset is dynamically updated during model training. We show that DynaMS converges with large probability, and for the first time show both in theory and practice that dynamically updating the subset can result in better generalization. To reduce the additional computation incurred by the selection, a light parameter sharing proxy (PSP) is designed. PSP is able to faithfully evaluate instances following the underlying model, which is necessary for dynamic selection. Extensive analysis and experiments demonstrate the superiority of the proposed approach in data selection against many state-of-the-art counterparts on benchmark datasets.
1 INTRODUCTION
Deep learning has achieved great success owing in part to the availability of huge amounts of data. Learning with such massive data, however, requires clusters of GPUs, special accelerators, and excessive training time. Recent works suggest that eliminating non-essential data presents promising opportunities for efficiency. It is found that a small portion of training samples 1 contributes a majority of the loss (Katharopoulos & Fleuret, 2018; Jiang et al., 2019), so redundant samples can be left out without sacrificing much performance. Besides, the power law nature (Hestness et al., 2017; Kaplan et al., 2020) of model performance with respect to the data volume indicates that loss incurred by data selection can be tiny when the dataset is sufficiently large. In this sense, selecting only the most informative samples can result in better trade-off between efficiency and accuracy.
The first and foremost question for data selection is about the selection strategy. That is, how to efficiently pick training instances that benefit model training most. Various principles have been proposed, including picking samples that incur larger loss or gradient norm (Paul et al., 2021; Coleman et al., 2020), selecting those most likely to be forgotten during training, as well as utilizing subsets that best approximate the full loss (Feldman, 2020) or gradient (Mirzasoleiman et al., 2020; Killamsetty et al., 2021). Aside from selection strategies, existing approaches vary in the training schemes which can be divided roughly into two categories: static ones and dynamic (or adaptive) ones. Static methods (Paul et al., 2021; Coleman et al., 2020; Toneva et al., 2019) decouple the subset selection and the model training, where the subset is constructed ahead and the model is trained on such a fixed subset. Dynamic methods (Mindermann et al., 2022; Killamsetty et al., 2021), however, update the subset in conjunction with the training process. Though effectively eliminates amounts of samples, it is still not well understood how these different training schemes influence the final model.
∗Corresponding author 1We use the terms data, sample, and instance interchangeably
In this paper, we propose dynamic margin selection (DynaMS). For the selection strategy, we inquire the classification margin, namely, the distance to the decision boundary. Intuitively, samples close to the decision boundary influence more and are thus selected. Classification margin explicitly utilizes the observation that the decision boundary is mainly determined by a subset of the data. For the training scheme, we show the subset that benefits training most varies as the model evolves during training, static selection paradigm may be sub-optimal, thus dynamic selection is a better choice. Synergistically integrating classification margin selection and dynamic training, DynaMS is able to converge to the optimal solution with large probability. Moreover, DynaMS admits theoretical generalization analysis. Through the lens of generalization analysis, we show that by catching the training dynamics and progressively improving the subset selected, DynaMS enjoys better generalization compared to its static counterpart.
Though training on subsets greatly reduces the training computation, the overhead introduced by data evaluation undermines its significance. Previous works resort to a lighter proxy model. Utilizing a separate proxy (Coleman et al., 2020), however, is insufficient for dynamic selection, where the proxy is supposed to be able to agilely adapt to model changes. We thus propose the parameter sharing proxy (PSP), where the proxy is constructed by multiplexing part of the underlying model parameters. As parameters are shared all along training, the proxy can acutely keep up with the underlying model. To train the shared network, we utilize slimmable training (Yu et al., 2019), with which a well-performing PSP and the underlying model can be obtained in a single training run. PSP is especially valuable for extremely large-scale, hard problems. For massive training data, screening the informative subset with a light proxy can be much more efficient. For hard problems where the model evolves rapidly, PSP timely updates the informative subset, maximally retaining the model utility.
Extensive experiments are conducted on benchmarks CIFAR-10 and ImageNet. The results show that our proposed DynaMS effectively pick informative subsets, outperforming a number of competitive baselines. Note that though primarily designed for supervised learning tasks, DynaMS is widely applicable as classifiers have become an integral part of many applications including foundation model training (Devlin et al., 2019; Brown et al., 2020; Dosovitskiy et al., 2021; Chen et al., 2020), where hundreds of millions of data are consumed.
In summary, the contributions of this paper are three-folds:
• We establish dynamic margin select (DynaMS), which selects informative subset dynamically according to the classification margin to accelerate the training process. DynaMS converges to its optimal solution with large probability and enjoys better generalization.
• We explore constructing a proxy by multiplexing the underlying model parameters. The resulting efficient PSP is able to agilely keep up with the model all along the training, thus fulfill the requirement of dynamic selection.
• Extensive experiments and ablation studies demonstrate the effectiveness of DynaMS and its superiority over a set of competitive data selection methods.
2 METHODOLOGY
To accelerate training, we propose dynamic margin selection (DynaMS) whose framework is presented in Figure 1. Instances closest to the classification decision boundary are selected for training, and the resulting strategy is named margin selection (MS). We show that the most informative subset changes as the learning proceeds, so that a dynamic selection scheme that progressively improves the subset can result in better generalization. Considering the computational overhead incurred by selection, we then explore parameter sharing proxy (PSP), which utilizes a much lighter proxy model to evaluate samples. PSP is able to faithfully keep up with the underlying model in the dynamics selection scheme. The notations used in this paper are summarized in Appendix H
2.1 SELECTION WITH CLASSIFICATION MARGIN
Given a large training set T = {xi, yi}|T |i=1, data selection extracts the most informative subset S ⊂ T trained on which the model f(x) yields minimal performance degradation. Towards this end, we utilize the classification margin, that is, the distance to the decision boundary, to evaluate the informativeness of each sample. |S| examples with the smallest classification margin are selected.
Intuitively, these samples should be influential most to the model decision. Following (Mickisch et al., 2020; Emam et al., 2021), the decision boundary between two classes c1 and c2 ∈ {1, . . . C} is B := {x | fc1(x) = fc2(x)}, where fc(x) is the c entry of model output, indicating the probability of x belonging to class c. The classification margin is then:
M(x, c1, c2) = min_δ ∥δ∥2   s.t.   x + δ ∈ B   (1)
which is the minimal perturbation required to move x from c1 to c2. Directly computing the margin is infeasible for deep neural networks, so scoring is conducted in the feature space instead, as in (Emam et al., 2021). Typically, neural networks apply a linear classifier on top of the features (Goodfellow et al., 2016), so the classification margin M(x, c1, c2) can be easily obtained as: M(x, c1, c2) = (Wc1 − Wc2)⊤h(x) / ∥Wc1 − Wc2∥2, where W ∈ R^{d×C} is the weight of the linear classifier 2 and h(x) is the feature of x. In this way, the classification margin of a labeled sample (x, y) along class c is M(x, y, c) if y ≠ c, or min_{c̃≠y} M(x, y, c̃) if y = c. The former indicates the distance moving (x, y) to class c, while the latter is the distance moving (x, y) to the nearest class other than y. To keep the subset balanced, we evenly pick |S|/C samples with the smallest classification margin along each class. The resulting strategy is named margin selection (MS), denoted as MS(w, T , |S|). The procedure is detailed in Algorithm 1 in Appendix A.
2.2 DYNAMIC SELECTION
Given the subset selected, model is subsequently trained on S. Conventional static training scheme assumes that the optimal subset converges and is not related to the model training dynamic (Paul et al., 2021; Coleman et al., 2020). Though effectively eliminate instances, the "converged optimal
2Without loss of generality, we omit the bias term for notation clarity
subset" assumption may be too strong. To investigate whether the most informative samples vary during training, we plot the overlap ratio of samples selected in two consecutive selections during the training of ResNet models, shown in Figure 2(a). We train for 200 epochs and 120 epochs on CIFAR-10 and ImageNet respectively, and conduct selection every 10 epochs. It can be observed that the overlap ratio is on average 0.83 for CIFAR-10 and 0.73 for ImageNet rather than 1.0, meaning that samples that most benefit model training vary as the model evolves. A fixed subset may be outdated after parameter updates, thus yielding sub-optimal results.
We thus resort to a dynamic scheme where data selection is performed after every Q epochs of training³. By selecting in conjunction with training, the informative subset gets updated according to the current model status. For the k-th selection, the informative subset Sk is constructed by picking a portion γk of the samples so that |Sk| = γk|T|. The selection ratio γk determines the critical margin κk, and only samples with classification margin smaller than κk are kept. Sk will then be used for the next Q epochs of training. In the following, we provide a convergence analysis of DynaMS and show that DynaMS achieves better generalization by constantly improving the selected subset.
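Before the analysis, the scheme just described can be summarized as a short sketch; the scoring and training callbacks (`margins_fn`, `train_fn`) are assumed to be supplied by the surrounding training code and are not part of the paper's released interface.

```python
import numpy as np

def dynams_loop(margins_fn, train_fn, full_idx, total_epochs, Q, gammas):
    """Dynamic selection sketch: every Q epochs, re-score all candidates and keep
    the gamma_k fraction with the smallest classification margin."""
    full_idx = np.asarray(full_idx)
    subset, k = full_idx, 0
    for epoch in range(total_epochs):
        if epoch > 0 and epoch % Q == 0 and k < len(gammas):
            margins = margins_fn(full_idx)             # feature-space margins (Section 2.1)
            kappa = np.quantile(margins, gammas[k])    # critical margin kappa_k for ratio gamma_k
            subset = full_idx[margins < kappa]         # keep only the small-margin samples
            k += 1
        train_fn(subset)                               # one epoch of SGD on the current subset
    return subset
```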
Convergence Analysis We now study the conditions for the convergence of the training loss achieved by DynaMS. We use logistic regression (LR) to demonstrate, and then show the conditions are well satisfied when LR is used on top of deep feature extractors. We have the following theorem:

Theorem. Consider logistic regression f(x) = 1/(1 + e^{−w^⊤x}) with N Gaussian training samples x ∼ N(0, Σ), x ∈ R^d. Assume ∥w∥_2 ≤ D and N/d < α. Let w* be the optimal parameters and λ the largest eigenvalue of the covariance Σ. For t ∈ {1, . . . , T} and constants ε > D√(λ/2) − 1, ζ > 1, µ ≫ α, select the subset with critical margin κ_t = (1 + ε) log(ζT − t) and update parameters with learning rate η = DN/(E√T). Then with probability at least 1 − α/µ,

min_t L(w_t) − L(w*) ≤ DE ( 1/T^{1/4} + c_{ε,ζ}/T^{3/4+ε} + c_{ε,ζ,λ}/T^β )        (2)

where E = √(dλ)(1 + (2µ)^{1/4}), β = (1+ε)²/(2D²λ) − 1/4, and c_{ε,ζ}, c_{ε,ζ,λ} are constants depending on ε, ζ and λ.
The proof is left in Appendix B. Theorem 2.2 indicates that dynamically selecting data based on the classification margin is able to converge and achieve the optimum w* with large probability. The Gaussian input assumption is overly strong in general, but when the linear classifier is adopted on top of a wide enough feature extractor, the condition is well satisfied because an infinitely wide neural network resembles a Gaussian process (Lee et al., 2019; Xiao et al., 2018; de G. Matthews et al., 2018).
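To make the schedule in the theorem concrete, the critical margin κ_t = (1 + ε) log(ζT − t) shrinks as training proceeds, so later selections keep only samples closer to the boundary. A quick numerical check with assumed constants (ε, ζ and the horizon T are illustrative values, not the paper's settings):

```python
import numpy as np

T, eps, zeta = 100, 0.5, 1.5                     # assumed horizon and constants
t = np.arange(1, T)
kappa = (1 + eps) * np.log(zeta * T - t)         # critical margin schedule from Theorem 2.2
print(round(kappa[0], 2), round(kappa[-1], 2))   # roughly 7.5 at t=1, shrinking to about 5.9
```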
Generalization Analysis Recently, (Sorscher et al., 2022) developed an analytic theory for data selection. Assume training data x_i ∼ N(0, I) and that there exists an oracle model w_o ∈ R^d which generates the labels such that y_i = sign(w_o^⊤ x_i). Following static selection, when an estimator w is used to pick samples that have a small classification margin, the generalization error takes the form E(α, γ, θ) in the high dimensional limit. Here α = |T|/d indicates the abundance of training samples before selection; γ determines the selection budget; and θ = arccos( w^⊤w_o / (∥w∥_2 ∥w_o∥_2) ) shows the closeness of the estimator to the oracle. The full set of self-consistent equations characterizing E(α, γ, θ) is given in Appendix C. By solving these equations the generalization error E(α, γ, θ) can be obtained. We then extend it to the dynamic scheme. For the k-th selection, we use the model trained on S_{k−1} as the estimator w_k, which deviates from the oracle by angle θ_k = arccos( w_{k−1}^⊤ w_o / (∥w_{k−1}∥_2 ∥w_o∥_2) ), to evaluate and select samples. The resulting subset S_k will be used for subsequent training of model w_{k+1}, which will later be used as an estimator at step k + 1 to produce S_{k+1}. In this way, the generalization of the dynamic scheme can be obtained by recurrently solving the equations characterizing E(α, γ_k, θ_k) with updated keeping ratio γ_k and estimator deviation θ_k. Note that in each round of selection, samples are picked with replacement, so the abundance of training samples α is kept fixed. The keeping ratio γ_k, determining the subset size, can be scheduled freely to meet various requirements.
³For the extremely large dataset case, where training can be accomplished within just one or a few epochs, the selection can be performed every Q iterations instead.
We compare the generalization of dynamic selection and its static counterpart in Figure 2(b). We show the landscape of E(α, γ, θ) with different γ and θ by solving the generalization equations numerically. α = 3.2 is kept fixed, which means the initial training data is abundant. We use static training with θ_s = 40° and γ_s = 0.6 as the control group. To make the comparison fair, we make sure (1/K) Σ_{k=1}^{K} γ_k = γ_s, so that the averaged number of samples used in the dynamic scheme equals the subset size used in the static scheme. From Figure 2(b), we see that in dynamic selection the estimator gets constantly improved (θ_k decreases), so the subsets get refined and the model achieves better generalization. A discussion on selecting with different α, γ and θ is given in Appendix D.
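The recurrence used here can be sketched as a small loop; `solve_E` stands in for a numerical solver of the self-consistent equations in Appendix C (assumed to return the error and the updated estimator angle) and is not part of the paper's released code.

```python
def dynamic_generalization(solve_E, alpha, gammas, theta0):
    """Recurrently evaluate E(alpha, gamma_k, theta_k) for a dynamic selection schedule.
    solve_E(alpha, gamma, theta) -> (error, next_theta) solves the Appendix C equations."""
    theta, errors = theta0, []
    for gamma in gammas:                              # one round of selection per keeping ratio
        err, theta = solve_E(alpha, gamma, theta)     # subset k refines the estimator for round k+1
        errors.append(err)
    return errors
```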
2.3 PARAMETER SHARING PROXY
With dynamic selection, the number of updates is reduced. However, the computational overhead incurred by data selection undermines its significance, especially when the model is complex and samples are evaluated frequently. Aside from designing efficient selection strategies, previous works have explored utilizing a lighter model as a proxy to evaluate the instances so that this problem can be ameliorated. Pretraining a separate proxy and evaluating instances prior to model training (Coleman et al., 2020), however, is insufficient for dynamic selection, as a static proxy cannot catch the dynamics of the underlying model. A proxy that fulfills the requirements of dynamic selection is still absent.
We thus propose the parameter sharing proxy (PSP), where part of the model is used as the proxy. Taking a convolutional neural network as an example, for a layer with kernel W ∈ R^{c_i×c_o×u×u}, where c_i, c_o and u are the number of input filters, the number of output filters and the kernel size respectively, the corresponding kernel of the proxy is W_proxy = W_{1:pc_i, 1:pc_o, :, :}, where p ∈ [0, 1] is a slimming factor. As shown in Figure 3, the proxy kernel is constructed with the first pc_i input channels and the first pc_o output channels. A p-times thinner proxy can be obtained by applying p to each layer.
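A minimal PyTorch-style sketch of running such a sliced proxy convolution is given below; it follows the (c_out, c_in, u, u) weight layout of torch.nn.Conv2d rather than the (c_i, c_o, u, u) notation above, and the function name is our own. In the actual slimmable implementation each width additionally keeps its own batch normalization statistics.

```python
import torch
import torch.nn.functional as F

def proxy_conv(x, weight, p=0.5):
    """Proxy convolution reusing the first p*c_out output and p*c_in input channels
    of the shared full kernel `weight` (shape: c_out x c_in x u x u)."""
    c_out, c_in = weight.shape[:2]
    w_proxy = weight[: int(p * c_out), : int(p * c_in)]       # sliced shared kernel
    return F.conv2d(x[:, : int(p * c_in)], w_proxy, padding=weight.shape[-1] // 2)

x = torch.randn(2, 64, 32, 32)    # activations with the full c_in = 64 channels
w = torch.randn(128, 64, 3, 3)    # shared full kernel: c_out = 128, c_in = 64, u = 3
y = proxy_conv(x, w, p=0.5)       # uses 32 input / 64 output channels -> (2, 64, 32, 32)
```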
With separate batch normalization for the proxy and the model, PSP forms a slimmable network (Yu et al., 2019), where multiple models of different widths are jointly trained and all yield good performance. As the parameters are shared, the proxy can closely keep up with changes of the model, and is thus applicable for dynamic selection. We further investigate the gradient alignment of the proxy and the original model through their cosine similarity:
cos(g, g_proxy) = g^⊤ g_proxy / (∥g∥_2 · ∥g_proxy∥_2),   where g = ∇_W L(W),  g_proxy = ∇_W L(W_proxy)        (3)
A positive cosine value indicates that g_proxy lies on the same side as g, so updates on the proxy and the model benefit each other. We compare the gradient alignment of PSP and a stand-alone proxy in Figure 2(c) on ResNet-50. With p = 0.5, we see that cos(g, g_proxy) for PSP is much larger than for the stand-alone proxy. Given the well-aligned gradients, PSP requires fewer training epochs. Overall workflows of DynaMS and DynaMS+PSP are shown in Algorithm 2 and Algorithm 3 of Appendix A. PSP is especially advantageous for large and hard problems. When the data is extremely large, training PSP on a small subset is cheaper than evaluating the extremely large training set with the original model, making it much more efficient. When the task is hard and the model changes rapidly during training, PSP can promptly update the informative subset, maximally retaining the model utility.
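The alignment in Equation 3 can be checked with a short sketch; it assumes both losses depend on every shared parameter (true when the proxy is obtained by slicing shared tensors), and the function name is ours.

```python
import torch

def grad_cosine(shared_params, loss_model, loss_proxy):
    """Cosine similarity between the model and proxy gradients on shared parameters (Eq. 3)."""
    g = torch.autograd.grad(loss_model, shared_params, retain_graph=True)
    gp = torch.autograd.grad(loss_proxy, shared_params, retain_graph=True)
    g = torch.cat([v.flatten() for v in g])
    gp = torch.cat([v.flatten() for v in gp])
    return torch.dot(g, gp) / (g.norm() * gp.norm() + 1e-12)
```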
3 RELATED WORK
Accelerating training by eliminating redundant training instances has long been a research focus in academia. This is accomplished by adopting an effective selection strategy and an appropriate training scheme. We summarize the related literature from these two strands of research in the following.
Selection Strategy Sample selection can be accomplished with various principles. (Loshchilov & Hutter, 2015; Jiang et al., 2019; Paul et al., 2021) tend to pick samples that incur large loss or gradient norm (CE-loss, EL2N, GraNd). (Toneva et al., 2019) inspects the “unforgettable”
examples that are rarely misclassified once learned, and believes these samples can be omitted without much performance degradation. Other works adopt uncertainty: samples with the least prediction confidence are preferred (Settles, 2010). Recently, (Mirzasoleiman et al., 2020; Killamsetty et al., 2021) select subsets that best cover or approximate the full gradient (Craig, GradMatch). However, these require per-sample gradients as well as an additional optimization, which is expensive both in run-time and in memory. Our work utilizes the classification margin to identify informative samples, which is efficient and can synergistically adapt to various training schemes. A comparison of these strategies is given in Table 1, where d is the dimension of the data feature. MS is slightly slower than selection via loss (CE-loss and EL2N), but much more efficient than Craig and GradMatch. Here we consider only the complexity of the selection strategy itself; the time spent on feature extraction is not included. The classification margin has been previously explored in the active learning literature (Ducoffe & Precioso, 2018; Emam et al., 2021); here we utilize it for training acceleration.
Training Schemes Data selection brings more options to training. Under the conventional static training scheme (Paul et al., 2021; Toneva et al., 2019; Coleman et al., 2020), data selection is conducted prior to model update, and the informative subset is kept fixed. Contrastively, online batch selection picks batch data at each iteration (Loshchilov & Hutter, 2015; Alain et al., 2015; Zhang et al., 2019; Mindermann et al., 2022). Though it sufficiently considers the training dynamics, the overly frequent sample evaluation incurs prohibitive computational overhead. Recently, (Killamsetty et al., 2021) tried selecting after several epochs’ training, which is similar to our dynamic scheme. However, the dynamic training scheme there is utilized only as a compromise to avoid overly frequent selection; a formal analysis of its advantage over the static scheme is absent.
By systematically considering the selection strategy, the model training, as well as the proxy design, our proposed DynaMS forms an effective data selection framework for efficient training.
4 EXPERIMENTS
In this section, we first analyse the effectiveness of each design ingredient in Section 4.2. Then we compare to state-of-the-art algorithms in Section 4.3. Code is available at https://github.com/ylfzr/DynaMS-subset-selection.
4.1 EXPERIMENTAL SETUP
We conduct experiments on CIFAR-10 (Krizhevsky & Hinton, 2009) and ImageNet (Jia et al., 2009), following the standard data pre-processing in He et al. (2016). A brief summary of the experimental setup is given below, while complete hyper-parameter settings and implementation details can be found in Appendix F.
CIFAR-10 Experiments For CIFAR-10, we train ResNet-18 (He et al., 2016) for 200 epochs. Selection is conducted every 10 epochs, so overall there are 19 selections (K = 19). For the subset size, we adopt a simple linear schedule: γ_k = 1 − k · a for k = 1, . . . , K, where a determines the reduction ratio. We make sure γ_avg = (1/K) Σ_{k=1}^{K} γ_k = γ_s. In this way, the averaged number of data used in the dynamic scheme (γ_avg) is kept equal to that of static training (γ_s) for fair comparison. For 0.6× acceleration, a = 0.042. We conduct experiments on an NVIDIA Ampere A-100.
ImageNet Experiments For ImageNet, we choose ResNet-18 and ResNet-50 as base models. Following convention, the total number of training epochs is 120. Selection is also conducted every 10 epochs, so altogether K = 11. For the subset size, aside from the linear schedule, we also explore a power schedule where γ_k decays following a power law: γ_k = m · k^{−r} + b for k = 1, 2, . . . , K. For 0.6× acceleration, we set m = 0.398, r = 0.237 and b = 0.290. Please see Appendix F for more details. The power schedule reserves more samples in late training, preventing performance degradation caused by over-aggressive data pruning. We conduct experiments on four NVIDIA Ampere A-100s.
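The two subset-size schedules are easy to write down. The sketch below uses the constants reported above for the 0.6× ImageNet budget and simply checks that, once the initial full-data epochs are counted, the average keeping ratio lands near γ_s = 0.6; it is an illustration, not the released schedule code.

```python
import numpy as np

K = 11                                    # number of selections on ImageNet (120 epochs, Q = 10)
k = np.arange(1, K + 1)
linear = 1.0 - 0.073 * k                  # linear schedule gamma_k = 1 - k*a (a for the 0.6x budget)
power = 0.398 * k ** (-0.237) + 0.290     # power schedule gamma_k = m*k^-r + b (0.6x budget)
print(np.mean(np.r_[1.0, linear]))        # ~0.60, counting the first Q epochs on the full set
print(np.mean(np.r_[1.0, power]))         # ~0.60 as well
```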
4.2 ABLATION STUDIES
We use ResNet-50 on ImageNet to illustrate the effect of each ingredient in DynaMS, that is, the classification margin criterion, the dynamic training scheme, as well as the parameter sharing proxy.
The effect of classification margin selection To inspect the effect of classification margin selection (MS), we compare MS against two widely applied selection strategies, CE-loss (Loshchilov & Hutter, 2015; Jiang et al., 2019) and EL2N (Paul et al., 2021). CE-loss selects samples explicitly through the cross-entropy loss they incur, while EL2N picks samples that incur a large L2 error. We compare the three under the conventional static scheme so that any factor other than the selection strategy is excluded. Samples are evaluated after 20 epochs of pretraining. The model is then reinitialized and trained on the selected subset, which contains 60% of the original samples. As shown in Table 2, MS achieves the best accuracy among the three, validating its effectiveness.
The effect of dynamic training We then apply dynamic selection on top of MS, where the average subset size is also kept to 60% of the original dataset. From Table 2 we see that DynaMS outperforms MS by 1.67%, which is significant on a large-scale dataset like ImageNet. The superiority of DynaMS validates that, by constantly improving the model and updating the subset, the dynamic selection scheme can result in better performance. Note that DynaMS can be more practical since it does not require the 20 epochs of training prior to selection as required in the static scheme.
The effect of parameter sharing proxy We now study the parameter sharing proxy (PSP). An effective proxy is supposed to be faithful and able to agilely adapt to model updates. In Figure 4, we plot the Spearman rank correlation as well as the overlap ratio of samples selected with the proxy and with the model. We see that, all along the training, the rank correlation is around 0.68 and over 78% of the selected samples are the same, indicating that the proxy and the model are fairly consistent. We then investigate how the complexity of the proxy, measured by floating point operations (FLOPs), affects performance. We enumerate over the slimming factor p ∈ {0.25, 0.5, 0.75, 1.0} to construct proxies of different widths; the corresponding FLOPs are 6.25%, 25.00%, 56.25% and 100% respectively. In Table 3, we see that significant computation reduction can be achieved with moderate performance degradation.
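The FLOPs figures above follow directly from the slicing rule: a convolution's cost scales with the product of input and output channel counts, so a proxy with slimming factor p costs roughly p² of the full model. A back-of-the-envelope check:

```python
for p in (0.25, 0.5, 0.75, 1.0):
    print(p, f"{p * p:.2%}")   # 6.25%, 25.00%, 56.25%, 100.00% of the full-model FLOPs
```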
4.3 COMPARISONS WITH STATE-OF-THE-ARTS
Finally, we compare DynaMS against various state-of-the-art methods. Aside from CE-loss and EL2N, Random picks samples uniformly at random. GraNd (Paul et al., 2021) selects samples that incur a large gradient norm. Forget (Toneva et al., 2019) counts how many times a sample is misclassified (forgotten) after it is learned; samples more frequently forgotten are preferred. We evaluate the forget score after 60 epochs of training. To avoid noisy evaluation, many of these static selection approaches ensemble networks before selection; the number of ensembled models is given by the subscript. Auto-assist (Zhang et al., 2019) selects samples that incur a large loss value on a small proxy. Selection is conducted in each iteration, thus forming an online batch selection (OLBS) scheme. DynaCE and DynaRandom apply the corresponding selection strategy, but are trained in a dynamic way. CRAIG and GradMatch propose to reweight and select subsets so that they best cover or approximate the full gradient. In the experiments, we use the per-batch variant of CRAIG and
Figure 4: Correlation of proxy and model (Spearman rank correlation and overlap ratio of the selected samples, plotted against training epochs).
GradMatch proposed in (Killamsetty et al., 2021) with a 10-epoch warm start⁴. The two approaches utilize a dynamic selection scheme; all the training settings are kept the same as for our DynaMS.
In Table 4, the average accuracy from 5 runs on CIFAR-10 as well as the running times are reported. Due to limited space, the standard deviation is given in Appendix E. We see that DynaMS achieves comparable performance against the strongest baselines (EL2N10, GraNd10, Forget10) while being more efficient. Note that the static methods require pretraining one or several models for 20 epochs before selection. Considering this cost (subscript of the reported running time), the acceleration of these methods is less significant. We also compare two online batch selection methods, OnlineMS and Auto-assist (Zhang et al., 2019). OnlineMS picks samples with MS, but the selection is conducted at each iteration. OnlineMS did not outperform DynaMS, meaning more frequent selection is not necessary; rather, selecting at each optimization step incurs prohibitive computational overhead. Auto-assist did not get good performance in this experiment. This may result from the overly simple proxy: the logistic regression proxy adopted may not sufficiently evaluate the candidate samples.
⁴For CIFAR-10, we use the published implementation from https://github.com/decile-team/cords. For ImageNet, we modify the implementation to the distributed setting.
For ImageNet, we also report the average accuracy from 5 runs as well as the running times; the standard deviation is given in Appendix E. DynaMS outperforms all the baselines. For instance, it achieves 68.65% and 74.56% top-1 accuracy given on average 60% of the samples for ResNet-18 and ResNet-50 respectively, surpassing the most competitive counterpart, Forget, by 0.81% and 1.06%. Compared to the static methods, which require additional pretraining (60 epochs for Forget and 20 for the others), DynaMS is much more efficient. CRAIG and GradMatch did not get good performance on ImageNet. This might be because we use the per-batch variant in (Killamsetty et al., 2021) and set the batch size to 512 in order to fit the per-sample gradients into memory. The per-batch variant treats each mini-batch as one sample and selects mini-batches during the gradient matching process, so a larger batch size means coarser-grained selection, which may lead to inferior performance. We also compare a variant DynaRandom. DynaRandom adopts the dynamic selection scheme but a random subset is constructed at each selection. DynaMS outperforms DynaRandom by 1.06% and 1.93% for ResNet-18 and ResNet-50 respectively, indicating that the superiority of DynaMS over static methods comes from effectively identifying informative samples rather than from merely witnessing more data.
ResNet-50 is rather complex and the data evaluation time is non-negligible. We thus apply the parameter sharing proxy to reduce the evaluation time. The proxy is 0.5× width, so the evaluation requires around 0.25× the computation of the original model. As the gradients of the proxy and the underlying model are well aligned, we only train DynaMS+PSP for 90 epochs. From Table 5, though utilizing a proxy harms performance compared to DynaMS, it still outperforms all the other baselines. Specifically, SVP also uses a proxy for sample evaluation; the proxy, however, is a statically and fully trained ResNet-18. The superiority of DynaMS+PSP over SVP shows the necessity of a dynamic proxy that agilely keeps up with the change of the underlying model. The advantage of DynaMS+PSP over DynaMS in efficiency can be significant for extremely large-scale problems where massive data is available while only a small fraction of it is sufficient for training. To further demonstrate DynaMS, we draw the accuracy curve of ResNet-50 against different (on average) sample budgets from 60% to 100% in Figure 5. It can be found that our DynaMS consistently outperforms all the other data selection strategies on different budgets. Finally, to get a better understanding of what the selected samples look like and how they change over time, we visualize samples picked in different selection steps along the training; see Appendix G for more details.
5 CONCLUSION
In this paper, we propose DynaMS, a general dynamic data selection framework for efficient deep neural network training. DynaMS prefers samples that are close to the classification boundary, and the selected "informative" subset is dynamically updated during model training. DynaMS converges with high probability, and we are the first to show, both in practice and in theory, that dynamic selection improves generalization over previous approaches. Considering the additional computation incurred by selection, we further design a proxy suitable for dynamic selection. Extensive experiments and analysis are conducted to demonstrate the effectiveness of our strategy.
A ALGORITHM PROCEDURE
Algorithm 1 outlines the procedure of margin selection (MS). In MS, the distances of the current sample (x, y) to each other class c are computed. If y ≠ c, the classification margin of (x, y) with respect to class c is M(x, y, c), which is the distance of moving x from class y to class c. If y = c, the classification margin is min_{c̃≠y} M(x, y, c̃), which corresponds to the distance of moving (x, y) to the class closest to x other than y. For the whole candidate set T, this generates a |T| × C score matrix. After the classification margins are obtained, the |S|/C samples with the smallest classification margin along each class are picked. This keeps the selected subset class-balanced.
Algorithm 1 Margin selection: MS(w, T , γ)
Input: Candidate set T , keeping ratio γ, number of classes C; network with weights w, including the weights of the final classification layer W .
Output: Subset S selected according to the classification margin.
1: Compute the keeping budget |S| = γ · |T |; initialize the subset S = {}.
   // Evaluating: compute the classification margin.
2: for (x, y) ∈ T do
3:    for c = 1 : C do
4:       Compute the classification margin of the sample to the (y, c) boundary:
            M(x, y, c) = min_{c̃≠y} M(x, y, c̃)  if y = c;    M(x, y, c)  if y ≠ c        (4)
5:    end for
6: end for
   // Selecting: pick the samples according to the classification margin (Equation 4).
7: for c = 1 : C do
8:    Pick the |S|/C samples with the smallest classification margins: Top_{|S|/C}(c).
9:    S = S ∪ Top_{|S|/C}(c)
10:   Remove the already selected samples from the candidate set: T = T − Top_{|S|/C}(c)
11: end for
Algorithm 2 Dynamic margin selection (DynaMS)
Input: Training data T ; base network with weights W, learning rate η; keep ratio of each selection γ_k for k = 1, . . . , K; selection interval Q.
Output: Model efficiently trained on selected subsets.
1: k = 1; γ_k = 1, thus S_k = T
2: for epochs t = 1, . . . , T do
3:    if t % Q == 0 then
4:       Select subset S_k = MS(W_t, T , γ_k).
5:       k = k + 1
6:    else
7:       Keep subset S_k.
8:    end if
9:    Update W via stochastic gradient descent on S_k.
10: end for
Algorithm 3 Dynamic margin selection (DynaMS) with parameter sharing proxy (PSP)
Input: Training data T ; base network with weights W, learning rate η; keep ratio of each selection γ_k for k = 1, . . . , K; selection interval Q; slimming factor of the proxy p, which determines the proxy weights W_proxy.
Output: Model efficiently trained on selected subsets.
1: k = 1; γ_k = 1, thus S_k = T
2: for epochs t = 1, . . . , T do
3:    if t % Q == 0 then
4:       Select subset S_k = MS(W_t^proxy, T , γ_k).
5:       k = k + 1
6:    else
7:       Keep subset S_k.
8:    end if
9:    Update W via optimizing L(W) + L(W_proxy) on S_k (slimmable training).
10: end for
A full workflow of efficient training with the proposed dynamic margin selection (DynaMS) is shown in Algorithm 2. The model is first trained on the full dataset T for Q epochs to warm up. Subset selection kicks in every Q epochs: samples are evaluated with the current model, so the informative subset gets updated according to the distance of the samples to the classification boundary. After selection, the model is trained on the selected subset until the next selection. The workflow incorporating the parameter sharing proxy is shown in Algorithm 3. Different from plain DynaMS, samples are evaluated and selected with the proxy instead of the underlying model. During the Q epochs of training, the proxy and the original model are updated simultaneously with slimmable training (Yu et al., 2019).
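One joint update of the model and the proxy (the slimmable-training step used in Algorithm 3) might look like the sketch below; `model(x, width=p)` running a forward pass through the sliced sub-network is an assumption about the implementation, not the released code.

```python
import torch.nn.functional as F

def joint_step(model, optimizer, x, y, p=0.5):
    """One slimmable-training step: optimize L(W) + L(W_proxy) on the shared weights."""
    optimizer.zero_grad()
    loss_full = F.cross_entropy(model(x), y)             # full-width forward pass
    loss_proxy = F.cross_entropy(model(x, width=p), y)   # proxy forward pass on the shared slice
    (loss_full + loss_proxy).backward()                   # gradients accumulate on shared parameters
    optimizer.step()
```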
B PROOF FOR THEOREM 2.2
To prove Theorem 2.2, we first inspect the norm of x. We get the following lemma.
Lemma 1. For Gaussian data x ∼ N(0, Σ), let µ > 0 and T > 1 be constants, d the dimension of x and λ the largest eigenvalue of the covariance Σ. Then with probability at least 1 − 1/(µTd), ∥x∥_2 < √(dλ)(1 + (2µ)^{1/4}) T^{1/4}.
Proof of Lemma 1. For x ∼ N(0, Σ), ∥x∥_2² follows a generalized chi-squared distribution. The mean and variance can be computed explicitly as E[∥x∥_2²] = tr Σ = Σ_j λ_j and Var(∥x∥_2²) = 2 tr Σ² = 2 Σ_j λ_j². By Chebyshev’s inequality, we have

Pr( ∥x∥_2² < Σ_j λ_j + √(µTd) · √(2 Σ_j λ_j²) ) > 1 − 1/(µTd)

where µ > 0 and T > 1 are constants and d is the dimension of x. Then, since Σ_j λ_j + √(µTd) · √(2 Σ_j λ_j²) ≤ (1 + √(2µT)) dλ, where λ = max_j λ_j is the largest eigenvalue of the covariance Σ, we have:

Pr( ∥x∥_2 < √(dλ)(1 + (2µ)^{1/4}) T^{1/4} ) > 1 − 1/(µTd)        (5)
Then we can start proving Theorem 2.2.
Theorem. Consider logistic regression f(x) = 1/(1 + e^{−w^⊤x}) with N Gaussian training samples x ∼ N(0, Σ), x ∈ R^d. Assume ∥w∥_2 ≤ D and N/d < α. Let w* be the optimal parameters and λ the largest eigenvalue of the covariance Σ. For t ∈ {1, . . . , T} and constants ε > D√(λ/2) − 1, ζ > 1, µ ≫ α, select the subset with critical margin κ_t = (1 + ε) log(ζT − t) and update parameters with learning rate η = DN/(E√T). Then with probability at least 1 − α/µ,

min_t L(w_t) − L(w*) ≤ DE ( 1/T^{1/4} + c_{ε,ζ}/T^{3/4+ε} + c_{ε,ζ,λ}/T^β )        (6)

where E = √(dλ)(1 + (2µ)^{1/4}), β = (1+ε)²/(2D²λ) − 1/4, and c_{ε,ζ}, c_{ε,ζ,λ} are constants depending on ε, ζ and λ.
Proof of Theorem 2.2. For logistic regression f(x) = 1/(1 + e^{−w^⊤x}) with loss function

L = (1/N) Σ_{i=1}^{N} ℓ_i = (1/N) Σ_{i=1}^{N} [ −y_i log ŷ_i − (1 − y_i) log(1 − ŷ_i) ]        (7)

where ŷ_i is the predicted value, the gradient incurred by training on the selected subset is:

∂L_κ/∂w = (1/N) Σ_{i=1}^{N} (ŷ_i − y_i) x_i · I(|w^⊤x_i| < κ)
For samples with |w^⊤x_i| ≥ κ, i.e. "easy" samples, we have |sgn(y_i − 1/2) · w^⊤x_i| ≥ κ and, with probability at least 1 − 1/(µTd),

∥∂ℓ_i/∂w∥_2 ≤ E·T^{1/4}/(1 + e^κ)  if sgn(y_i − 1/2) · w^⊤x_i ≥ κ;    ∥∂ℓ_i/∂w∥_2 ≤ E·T^{1/4}  if sgn(y_i − 1/2) · w^⊤x_i ≤ −κ        (8)

where E = √(dλ)(1 + (2µ)^{1/4}). Note that the condition sgn(y_i − 1/2) · w^⊤x_i ≤ −κ means x_i is misclassified by w and its margin is at least κ. Denoting the portion of such misclassified samples in the whole training set by r, we have the following estimate of the gradient gap:

Err_t = ∥∂L_κ/∂w − ∂L/∂w∥_2 = (1/N) ∥ Σ_{|w^⊤x| ≥ κ} ∂ℓ/∂w (x) ∥_2 ≤ E T^{1/4} (1 − γ_t)/(1 + e^{κ_t}) + E T^{1/4} (1 − γ_t) r_t        (9)

where γ_t is the fraction of data kept by selecting with margin κ_t. The inequality holds with probability at least (1 − 1/(µTd))^N > 1 − α/(µT) because of Equation 8.
Note that Lemma 1 also suggests ∥∂ℓ/∂w∥_2 ≤ E · T^{1/4} with large probability; therefore L is highly likely to be Lipschitz continuous with parameter E T^{1/4}. By setting a constant learning rate η = DN/(E√T) and critical margin κ_t = (1 + ε) log(ζT − t), ζ > 1, we have, with probability at least (1 − α/(µT))^T ≥ 1 − α/µ,

min_t L(w_t) − L(w*) ≤ DE/(N T^{1/4}) + (D/T) Σ_{t=1}^{T−1} Err_t
    ≤ DE/(N T^{1/4}) + (DE/T^{3/4}) Σ_{t=1}^{T−1} 1/(ζT − t)^{1+ε} + (DE/T^{3/4}) Σ_{t=1}^{T−1} r_t
    ≤ (DE/T^{1/4}) ( 1/N + c_{ε,ζ}/(T^ε √T) ) + (DE/T^{3/4}) Σ_{t=1}^{T−1} r_t        (10)

The first inequality follows Theorem 1 in (Killamsetty et al., 2021). The last inequality holds because Σ_{t=1}^{T−1} 1/(ζT − t)^{1+ε} ≤ ∫_{(ζ−1)T}^{ζT} s^{−(1+ε)} ds ≤ c_{ε,ζ}/T^ε with c_{ε,ζ} = 1/(ε(ζ − 1)^ε), for all ε > 0 and ζ > 1.
To bound the sum of classification errors (the last term of Equation 10), we again utilize the data distribution prior. Note that the data points contributing to r are characterized by the following set:

E = {w_o^⊤x > 0 ∧ w^⊤x < −κ} ∪ {w_o^⊤x < 0 ∧ w^⊤x > κ} := E_1 ∪ E_2

where w_o is the oracle classifier such that the true label is generated according to y = sgn(w_o^⊤x). Letting ϕ denote the probability density function of the standard Gaussian, we see that

r = ∫_E ϕ(x|Σ) dx = 2 ∫_{E_1} ϕ(x|Σ) dx ≤ 2 ∫_{w^⊤x < −κ} ϕ(x|Σ) dx = 2Φ( −κ/√(w^⊤Σw) ) ≤ 2Φ( −κ/(D√λ) )
where λ is the largest eigenvalue of Σ. Therefore, we have the following estimate:

(1/T^{3/4}) Σ_{t=1}^{T−1} r_t ≤ (1/T^{3/4}) Σ_{t=1}^{T−1} 2Φ( −κ_t/(D√λ) )
    ≤ (2/T^{3/4}) Σ_{t=1}^{T−1} ϕ(κ_t/(D√λ)) / (κ_t/(D√λ))        (Gaussian upper tail bound)
    = (2D√λ)/(√(2π)(1 + ε)) · (1/T^{3/4}) Σ_{t=1}^{T−1} (1/log(ζT − t)) · e^{−((1+ε)²/(2D²λ)) log²(ζT − t)}
    ≤ (2D√λ T^{1/4})/(√(2π)(1 + ε)) · (1/log((ζ − 1)T + 1)) · ((ζ − 1)T + 1)^{−((1+ε)²/(2D²λ)) log((ζ−1)T+1)}
    ≤ c_{ε,ζ,λ} T^{−β}        (11)

where β = (1+ε)²/(2D²λ) − 1/4 and we assume log((ζ − 1)T + 1) = Ω(1) with respect to T. Together, these prove Theorem 2.2.
C GENERALIZATION
Sorscher et al. (2022) analysed the generalization of the static training scheme in the teacher-student perceptron setting, where the teacher is an "oracle" generating labels. For the training set T = {(x_i, y_i)}_{i=1}^{|T|}, assume x_i ∼ N(0, I) and that there exists an oracle model w_o ∈ R^d which generates the labels such that y_i = sign(w_o^⊤ x_i) for all i. Without loss of generality, the oracle is assumed to be drawn from a sphere. Sorscher et al. (2022) work in a high-dimensional limit where |T|, d → ∞ but the ratio α = |T|/d remains O(1). Following the static training scheme, a lower-fidelity estimator w_estimate, which has angle θ relative to the oracle w_o, is used to evaluate the candidate instances, and those with a smaller classification margin |w_estimate^⊤ x_i| along the estimator w_estimate are picked. The selection results in a subset S. Along w_estimate, S follows p(z), a truncated Gaussian distribution, while the other directions are kept isotropic. More specifically, given a keeping ratio γ, the corresponding selection margin is κ = H^{−1}((1 − γ)/2), and thus the subset distribution along w_estimate is p(z) = (e^{−z²/2} / (√(2π) γ)) Θ(κ − |z|), where Θ(x) is the Heaviside function and H(x) = 1 − Φ(x), with Φ(x) the cumulative distribution function (CDF) of the standard Gaussian.
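For instance, the selection margin corresponding to a keeping ratio γ can be evaluated with the Gaussian survival function; the snippet below is only a numerical check of the relation κ = H^{−1}((1 − γ)/2).

```python
from scipy.stats import norm

gamma = 0.6
kappa = norm.isf((1.0 - gamma) / 2.0)   # kappa = H^{-1}((1-gamma)/2), H(x) = 1 - Phi(x)
print(kappa)                            # about 0.84: keeping |z| < 0.84 retains 60% of a standard Gaussian
```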
The generalization error of the model trained on the subset S takes the form E(α, γ, θ). That is, the error is determined by γ the keeping ratio, α which indicates the abundance of training samples before selection, and θ which shows the closeness of the estimator to the oracle model. The full set of self-consistent equations characterizing E(α, γ, θ) is given as
(R − ρ cos θ)/sin²θ = (α/(πΛ)) ⟨ ∫_{−∞}^{ν} dτ exp( −Δ(τ, z)/(2Λ²) ) (ν − τ) ⟩_z

(1 − ρ² + R² − 2ρR cos θ)/sin²θ = 2α ⟨ ∫_{−∞}^{ν} dτ ( e^{−(τ−ρz)²/(2(1−ρ²))} / (√(2π)√(1−ρ²)) ) H( Γ(τ, z)/(√(1−ρ²) Λ) ) (ν − τ)² ⟩_z

(ρ − R cos θ)/sin²θ = 2α ⟨ ∫_{−∞}^{ν} dτ [ ( e^{−(τ−ρz)²/(2(1−ρ²))} / (√(2π)√(1−ρ²)) ) H( Γ(τ, z)/(√(1−ρ²) Λ) ) ( (z − ρτ)/(1 − ρ²) ) (ν − τ) + (1/(2πΛ)) exp( −Δ(τ, z)/(2Λ²) ) ( (ρR − cos θ)/(1 − ρ²) ) (ν − τ) ] ⟩_z        (12)

where
Λ = √( sin²θ − R² − ρ² + 2ρR cos θ ),
Γ(τ, z) = z(ρR − cos θ) − τ(R − ρ cos θ),
Δ(τ, z) = z²( ρ² + cos²θ − 2ρR cos θ ) + 2τz(R cos θ − ρ) + τ² sin²θ.

Here τ is an auxiliary field introduced by the Hubbard-Stratonovich transformation and ⟨·⟩_z denotes expectation over p(z). By solving these equations the generalization error can be read off as E = cos^{−1}(R)/π, where R = w^⊤w_o / (∥w∥_2 ∥w_o∥_2).
D MORE RESULTS ON GENERALIZATION
To better understand the generalization under classification margin selection, E(α, γ, θ), we provide more results that individually inspect the effect of the (average) selection ratio γ_avg, the initial data abundance α, and the closeness of the estimator to the oracle model θ. As shown in Figure 6(a), we change γ_avg from 60% to 50%, thus constructing a smaller selection budget. In Figure 6(b), we use α = 2.1 instead of α = 3.2 to construct a less abundant data case, where the data before selection is insufficient. In Figure 6(c), we start selecting samples using a better estimator, θ = 30° instead of θ = 40°. All the other hyper-parameters aside from the inspected one are kept consistent with those used in Figure 2(b), that is, γ_avg = 0.6, α = 3.2 and θ = 40°. We see that with various γ_avg and θ, DynaMS outperforms its static counterpart. The abundance of the initial data, however, matters significantly: when data is insufficient, data selection, both static and dynamic, causes obvious performance degradation. Figure 7 shows an even more severe case with α = 1.7, where the generalization landscape changes significantly and data selection is not recommended.
E COMPARISON WITH STANDARD DEVIATION
We test each method in Table 4 and Table 5 five times. The averaged accuracy and standard deviation are reported below in Table 6 and Table 7.
F IMPLEMENTATION DETAILS AND HYPER-PARAMETERS
Subset size schedule Dynamic selection admits more freedom in the subset size schedule. In the experiments we consider the linear schedule and the power schedule. For the linear schedule, the keeping ratio is determined by γ_k = 1 − k · a for k = 1, 2, . . . , K, where a determines the sample reduction ratio. γ is supposed to satisfy γ_avg = (1/K) Σ_{k=1}^{K} γ_k = γ_s, where γ_s is the selection ratio when a static training scheme is applied. Thus (1/K) Σ_{k=1}^{K} |S_k| = |S|, meaning the averaged number of data used in the dynamic scheme is kept equal to that of static training.

Aside from the linear schedule, we also explore a power schedule where γ_k = m · k^{−r} + b for k = 1, 2, . . . , K. The power schedule reserves more samples in late training, preventing performance degradation caused by over-aggressive data pruning. Determining the hyper-parameters m, r, b is a bit tricky; we simply require γ_1 = 1.0 for warm start and γ_avg = (1/K) Σ_{k=1}^{K} γ_k = γ_s for fair comparison. γ_K should not be overly small; we empirically find γ_K ≈ γ_s − 0.1 yields good results. For different budgets γ_s ∈ {0.6, 0.7, 0.8, 0.9} the hyper-parameters are given in Table 8. Post-processing is carried out to make sure the resulting subset size sequence satisfies the above requirements.

(Killamsetty et al., 2021) utilize a constant schedule, where in each selection the subset size is kept constant at γ_s · |T|. This schedule, however, does not admit selection without replacement. The linear and power schedules are both monotonically decreasing and thus are natural choices in this regard. Figure 8 plots the three schedules for the γ_s = 0.6 budget. In this paper we only provide a preliminary exploration of the subset size schedule; an in-depth study of the relationship between the subset size and the model performance, as well as an automatic way of determining the optimal subset size schedule, is left for future work.
Hyper-parameters Finally, the detailed hyper-parameters for DynaMS on both CIFAR-10 and ImageNet datasets are shown in Table 8. Note that for DynaMS+PSP, the Max Epochs is set to be 90 on ImageNet.
Table 8: Hyper-parameters of DynaMS for different models on CIFAR-10 and ImageNet.

Hyper-parameter            | CIFAR-10 ResNet-18 | ImageNet ResNet-18 | ImageNet ResNet-50
Batch Size                 | 128                | 512                | 512
Init. Learning Rate of W   | 0.1                | 0.1                | 0.1
Learning Rate Decay        | Stepwise 0.2       | Stepwise 0.1       | Stepwise 0.1
Lr Decay Milestones        | {60,120,160}       | {40,80}            | {40,80}
Optimizer                  | SGD                | SGD                | SGD
Momentum                   | 0.9                | 0.9                | 0.9
Nesterov                   | True               | True               | True
Weight Decay               | 5e-4               | 1e-4               | 1e-4
Max Epochs                 | 200                | 120                | 120
Selection Interval         | 10                 | 10                 | 10

Power scheduler (ImageNet; not used on CIFAR-10): 60%: m = 0.3984, r = 0.2371, b = 0.2895; 70%: m = 0.3476, r = 0.2300, b = 0.4275; 80%: m = 0.3532, r = 0.1349, b = 0.4978; 90%: m = 0.2176, r = 0.1035, b = 0.7078.
Linear scheduler: CIFAR-10: a = 0.041; ImageNet: 60%: a = 0.073, 70%: a = 0.055, 80%: a = 0.036, 90%: a = 0.018.
G VISUALIZATION OF DYNAMICALLY SELECTED IMAGES
To get a better understanding of what the selected samples look like and how they change over time, we visualize samples picked in different selection steps along the training. For k = 1, 4, 7 and 10, corresponding to the 1st, 4th, 7th and 10th selections, we randomly visualize selected samples that are absent from the next listed selection. E.g., the k = 4 row shows images picked in the 4th selection but not in the 7th. From Figure 9, we see that in the early selections, many easy-to-recognize samples are kept. As the training proceeds, these simple images are screened out and the model focuses more on harder samples that are atypical, blurred, or contain interfering objects, validating our hypothesis that the most informative samples change as the model evolves. Dynamic selection is thus indispensable.
H SUMMARY OF NOTATIONS
Models and parameters:
f(·)        The model used for classification
w           Parameters of the model
w*          Optimal model parameters
w_o         Oracle model parameters
W           Weight of the linear classifier
W (conv)    Kernel of a convolutional layer
g           Gradient incurred by the model
g_proxy     Gradient incurred by the proxy
d           The dimension of the data feature
h(·)        Feature extractor part of the model f(·)
p           Slimming factor, deciding the width of the proxy model

Selection schedule:
a           Sample reduction ratio in the linear schedule
m, r, b     Hyper-parameters controlling the power schedule

Loss functions:
L           Generic reference to the loss function

Data selection:
B           Decision boundary of linear classifiers
Q           Selection interval
M           The classification margin, i.e. the distance of a sample to the decision boundary
γ_k         Selection budget (keep ratio of samples) for the k-th selection
γ_avg       The averaged keep ratio of dynamic selection
γ_s         Selection budget in static selection
k           Selection step
K           The total number of selections along training
E           The generalization error of the model trained on the selected subset
θ           Relative angle of a model to the oracle model
α           Abundance of data before selection
κ           Selection margin

Training:
t           Training epoch
T           The total number of training epochs, T = Q · (K + 1)

Data distribution:
Σ           Covariance of a Gaussian distribution
λ           The largest eigenvalue of the covariance matrix

Hyper-parameters:
D           Upper bound of the model parameter norm
ε, ζ, µ     Constants appearing in the convergence bound | 1. What is the focus and contribution of the paper on subset selection for training DNNs?
2. What are the strengths of the proposed approach, particularly in reducing sample complexity?
3. What are the weaknesses of the paper regarding its limitations and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the practicality and effectiveness of the proposed approach in different scenarios? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the authors propose a subset selection method for training DNNs. The idea is to dynamically select the samples that are close to the margin. To reduce computation, the authors also propose a Parameter Sharing Proxy (PSP) for sample selection. The authors show that the proposed DynaMS converges to the optimal solution with large probability. Experiments are conducted on CIFAR-10 and ImageNet to show the effectiveness of the proposed approach.
Strengths And Weaknesses
Strength:
Reducing sample complexity is an important topic in deep learning. The authors propose some interesting ideas to address this problem.
Overall, the paper is clearly written and the proposed idea is intuitive.
Extensive ablation studies are conducted to show the benefit of the proposed approach.
Weaknesses:
The use cases of the proposed approach are not clear. If all the training samples are available for dynamic selection, then why not just use all the samples? Also, the authors did not show results of using all the samples. If sample selection is meant to reduce computational cost, there is also no time comparison.
Compared with dataset distillation, does the proposed approach have benefits in terms of accuracy and complexity?
It is not clear which samples are selected and how the samples selected will change over time.
Several typos:
i. "Our proposed DynaMS form" -> "Our proposed DynaMS forms" ii. "is keep fixed" -> "is kept fixed" iii. "as the model evolve" -> "as the model evolves"
Clarity, Quality, Novelty And Reproducibility
Overall, the paper is clearly written. The quality is fair. The idea of dynamic selection is novel. Also, the paper seems easy to reproduce. |
ICLR | Title
DynaMS: Dynamic Margin Selection for Efficient Deep Learning
Abstract
The great success of deep learning is largely driven by training over-parameterized models on massive datasets. To avoid excessive computation, extracting and training only on the most informative subset is drawing increasing attention. Nevertheless, it is still an open question how to select such a subset on which the model trained generalizes on par with the full data. In this paper, we propose dynamic margin selection (DynaMS). DynaMS leverages the distance from candidate samples to the classification boundary to construct the subset, and the subset is dynamically updated during model training. We show that DynaMS converges with large probability, and for the first time show both in theory and practice that dynamically updating the subset can result in better generalization. To reduce the additional computation incurred by the selection, a light parameter sharing proxy (PSP) is designed. PSP is able to faithfully evaluate instances following the underlying model, which is necessary for dynamic selection. Extensive analysis and experiments demonstrate the superiority of the proposed approach in data selection against many state-of-the-art counterparts on benchmark datasets.
1 INTRODUCTION
Deep learning has achieved great success owing in part to the availability of huge amounts of data. Learning with such massive data, however, requires clusters of GPUs, special accelerators, and excessive training time. Recent works suggest that eliminating non-essential data presents promising opportunities for efficiency. It is found that a small portion of training samples¹ contributes a majority of the loss (Katharopoulos & Fleuret, 2018; Jiang et al., 2019), so redundant samples can be left out without sacrificing much performance. Besides, the power-law nature (Hestness et al., 2017; Kaplan et al., 2020) of model performance with respect to the data volume indicates that the loss incurred by data selection can be tiny when the dataset is sufficiently large. In this sense, selecting only the most informative samples can result in a better trade-off between efficiency and accuracy.
The first and foremost question for data selection is about the selection strategy, that is, how to efficiently pick the training instances that benefit model training most. Various principles have been proposed, including picking samples that incur larger loss or gradient norm (Paul et al., 2021; Coleman et al., 2020), selecting those most likely to be forgotten during training, as well as utilizing subsets that best approximate the full loss (Feldman, 2020) or gradient (Mirzasoleiman et al., 2020; Killamsetty et al., 2021). Aside from selection strategies, existing approaches vary in the training schemes, which can be divided roughly into two categories: static ones and dynamic (or adaptive) ones. Static methods (Paul et al., 2021; Coleman et al., 2020; Toneva et al., 2019) decouple the subset selection and the model training, where the subset is constructed ahead of time and the model is trained on such a fixed subset. Dynamic methods (Mindermann et al., 2022; Killamsetty et al., 2021), however, update the subset in conjunction with the training process. Though these approaches effectively eliminate large numbers of samples, it is still not well understood how the different training schemes influence the final model.
∗ Corresponding author
¹ We use the terms data, sample, and instance interchangeably
In this paper, we propose dynamic margin selection (DynaMS). For the selection strategy, we adopt the classification margin, namely, the distance to the decision boundary. Intuitively, samples close to the decision boundary influence the model more and are thus selected. The classification margin explicitly utilizes the observation that the decision boundary is mainly determined by a subset of the data. For the training scheme, we show that the subset benefiting training most varies as the model evolves during training, so the static selection paradigm may be sub-optimal and dynamic selection is a better choice. Synergistically integrating classification margin selection and dynamic training, DynaMS is able to converge to the optimal solution with large probability. Moreover, DynaMS admits theoretical generalization analysis. Through the lens of generalization analysis, we show that by catching the training dynamics and progressively improving the subset selected, DynaMS enjoys better generalization compared to its static counterpart.
Though training on subsets greatly reduces the training computation, the overhead introduced by data evaluation undermines its significance. Previous works resort to a lighter proxy model. Utilizing a separate proxy (Coleman et al., 2020), however, is insufficient for dynamic selection, where the proxy is supposed to be able to agilely adapt to model changes. We thus propose the parameter sharing proxy (PSP), where the proxy is constructed by multiplexing part of the underlying model parameters. As parameters are shared all along training, the proxy can closely keep up with the underlying model. To train the shared network, we utilize slimmable training (Yu et al., 2019), with which a well-performing PSP and the underlying model can be obtained in a single training run. PSP is especially valuable for extremely large-scale, hard problems. For massive training data, screening the informative subset with a light proxy can be much more efficient. For hard problems where the model evolves rapidly, PSP promptly updates the informative subset, maximally retaining the model utility.
Extensive experiments are conducted on the CIFAR-10 and ImageNet benchmarks. The results show that our proposed DynaMS effectively picks informative subsets, outperforming a number of competitive baselines. Note that though primarily designed for supervised learning tasks, DynaMS is widely applicable, as classifiers have become an integral part of many applications including foundation model training (Devlin et al., 2019; Brown et al., 2020; Dosovitskiy et al., 2021; Chen et al., 2020), where hundreds of millions of samples are consumed.
In summary, the contributions of this paper are three-fold:
• We establish dynamic margin selection (DynaMS), which dynamically selects an informative subset according to the classification margin to accelerate the training process. DynaMS converges to its optimal solution with large probability and enjoys better generalization.
• We explore constructing a proxy by multiplexing the underlying model parameters. The resulting efficient PSP is able to agilely keep up with the model all along the training, thus fulfilling the requirements of dynamic selection.
• Extensive experiments and ablation studies demonstrate the effectiveness of DynaMS and its superiority over a set of competitive data selection methods.
2 METHODOLOGY
To accelerate training, we propose dynamic margin selection (DynaMS) whose framework is presented in Figure 1. Instances closest to the classification decision boundary are selected for training, and the resulting strategy is named margin selection (MS). We show that the most informative subset changes as the learning proceeds, so that a dynamic selection scheme that progressively improves the subset can result in better generalization. Considering the computational overhead incurred by selection, we then explore parameter sharing proxy (PSP), which utilizes a much lighter proxy model to evaluate samples. PSP is able to faithfully keep up with the underlying model in the dynamics selection scheme. The notations used in this paper are summarized in Appendix H
2.1 SELECTION WITH CLASSIFICATION MARGIN
Given a large training set T = {xi, yi}|T |i=1, data selection extracts the most informative subset S ⊂ T trained on which the model f(x) yields minimal performance degradation. Towards this end, we utilize the classification margin, that is, the distance to the decision boundary, to evaluate the informativeness of each sample. |S| examples with the smallest classification margin are selected.
Intuitively, these samples should be influential most to the model decision. Following (Mickisch et al., 2020; Emam et al., 2021), the decision boundary between two classes c1 and c2 ∈ {1, . . . C} is B := {x | fc1(x) = fc2(x)}, where fc(x) is the c entry of model output, indicating the probability of x belonging to class c. The classification margin is then:
M(x, c1, c2) = min δ
∥δ∥2 s.t. x+ δ ∈ B (1)
which is the minimal perturbation required to move x form c1 to c2. Directly computing the margin is infeasible for deep neural networks, so scoring is conducted in the feature space instead as in (Emam et al., 2021). Typically neural networks applies a linear classifier on top of the features (Goodfellow et al., 2016), so the classification margin M(x, c1, c2) can be easily obtained as: M(x, c1, c2) = (Wc1 −Wc2)⊤h(x)/ ∥Wc1 −Wc2∥2, where W ∈ Rd×C is the weight of the linear classifier 2 and h(x) is the feature of x. In this way, the classification margin of a labeled sample (x, y) along class c is M(x, y, c) if y ̸= c or minc̸̃=y M(x, y, c̃) if y = c. The former indicates the distance moving (x, y) to class c while the latter is the distance moving (x, y) to the nearest class other than y. To keep the subset balanced, we evenly pick |S|/C samples with the smallest classification margin along each class. The resulting strategy is named margin selection (MS), denoted as MS(w, T , |S|). The procedure is detailed in Algorithm 1 in Appendix A.
2.2 DYNAMIC SELECTION
Given the subset selected, model is subsequently trained on S. Conventional static training scheme assumes that the optimal subset converges and is not related to the model training dynamic (Paul et al., 2021; Coleman et al., 2020). Though effectively eliminate instances, the "converged optimal
2Without loss of generality, we omit the bias term for notation clarity
subset" assumption may be too strong. To investigate whether the most informative samples vary during training, we plot the overlap ratio of samples selected in two consecutive selections during the training of ResNet models, shown in Figure 2(a). We train for 200 epochs and 120 epochs on CIFAR-10 and ImageNet respectively, and conduct selection every 10 epochs. It can be observed that the overlap ratio is on average 0.83 for CIFAR-10 and 0.73 for ImageNet rather than 1.0, meaning that samples that most benefit model training vary as the model evolves. A fixed subset may be outdated after parameter updates, thus yielding sub-optimal results.
We thus resort to a dynamic scheme where data selection is performed after each Q epochs training 3. By selecting in conjunction with training, the informative subset gets updated according to the current model status. For the kth selection, the informative subset Sk is constructed by picking portion γk samples so that |Sk| = γk|T |. The selection ratio γk determines the critical margin κk, where only samples with classification margin smaller than κk are kept. Sk will then be used for training Q epochs. In the following, we provide a convergence analysis of DynaMS and show that DynaMS achieves better generalization by constantly improving the selected subset.
Convergence Analysis We now study the conditions for the convergence of training loss achieved by DynaMS. We use logistic regression (LR) to demonstrate and then show the conditions are well satisfied when LR is used on top of deep feature extractors. We have the following theorem: Theorem. Consider logistic regression f(x) = 1
1+e−w⊤x with N Gaussian training samples x ∼
N (0,Σ), x ∈ Rd. Assume ∥w∥2 ≤ D and N d < α. Let w ∗ be the optimal parameters and λ be the largest eigenvalue of the covariance Σ. For t ∈ {1, . . . T} and constants ε > D √
λ 2 − 1, ζ >
1, µ >> α, select subset with critical margin κt = (1 + ε) log(ζT − t) and update parameters with learning rate η = DN
E √ T . Then with probability at least 1− αµ
min t
L(wt)− L(w∗) ≤ DE ( 1
T 1 4
+ cε,ζ
T 3 4+ε + cε,ζ,λ T β
) (2)
where E = √ dλ(1 + (2µ) 1 4 ), β = (1+ε) 2
2D2λ − 1 4 , cε,ζ and cε,ζ,λ are constants depending on ε, ζ and
λ.
The proof is left in Appendix B. Theorem 2.2 indicates that dynamically selecting data based on the classification margin is able to converge and achieve the optima w∗ with large probability. The Gaussian input assumption is overly strong in general, but when the linear classifier is adopted on top of a wide enough feature extractor, the condition is well satisfied because a infinitely wide neural network resembles Gaussian process (Lee et al., 2019; Xiao et al., 2018; de G. Matthews et al., 2018).
Generalization Analysis Recently, (Sorscher et al., 2022) developed an analytic theory for data selection. Assume training data xi ∼ N (0, I) and there exists an oracle model wo ∈ Rd which generates the labels such that yi = sign ( w⊤o · xi ) . Following static selection, when an estimator w is used to pick samples that have a small classification margin, the generalization error takes the form E(α, γ, θ) in the high dimensional limit. α = |T |d indicates the abundance of training samples before selection; γ determines the selection budget and θ = arccos ( w⊤wo
∥w∥2·∥wo∥
) shows the closeness of the
estimator to the oracle. The full set of self-consistent equations characterizing E(α, γ, θ) is given in Appendix C. By solving these equations the generalization error E(α, γ, θ) can be obtained. We then extend it to the dynamic scheme. For the kth selection, we use the model trained on Sk−1 as the estimator wk, which deviates from oracle by angle θk = arccos ( w⊤k−1wo
∥wk−1∥2·∥wo∥2
) , to evaluate
and select samples. The resulting subset Sk will be used for subsequent training of model wk+1, which will later be used as an estimator at k + 1 to produce Sk+1. In this way, generalization of dynamic scheme can be obtained by recurrently solving the equations characterizing E(α, γk, θk) with updated keeping ratio γk and estimator deviation θk. Note that in each round of selection, samples are picked with replacement, so the abundance of training samples α is kept fixed. The keeping ratio γk, determining the subset size, can be scheduled freely to meet various requirements.
3For extremely large dataset case where training can be accomplished within just one or a few epochs, the selection can be performed every Q iterations
We compare the generalization of dynamic selection and its static counterpart in Figure 2(b). We show the landscape of E(α, γ, θ) with different γ and θ by solving the generalization equations numerically. α = 3.2 is kept fixed, which means the initial training data is abundant; We use static training with θs = 40◦ and γs = 0.6 as control group. To make the comparison fair, we make sure 1 K ∑K k=1 |γk| = γs, so that the averaged number of samples used in the dynamic scheme equals the subset size used in the static scheme. From Figure 2(b), we see that in dynamic selection, the estimator gets constantly improved (θk decreases), so that the subsets get refined and the model achieves better generalization. Discussion on selecting with different α, γand θ is given in Appendix D.
2.3 PARAMETER SHARING PROXY
With dynamic selection, the number of updates is reduced. However, the computational overhead incurred by data selection undermines its significance, especially when the model is complex and samples are evaluated frequently. Aside from designing efficient selection strategies, previous works explored utilizing a lighter model as proxy to evaluate the instances so that the problem can be ameliorated. Pretrain a separate proxy and evaluate instances prior to model training (Coleman et al., 2020), however, is insufficient for dynamic selection, as a static proxy can not catch the dynamics of the underlying model. A proxy that fulfills the requirements of dynamic selection is still absent.
We thus propose parameter sharing proxy (PSP), where part of the model is used as the proxy. Taking convolutional neural network as an example, for a layer with kernel W ∈ Rci×co×u×u, where ci, co and u are number of input filters, number of output filters and kernel size respectively, the corresponding kernel of proxy is then: Wproxy = W1:pci,1:pco,:,:, where p ∈ [0, 1] is a slimming factor. As shown in Figure 3, the proxy kernel is constructed with the first pci input channels and first pco output channels. A p times thinner proxy can be obtained by applying p to each layer.
With separate batch normalization for proxy and model, PSP forms a slimmable network (Yu et al., 2019), where multiple models of different widths are jointly trained and they all yield good performance. As the parameters are shared, the proxy can acutely keep up with the model change, thus applicable for dynamic selection. We further investigate the gradients alignment of the proxy and the original model through their cosine similarity:
cos(g, gproxy) = g⊤gproxy
∥g∥2 · ∥g∥2 , where g = ∇WL (W) , gproxy = ∇WL (Wproxy) (3)
A positive cosine value indicates gproxy stands in the same side with g, thus updates on proxy and the model benefits each other. We compare the gradient alignment of PSP and a stand-alone proxy in Figure 2(c) on ResNet-50. With p = 0.5, we see that cos(g, gproxy) for PSP is much larger than the stand-alone proxy. Given the well-aligned gradients, PSP requires fewer training epochs. Overall workflows of DynaMS and DynaMS+PSP is shown in Algorithm 2 and Algorithm 3 of Appendix A. PSP is especially advantageous for large and hard problems. When the data is extremely large, training PSP on a small subset is cheaper than evaluating the extremely large training set with the original model, making it much more efficient. When the task is hard and model changes rapidly during training, PSP can timely updates the informative subset, maximally retaining the model utility.
3 RELATED WORK
Accelerating training by eliminating redundant training instances has long been a research focus in academia. This is accomplished by adopting an effective selection strategy and an appropriate training scheme. We summarize the related literature from these two strands of research in the following.
Selection Strategy Sample selection can be accomplished with various principles. (Loshchilov & Hutter, 2015; Jiang et al., 2019; Paul et al., 2021) tend to pick samples that incur a large loss or gradient norm (CE-loss, EL2N, GraNd). (Toneva et al., 2019) inspects the “unforgettable” examples that are rarely misclassified once learned, and argues that these samples can be omitted without much performance degradation. Other works adopt uncertainty: samples with the least prediction confidence are preferred (Settles, 2010). Recently, (Mirzasoleiman et al., 2020; Killamsetty et al., 2021) select subsets that best cover or approximate the full gradient (Craig, GradMatch). However, these require per-sample gradients as well as an additional optimization, which is expensive both in run-time and in memory. Our work utilizes the classification margin to identify informative samples, which is efficient and can synergistically adapt to various training schemes. A comparison of these strategies is given in Table 1, where d is the dimension of the data feature. MS is slightly slower than selection via loss (CE-loss and EL2N), but much more efficient than Craig and GradMatch. Here we consider only the complexity of the selection strategy itself; time spent on feature extraction is not included. The classification margin has previously been explored in the active learning literature (Ducoffe & Precioso, 2018; Emam et al., 2021); here we utilize it for training acceleration.
Training Schemes Data selection brings more options to training. Under the conventional static training scheme (Paul et al., 2021; Toneva et al., 2019; Coleman et al., 2020), data selection is conducted prior to model update, and the informative subset is kept fixed. In contrast, online batch selection picks batch data at each iteration (Loshchilov & Hutter, 2015; Alain et al., 2015; Zhang et al., 2019; Mindermann et al., 2022). Though it sufficiently considers the training dynamics, the overly frequent sample evaluation incurs prohibitive computational overhead. Recently, (Killamsetty et al., 2021) tried selecting after several epochs of training, which is similar to our dynamic scheme. However, the dynamic training scheme there is utilized only as a compromise to avoid overly frequent selection; a formal analysis of its advantage over the static scheme is absent.
By systematically considering the selection strategy, the model training, as well as the proxy design, our proposed DynaMS forms an effective data selection framework for efficient training.
4 EXPERIMENTS
In this section, we first analyze the effectiveness of each design ingredient in Section 4.2. Then we compare to state-of-the-art algorithms in Section 4.3. Code is available at https://github.com/ylfzr/DynaMS-subset-selection.
4.1 EXPERIMENTAL SETUP
We conduct experiments on CIFAR-10 Krizhevsky & Hinton (2009) and ImageNet Jia et al. (2009), following standard data pre-processing in He et al. (2016). A brief summarization of the experimental setup is introduced below, while complete hyper-parameter settings and implementation details can be found in Appendix F.
CIFAR-10 Experiments For CIFAR-10, we train ResNet-18 (He et al., 2016) for 200 epochs. Selection is conducted every 10 epochs, so overall there are 19 selections (K = 19). For the subset size, we adopt a simple linear schedule: γ_k = 1 − k · a for k = 1, . . . , K, where a determines the reduction ratio. We make sure γ_avg = (1/K) ∑_{k=1}^K γ_k = γ_s. In this way, the averaged number of data used in the dynamic scheme (γ_avg) is kept equal to that of static training (γ_s) for fair comparison. For 0.6× acceleration, a = 0.042. We conduct experiments on an NVIDIA Ampere A-100.
ImageNet Experiments For ImageNet, we choose ResNet-18 and ResNet-50 as base models. Following convention, the total number of training epochs is 120. Selection is also conducted every 10 epochs, so altogether K = 11. For the subset size, aside from the linear schedule, we also explore a power schedule where γ_k decays following a power law: γ_k = m · k^{−r} + b for k = 1, 2, . . . , K. For 0.6× acceleration, we set m = 0.398, r = 0.237 and b = 0.290; please see Appendix F for more details. The power schedule reserves more samples in late training, preventing performance degradation caused by over-pruning the data. We conduct experiments on four NVIDIA Ampere A-100s.
4.2 ABLATION STUDIES
We use ResNet-50 on ImageNet to illustrate the effect of each ingredient in DynaMS, that is, the classification margin criterion, the dynamic training scheme, and the parameter sharing proxy.
The effect of classification margin selection To inspect the effect of classification margin selection (MS), we compare MS against two widely applied selection strategies, CE-loss (Loshchilov & Hutter, 2015; Jiang et al., 2019) and EL2N (Paul et al., 2021). CE-loss selects samples explicitly through the cross-entropy loss they incur, while EL2N picks samples that incur a large L2 error. We compare the three under the conventional static scheme so that any factors aside from the selection strategy are excluded. Samples are evaluated after 20 epochs of pretraining. The model is then reinitialized and trained on the selected subset, which contains 60% of the original samples. As shown in Table 2, MS achieves the best accuracy among the three, validating its effectiveness.
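For reference, the two baseline scores can be computed per sample as follows (a sketch with our own helper names; the baselines in the paper additionally average scores over model ensembles, as indicated by the subscripts in Table 4):

```python
import torch
import torch.nn.functional as F

def ce_loss_scores(logits, labels):
    """Per-sample cross-entropy, the score used by the CE-loss selection baseline."""
    return F.cross_entropy(logits, labels, reduction="none")

def el2n_scores(logits, labels):
    """EL2N (Paul et al., 2021): L2 norm of the error vector softmax(logits) - onehot(y)."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(labels, num_classes=logits.shape[1]).float()
    return (probs - onehot).norm(dim=1)
```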
The effect of dynamic training We then apply dynamic selection on top of MS, where the average subset size is also kept at 60% of the original dataset. From Table 2 we see that DynaMS outperforms MS by 1.67%, which is significant on a large-scale dataset like ImageNet. The superiority of DynaMS validates that, by constantly improving the model and updating the subset, the dynamic selection scheme can result in better performance. Note that DynaMS can be more practical since it does not require the 20 epochs of training prior to selection that are required in the static scheme.
The effect of parameter sharing proxy We now study the parameter sharing proxy (PSP). An effective proxy is supposed to be faithful and able to agilely adapt to model updates. In Figure 4, we plot the Spearman rank correlation as well as the overlap ratio of samples selected with the proxy and with the model. We see that throughout training, the rank correlation stays around 0.68 and over 78% of the selected samples are the same, indicating that the proxy and the model are fairly consistent. We then investigate how the complexity of the proxy, measured in floating point operations (FLOPs), affects performance. We enumerate over the slimming factor p ∈ {0.25, 0.5, 0.75, 1.0} to construct proxies of different widths; the corresponding FLOPs are 6.25%, 25.00%, 56.25%, and 100% respectively (roughly p² of the full model, since both the input and output channels of each layer are scaled by p). In Table 3, we see that a significant computation reduction can be achieved with moderate performance degradation.
4.3 COMPARISONS WITH STATE-OF-THE-ARTS
Finally, we compare DynaMS against various state-of-the-art methods. Aside from CE-loss and EL2N, Random picks samples uniformly at random. GraNd (Paul et al., 2021) selects samples that incur a large gradient norm. Forget (Toneva et al., 2019) counts how many times a sample is mis-classified (forgotten) after it is learned; samples forgotten more frequently are preferred. We evaluate the forget score after 60 epochs of training. To avoid noisy evaluation, many of these static selection approaches ensemble networks before selection; the number of ensembled models is given by the subscript. Auto-assist (Zhang et al., 2019) selects samples that incur a large loss value on a small proxy. Selection is conducted in each iteration, thus forming an online batch selection (OLBS) scheme. DynaCE and DynaRandom apply the corresponding selection strategy, but are trained in a dynamic way. CRAIG and GradMatch propose to reweight and select subsets so that they best cover or approximate the full gradient. In the experiments, we use the per-batch variant of CRAIG and
Figure 4: Correlation of proxy and model: Spearman rank correlation and overlap ratio of the samples selected by the proxy and by the model over training epochs.
GradMatch proposed in (Killamsetty et al., 2021) with a 10-epoch warm start.4 The two approaches utilize the dynamic selection scheme; all the training settings are kept the same as for our DynaMS.
In Table 4, the average accuracy from 5 runs on CIFAR-10 as well as the running times are reported. Due to limited space, the standard deviations are given in Appendix E. We see that DynaMS achieves comparable performance against the strongest baselines (EL2N10, GraNd10, Forget10) while being more efficient. Note that the static methods require pretraining one or several models for 20 epochs before selection. Considering this cost (subscript of the reported running time), the acceleration of these methods is less significant. We also compare two online batch selection methods, OnlineMS and Auto-assist (Zhang et al., 2019). OnlineMS picks samples with MS, but the selection is conducted at each iteration. OnlineMS did not outperform DynaMS, meaning more frequent selection is not necessary; rather, selecting at each optimization step incurs prohibitive computational overhead. Auto-assist did not perform well in this experiment. This may result from its overly simple proxy: the logistic regression proxy adopted may not sufficiently evaluate the candidate samples.
4 For CIFAR-10, we use the published implementation from https://github.com/decile-team/cords. For ImageNet, we modify the implementation to the distributed setting.
For ImageNet, we also report the average accuracy from 5 runs as well as the running times; the standard deviations are given in Appendix E. DynaMS outperforms all the baselines. For instance, it achieves 68.65% and 74.56% top-1 accuracy given on average 60% of the samples for ResNet-18 and ResNet-50 respectively, surpassing the most competitive counterpart, Forget, by 0.81% and 1.06%. Compared to the static methods, which require additional pretraining (60 epochs for Forget and 20 for the others), DynaMS is much more efficient. CRAIG and GradMatch did not perform well on ImageNet. This might be because we use the per-batch variant in (Killamsetty et al., 2021) and set the batch size to 512 in order to fit the per-sample gradients into memory. The per-batch variant treats each mini-batch as one sample and selects mini-batches during the gradient matching process, so a larger batch size means coarser-grained selection, which may lead to inferior performance. We also compare a variant DynaRandom. DynaRandom adopts the dynamic selection scheme, but a random subset is constructed at each selection. DynaMS outperforms DynaRandom by 1.06% and 1.93% for ResNet-18 and ResNet-50 respectively, indicating that the superiority of DynaMS over static methods comes from effectively identifying informative samples instead of witnessing more data.
ResNet-50 is rather complex, and its data evaluation time is non-negligible. We thus apply the parameter sharing proxy to reduce the evaluation time. The proxy is 0.5× the width, so evaluation requires around 0.25× the computation of the original model. As the gradients of the proxy and the underlying model are well aligned, we only train DynaMS+PSP for 90 epochs. From Table 5, although utilizing a proxy harms performance compared to DynaMS, it still outperforms all the other baselines. Specifically, SVP also uses a proxy for sample evaluation; the proxy, however, is a statically and fully trained ResNet-18. The superiority of DynaMS+PSP over SVP shows the necessity of a dynamic proxy that agilely keeps up with the changes of the underlying model. The efficiency advantage of DynaMS+PSP over DynaMS can be significant for extremely large-scale problems where massive data is available while only a small fraction of the data is sufficient for training. To further demonstrate DynaMS, we draw the accuracy curve of ResNet-50 against different (on average) sample budgets from 60% to 100% in Figure 5. It can be seen that our DynaMS consistently outperforms all the other data selection strategies on different budgets. Finally, to get a better understanding of what the selected samples look like and how they change over time, we visualize samples picked in different selection steps along the training; see Appendix G for more details.
5 CONCLUSION
In this paper, we propose DynaMS, a general dynamic data selection framework for efficient deep neural network training. DynaMS prefers samples that are close to the classification boundary, and the selected "informative" subset is dynamically updated during model training. DynaMS converges with high probability, and we show, both in practice and in theory, that dynamic selection improves generalization over previous approaches. Considering the additional computation incurred by selection, we further design a proxy suitable for dynamic selection. Extensive experiments and analyses are conducted to demonstrate the effectiveness of our strategy.
A APPENDIX
A ALGORITHM PROCEDURE
Algorithm 1 outlines the procedure of margin selection (MS); a PyTorch sketch is given after the algorithm listing. In MS, the distances of the current sample (x, y) to each other class c are computed. If y ≠ c, the classification margin of (x, y) with respect to class c is M(x, y, c), which is the distance required to move x from class y to class c. If y = c, the classification margin is min_{c̃≠y} M(x, y, c̃), which corresponds to the distance required to move (x, y) to the class closest to x. For the whole candidate set T, this generates a |T| × C score matrix. After the classification margins are obtained, |S|/C samples with the smallest classification margin along each class are picked. This keeps the samples collected in the subset balanced across classes.
Algorithm 1 Margin selection: MS(w, T , γ) Input:
Candidate set T , keeping ratio γ, number of classes C; Network with weights w, including weights of the final classification layer W ;
Output: Selected subset according to the classification margin S.
1: Compute the keeping budget |S| = γ · |T |, initialize the subset S = {} // Evaluating: compute the classification margin. 2: for (x, y) ∈ T do 3: for c = 1 : C do 4: Compute the classification margin of the sample to the (y, c) boundary:
M(x, y, c) = { min_{c̃≠y} M(x, y, c̃)  if y = c ;  M(x, y, c)  if y ≠ c }    (4)
5: end for
6: end for
// Selecting: pick the samples according to the classification margin (Equation 4).
7: for c = 1 : C do
8: Pick the |S|/C samples that have the smallest classification margins M(·): Top_{|S|/C}(c).
9: S = S ∪ Top_{|S|/C}(c)
10: Remove the already selected samples from the candidate set: T = T − Top_{|S|/C}(c)
11: end for
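A compact PyTorch sketch of the above procedure. It assumes the margin of a sample to the (y, c) decision boundary of the final linear classifier is the normalized logit gap (W_y − W_c)^⊤ h(x) / ∥W_y − W_c∥; the paper's exact margin definition and its released implementation may differ, and the pairwise computation here is illustrative rather than memory-optimized.

```python
import torch

def margin_selection(features, labels, W, keep_ratio):
    """Sketch of Algorithm 1. features: (n, d) penultimate features h(x) of the
    candidate set; labels: (n,) int64 labels; W: (C, d) final classifier weights."""
    n, C = features.shape[0], W.shape[0]
    per_class = int(keep_ratio * n) // C                           # |S| / C
    logits = features @ W.t()                                      # (n, C)
    gap = logits.gather(1, labels[:, None]) - logits               # logit_y - logit_c
    norm = (W[labels][:, None, :] - W[None, :, :]).norm(dim=-1)    # ||W_y - W_c||
    M = gap / norm.clamp_min(1e-12)                                # (n, C) score matrix
    # Eq. (4): on the own class, use the smallest margin to any other class.
    M.scatter_(1, labels[:, None], float("inf"))
    M.scatter_(1, labels[:, None], M.min(dim=1, keepdim=True).values)
    selected, remaining = [], torch.ones(n, dtype=torch.bool)
    for c in range(C):                                             # pick per class, then remove
        cand = torch.where(remaining)[0]
        top = cand[M[cand, c].argsort()[:per_class]]               # smallest margins first
        selected.append(top)
        remaining[top] = False
    return torch.cat(selected)
```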
Algorithm 2 Dynamic margin selection (DynaMS) Input:
Training data T ; Base network with weights W , learning rate η Keep ratio of each selection γk where k = 1, ...,K, selection interval Q
Output: Model efficiently trained on selected subsets.
1: k = 1; γ_k = 1, thus S_k = T
2: for epochs t = 1, ..., T do
3: if t % Q == 0 then
4: Select subset: S_k = MS(W_t, T, γ_k).
5: k = k + 1
6: else
7: Keep subset S_k.
8: end if
9: Update W via stochastic gradient descent on S_k.
10: end for
Algorithm 3 Dynamic margin selection (DynaMS) with parameter sharing proxy (PSP) Input:
Training data T ; Base network with weights W , learning rate η Keep ratio of each selection γk where k = 1, ...,K, selection interval Q Slimming factor of the proxy r, thus the proxy weights Wproxy is determined.
Output: Model efficiently trained on selected subsets.
1: k = 1; γ_k = 1, thus S_k = T
2: for epochs t = 1, ..., T do
3: if t % Q == 0 then
4: Select subset: S_k = MS(W_proxy^t, T, γ_k).
5: k = k + 1
6: else
7: Keep subset S_k.
8: end if
9: Update W via optimizing L(W) + L(W_proxy) on S_k (slimmable training).
10: end for
A full workflow of efficient training with the proposed dynamic margin selection (DynaMS) is shown in Algorithm 2. The model is first trained on the full dataset T for Q epochs to warm up. Subset selection kicks in every Q epochs: samples are evaluated with the current model, so the informative subset is updated according to the distance of samples to the classification boundary. After selection, the model is trained on the selected subset until the next selection. The workflow incorporating the parameter sharing proxy is shown in Algorithm 3. Different from naive DynaMS, samples are evaluated and selected with the proxy instead of the underlying model. During each Q epochs of training, the proxy and the original model are updated simultaneously with slimmable training (Yu et al., 2019).
B PROOF FOR THEOREM 2.2
To prove Theorem 2.2, we first inspect the norm of x. We get the following lemma.
Lemma 1. For Gaussian data x ∼ N(0, Σ), let µ > 0, T > 1 be constants, d the dimension of x and λ the largest eigenvalue of the covariance Σ. Then, with probability at least 1 − 1/(µTd), ∥x∥_2 < √(dλ) (1 + (2µ)^{1/4}) T^{1/4}.
Proof of Lemma 1. For x ∼ N(0, Σ), ∥x∥_2² follows a generalized chi-squared distribution. The mean and variance can be computed explicitly as E[∥x∥_2²] = tr Σ = ∑_j λ_j and Var(∥x∥_2²) = 2 tr Σ² = 2∑_j λ_j². By Chebyshev's inequality, we have

Pr( ∥x∥_2² < ∑_j λ_j + √(µTd) √(2∑_j λ_j²) ) > 1 − 1/(µTd),

where µ > 0 and T > 1 are constants and d is the dimension of x. Then, as ∑_j λ_j + √(µTd)√(2∑_j λ_j²) ≤ (1 + √(2µT)) dλ, where λ = max_j λ_j is the largest eigenvalue of the covariance Σ, we have:

Pr( ∥x∥_2 < √(dλ) (1 + (2µ)^{1/4}) T^{1/4} ) > 1 − 1/(µTd).    (5)
Then we can start proving Theorem 2.2.
Theorem. Consider logistic regression f(x) = 1/(1 + e^{−w^⊤x}) with N Gaussian training samples x ∼ N(0, Σ), x ∈ R^d. Assume ∥w∥_2 ≤ D and N/d < α. Let w* be the optimal parameters and λ be the largest eigenvalue of the covariance Σ. For t ∈ {1, . . . , T} and constants ε > D√(λ/2) − 1, ζ > 1, µ ≫ α, select the subset with critical margin κ_t = (1 + ε) log(ζT − t) and update the parameters with learning rate η = DN/(E√T). Then, with probability at least 1 − α/µ,

min_t L(w_t) − L(w*) ≤ DE ( 1/T^{1/4} + c_{ε,ζ}/T^{3/4+ε} + c_{ε,ζ,λ}/T^β ),    (6)

where E = √(dλ)(1 + (2µ)^{1/4}), β = (1+ε)²/(2D²λ) − 1/4, and c_{ε,ζ}, c_{ε,ζ,λ} are constants depending on ε, ζ and λ.
Proof of Theorem 2.2. For logistic regression f(x) = 1/(1 + e^{−w^⊤x}) with loss function

L = (1/N) ∑_{i=1}^N ℓ_i = (1/N) ∑_{i=1}^N [ −y_i log ŷ_i − (1 − y_i) log(1 − ŷ_i) ],    (7)

where ŷ_i is the predicted value, the gradient incurred by training on the selected subset is

∂L_κ/∂w = (1/N) ∑_{i=1}^N (ŷ_i − y_i) x_i · I(|w^⊤x_i| < κ).
For the |w^⊤x_i| ≥ κ, or "easy", samples, we have |sgn(y_i − 1/2) · w^⊤x_i| ≥ κ, and with probability at least 1 − 1/(µTd),

∥∂ℓ_i/∂w∥_2 ≤ E·T^{1/4}/(1 + e^κ)  if sgn(y_i − 1/2) · w^⊤x_i ≥ κ,   ∥∂ℓ_i/∂w∥_2 ≤ E·T^{1/4}  if sgn(y_i − 1/2) · w^⊤x_i ≤ −κ,    (8)

where E = √(dλ)(1 + (2µ)^{1/4}). Note that the condition sgn(y_i − 1/2) · w^⊤x_i ≤ −κ means x_i is misclassified by w and the margin is at least κ. Denoting the portion of such misclassified samples in the whole training set by r, we can estimate the gradient gap as
Err_t = ∥∂L_κ/∂w − ∂L/∂w∥_2 = (1/N) ∥ ∑_{|w^⊤x| ≥ κ} ∂ℓ(x)/∂w ∥_2 ≤ E T^{1/4}(1 − γ_t)/(1 + e^{κ_t}) + E T^{1/4}(1 − γ_t) r_t,    (9)

where γ_t is the fraction of data kept by selecting with margin κ_t. The inequality holds with probability at least (1 − 1/(µTd))^N > 1 − α/(µT) because of Equation 8.
Note that Lemma 1 also suggests ∥∂ℓ/∂w∥_2 ≤ E · T^{1/4} with large probability; therefore L is highly likely to be Lipschitz continuous with parameter E T^{1/4}. By setting a constant learning rate η = DN/(E√T) and critical margin κ_t = (1 + ε) log(ζT − t), ζ > 1, we have with probability at least (1 − α/(µT))^T ≥ 1 − α/µ

min_t L(w_t) − L(w*) ≤ DE/(N T^{1/4}) + (D/T) ∑_{t=1}^{T−1} Err_t
  ≤ DE/(N T^{1/4}) + (DE/T^{3/4}) ∑_{t=1}^{T−1} 1/(ζT − t)^{1+ε} + (DE/T^{3/4}) ∑_{t=1}^{T−1} r_t
  ≤ (DE/T^{1/4}) ( 1/N + c_{ε,ζ}/(T^ε √T) ) + (DE/T^{3/4}) ∑_{t=1}^{T−1} r_t    (10)
The first inequality follows from Theorem 1 in (Killamsetty et al., 2021). The last inequality holds because ∑_{t=1}^{T−1} 1/(ζT − t)^{1+ε} ≤ ∫_{(ζ−1)T}^{ζT} s^{−(1+ε)} ds ≤ c_{ε,ζ}/T^ε with c_{ε,ζ} = 1/(ε(ζ−1)^ε), for all ε > 0 and ζ > 1.
To bound the sum of classification errors (the last term of Equation 10), we again utilize the data distribution prior. Note that the data points contributing to r are quantified by the following set:

E = {w_o^⊤x > 0 ∧ w^⊤x < −κ} ∪ {w_o^⊤x < 0 ∧ w^⊤x > κ} := E_1 ∪ E_2,

where w_o is the oracle classifier such that the true label is generated according to y = sgn(w_o^⊤x). Let ϕ represent the probability density function of the standard Gaussian; we see that

r = ∫_E ϕ(x|Σ) dx = 2 ∫_{E_1} ϕ(x|Σ) dx ≤ 2 ∫_{w^⊤x < −κ} ϕ(x|Σ) dx = 2Φ( −κ/√(w^⊤Σw) ) ≤ 2Φ( −κ/(D√λ) ),
where λ is the largest eigenvalue of Σ. Therefore, we have the following estimation:

(1/T^{3/4}) ∑_{t=1}^{T−1} r_t ≤ (1/T^{3/4}) ∑_{t=1}^{T−1} 2Φ( −κ_t/(D√λ) )
  ≤ (2/T^{3/4}) ∑_{t=1}^{T−1} ϕ(κ_t/(D√λ)) / (κ_t/(D√λ))    (Gaussian upper tail bound)
  = (2D√λ / (√(2π)(1+ε))) · (1/T^{3/4}) ∑_{t=1}^{T−1} (1/log(ζT − t)) e^{−((1+ε)²/(2D²λ)) log²(ζT−t)}
  ≤ (2D√λ T^{1/4} / (√(2π)(1+ε))) · (1/log((ζ−1)T + 1)) · ((ζ−1)T + 1)^{−((1+ε)²/(2D²λ)) log((ζ−1)T+1)}
  ≤ c_{ε,ζ,λ} T^{−β},    (11)
where β = (1+ε)²/(2D²λ) − 1/4 and we assume log((ζ − 1)T + 1) = Ω(1) with respect to T. Together, this proves Theorem 2.2.
C GENERALIZATION
Sorscher et al. (2022) analyzed the generalization of the static training scheme in the teacher-student perceptron setting, where the teacher is an "oracle" generating labels. For the training set T = {x_i, y_i}_{i=1}^{|T|}, assume x_i ∼ N(0, I) and that there exists an oracle model w_o ∈ R^d which generates the labels such that y_i = sign(w_o^⊤x_i) for all i. Without loss of generality, the oracle is assumed to be drawn from a sphere. Sorscher et al. (2022) work in a high-dimensional statistics regime where |T|, d → ∞ but the ratio α = |T|/d remains O(1). Following the static training scheme, a lower-fidelity estimator w_estimate, which has angle θ relative to the oracle w_o, is used to evaluate the candidate instances, and those with smaller classification margin |w_estimate^⊤ x_i| along the estimator w_estimate are picked. The selection results in a subset S. S follows p(z), a truncated Gaussian distribution along w_estimate, while the other directions are still kept isotropic. More specifically, given a keeping ratio γ, the corresponding selection margin is κ = H^{−1}((1−γ)/2), and thus the subset distribution along w_estimate is p(z) = (e^{−z²/2}/(√(2π) γ)) Θ(κ − |z|), where Θ(x) is the Heaviside function and H(x) = 1 − Φ(x), with Φ(x) the cumulative distribution function (CDF) of the standard Gaussian.
The generalization error of the model trained on the subset S takes the form E(α, γ, θ). That is, the error is determined by γ the keeping ratio, α which indicates the abundance of training samples before selection, and θ which shows the closeness of the estimator to the oracle model. The full set of self-consistent equations characterizing E(α, γ, θ) is given as
(R − ρ cos θ)/sin²θ = (α/(πΛ)) ⟨ ∫_{−∞}^{ν} dτ exp(−∆(τ, z)/(2Λ²)) (ν − τ) ⟩_z

(1 − ρ² + R² − 2ρR cos θ)/sin²θ = 2α ⟨ ∫_{−∞}^{ν} dτ (e^{−(τ−ρz)²/(2(1−ρ²))}/(√(2π)√(1−ρ²))) H( Γ(τ, z)/(√(1−ρ²) Λ) ) (ν − τ)² ⟩_z

(ρ − R cos θ)/sin²θ = 2α ⟨ ∫_{−∞}^{ν} dτ [ (e^{−(τ−ρz)²/(2(1−ρ²))}/(√(2π)√(1−ρ²))) H( Γ(τ, z)/(√(1−ρ²) Λ) ) ((z − ρτ)/(1−ρ²)) (ν − τ) + (1/(2πΛ)) exp(−∆(τ, z)/(2Λ²)) ((ρR − cos θ)/(1−ρ²)) (ν − τ) ] ⟩_z    (12)

where

Λ = √(sin²θ − R² − ρ² + 2ρR cos θ),
Γ(τ, z) = z(ρR − cos θ) − τ(R − ρ cos θ),
∆(τ, z) = z²(ρ² + cos²θ − 2ρR cos θ) + 2τz(R cos θ − ρ) + τ² sin²θ.

Here τ is an auxiliary field introduced by the Hubbard-Stratonovich transformation, and ⟨·⟩_z denotes expectation over p(z). By solving these equations, the generalization error can be read off as E = cos^{−1}(R)/π, where R = w^⊤w_o/(∥w∥_2 · ∥w_o∥).
D MORE RESULTS ON GENERALIZATION
To better understand the generalization under classification margin selection E(α, γ, θ), we provide more results to individually inspect the effect of the (on average) selection ratio γ_avg, the initial data abundance α, and the closeness of the estimator to the oracle model θ. As shown in Figure 6(a), we changed γ_avg from 60% to 50%, thus constructing a smaller selection budget. In Figure 6(b), we use α = 2.1 instead of α = 3.2 to construct a less abundant data case, where the data before selection is insufficient. In Figure 6(c), we start selecting samples using a better estimator, θ = 30◦, instead of θ = 40◦. All the other hyper-parameters aside from the inspected one are kept consistent with those used in Figure 2(b), that is, γ_avg = 0.6, α = 3.2 and θ = 40◦. We see that for various γ_avg and θ, DynaMS outperforms its static counterpart. The abundance of the initial data, however, has a significant effect. When data is insufficient, data selection, both static and dynamic, causes obvious performance degradation. Figure 7 shows an even more severe case with α = 1.7; the generalization landscape is significantly changed and data selection is not recommended in this case.
E COMPARISON WITH STANDARD DEVIATION
We run each method in Table 4 and Table 5 five times. The averaged accuracy and standard deviation are reported below in Table 6 and Table 7.
F IMPLEMENTATION DETAILS AND HYPER-PARAMETERS
Subset size schedule Dynamic selection admits more freedom in the subset size schedule. In the experiments we consider the linear schedule and the power schedule. For the linear schedule, the keeping ratio is determined by γ_k = 1 − k · a for k = 1, 2, . . . , K, where a determines the sample reduction ratio. γ is supposed to satisfy γ_avg = (1/K) ∑_{k=1}^K γ_k = γ_s, where γ_s is the selection ratio when a static training scheme is applied. Thus (1/K) ∑_{k=1}^K |T_k| = |S|, meaning the averaged number of data used in the dynamic scheme is kept equal to that of static training.
Aside from the linear schedule, we also explore a power schedule where γ_k = m · k^{−r} + b for k = 1, 2, . . . , K. The power schedule reserves more samples in late training, preventing performance degradation caused by over-pruning the data. Determining the hyper-parameters m, r, b is a bit tricky; we simply require γ_1 = 1.0 for a warm start and γ_avg = (1/K) ∑_{k=1}^K γ_k = γ_s for fair comparison. γ_K should not be overly small; we empirically find γ_K ≈ γ − 0.1 yields good results. For different budgets γ_s = {0.6, 0.7, 0.8, 0.9} the hyper-parameters are given in Table 8 of Appendix F. Post-processing is carried out to make sure the resulting subset size sequence satisfies the above requirements.
(Killamsetty et al., 2021) utilize a constant schedule, where in each selection the subset size is kept constant at γ_s · |T|. This schedule, however, does not admit selection without replacement. The linear and power schedules are both monotonically decreasing and are thus natural choices in this regard. Figure 8 plots the three schedules for the γ_s = 0.6 budget. In this paper we provide only a preliminary exploration of the subset size schedule; an in-depth study of the relationship between the subset size and model performance, as well as an automatic way of determining the optimal subset size schedule, is left for future work.
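As an illustration, the raw keep-ratio sequences of the two schedules can be generated as follows (a sketch; the post-processing that enforces γ_1 = 1.0 and γ_avg = γ_s is omitted, so these are the unadjusted sequences):

```python
import numpy as np

def linear_schedule(K, a):
    """Keep ratios gamma_k = 1 - k * a for k = 1..K (linear schedule)."""
    return np.array([1.0 - k * a for k in range(1, K + 1)])

def power_schedule(K, m, r, b):
    """Keep ratios gamma_k = m * k^(-r) + b for k = 1..K (power schedule)."""
    return np.array([m * k ** (-r) + b for k in range(1, K + 1)])

# Example: CIFAR-10 linear schedule (K = 19, a = 0.042) and the ImageNet
# power schedule for the 0.6x budget (K = 11, m = 0.398, r = 0.237, b = 0.290).
print(linear_schedule(19, 0.042))
print(power_schedule(11, 0.398, 0.237, 0.290))
```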
Hyper-parameters Finally, the detailed hyper-parameters for DynaMS on both CIFAR-10 and ImageNet datasets are shown in Table 8. Note that for DynaMS+PSP, the Max Epochs is set to be 90 on ImageNet.
Table 8: Hyper-parameters of DynaMS for different models on CIFAR-10 and ImageNet.
Hyper-parameter           | CIFAR-10 ResNet-18 | ImageNet ResNet-18 | ImageNet ResNet-50
Batch Size                | 128                | 512                | 512
Init. Learning Rate of W  | 0.1                | 0.1                | 0.1
Learning Rate Decay       | Stepwise 0.2       | Stepwise 0.1       | Stepwise 0.1
LR Decay Milestones       | {60, 120, 160}     | {40, 80}           | {40, 80}
Optimizer                 | SGD                | SGD                | SGD
Momentum                  | 0.9                | 0.9                | 0.9
Nesterov                  | True               | True               | True
Weight Decay              | 5e-4               | 1e-4               | 1e-4
Max Epochs                | 200                | 120                | 120
Selection Interval        | 10                 | 10                 | 10
Power Scheduler: CIFAR-10: - ; ImageNet: 60%: m = 0.3984, r = 0.2371, b = 0.2895; 70%: m = 0.3476, r = 0.2300, b = 0.4275; 80%: m = 0.3532, r = 0.1349, b = 0.4978; 90%: m = 0.2176, r = 0.1035, b = 0.7078
Linear Scheduler: CIFAR-10: a = 0.041 ; ImageNet: 60%: a = 0.073; 70%: a = 0.055; 80%: a = 0.036; 90%: a = 0.018
G VISUALIZATION OF DYNAMICALLY SELECTED IMAGES
To get a better understanding of what the selected samples look like and how they change over time, we visualize samples picked in different selection steps along the training. For k = 1, k = 4, k = 7 and k = 10, which correspond to the 1st, 4th, 7th and 10th selections, we randomly visualize selected samples that are absent in the later selection; e.g., the k = 4 row shows images picked in the 4th selection but not in the 7th selection. From Figure 9, we see that in the early selections, many easy-to-recognize samples are kept. As training proceeds, these simple images are screened out and the model focuses more on harder samples that are atypical, blurred, or contain interfering objects, validating our hypothesis that the most informative samples change as the model evolves. Dynamic selection is thus indispensable.
H SUMMARY OF NOTATIONS
Models and Parameters
f(·)      The model used for classification
w         Parameters of the model
w*        Optimal model parameter
w_o       Oracle model parameter
W         Parameter of the linear classifier
W         Kernel of a convolutional layer
g         Gradient incurred by the model
g_proxy   Gradient incurred by the proxy
d         The dimension of the data feature
h(·)      Feature extractor part of the model f(·)
p         Slimming factor, deciding the width of the proxy model

Selection schedule
a         Sample reduction ratio in the linear schedule
m, r, b   Hyper-parameters controlling the power schedule

Loss Functions
L         Generic reference to the loss function

Data Selection
B         Decision boundary of linear classifiers
Q         Selection interval
M         The classification margin, i.e., distance of a sample to the decision boundary
γ_k       Selection budget, keep ratio of samples for the kth selection
γ_avg     The averaged keep ratio of dynamic selection
γ_s       Selection budget in static selection
k         Selection step
K         The total number of selections along training
E         The generalization error of the model trained on the selected subset
θ         Relative angle of a model to the oracle model
α         Abundance of data before selection
κ         Selection margin

Train
t         Training epoch
T         The total number of training epochs, T = Q · (K + 1)

Data Distribution
Σ         Covariance of a Gaussian distribution
λ         The largest eigenvalue of the covariance matrix

Hyper-parameters
D         Upper bound of the model parameter norm
ε, ζ, µ   Constants appearing in the convergence bound | 1. What is the focus and contribution of the paper regarding deep learning?
2. What are the strengths and weaknesses of the proposed selection method for efficient deep network training?
3. Do you have any concerns or questions about the assumptions made in the paper, particularly regarding the Gaussian input assumption?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific parts of the paper that the reviewer finds unclear or missing, such as the "SELECT" function and Power schedule? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the authors propose a new selection method for training deep networks efficiently, which dynamically selects a small training set from the original dataset. The selection criterion is based on the classification margin. The authors also propose a lighter proxy model to speed up the selection process. The authors conduct experiments on CIFAR-10 and ImageNet to verify the effectiveness, and the results surpass many baselines.
Strengths And Weaknesses
Strength:
The topic of this paper is relatively new and can be used in many applications.
The performance of the proposed paper is strong.
Weakness:
The critical parts of this paper, the "SELECT" function and the power schedule, are missing. There are numerous hyperparameters in the supplementary material, but no corresponding equations or procedures in the paper or the supplementary.
"The Gaussian input assumption … an infinitely wide neural network resembles a Gaussian process". I think this assumption conflicts with the research problem, which targets smaller datasets and models. With a limited number of data points, the input will not follow a Gaussian distribution.
The analysis of computational complexity is good but not sufficient. A comparison of the total training time between the proposed method and the original method from the ResNet paper is necessary.
Small issues:
In the introduction, "redundant samples can be left out without sacrificing performance". From Table 4, there is a clear drop in performance, and Top-1 Acc. decreases by over 1%.
The Top 1 Acc. results of “Original” in Table 4 and “Full” in Figure 4 are different.
Clarity, Quality, Novelty And Reproducibility
The paper is written in a good format and is easy to follow.
Quality and novelty are fair.
Since some important equations and procedures are missing, I think this paper would be hard to reproduce.
ICLR | Title
Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation
Abstract
Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. Representation learning often plays an important role in the effectiveness of deep clustering, and thus can be a principal cause of performance degradation. In this paper, we propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation. Our deep-learning-based representation learning method is motivated by the properties of classical spectral clustering. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We utilize an instance discrimination method in which learning individual instance classes leads to learning similarity among instances. Through detailed experiments and examination, we show that the approach can be adapted to learning a latent space for clustering. We design novel softmax-formulated decorrelation constraints for learning. In evaluations of image clustering using CIFAR-10 and ImageNet-10, our method achieves accuracy of 81.5% and 95.4%, respectively. We also show that the softmax-formulated constraints are compatible with various neural networks.
1 INTRODUCTION
Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. In a fundamental form, autoencoders are used for feature extraction, and classical clustering techniques such as k-means are serially applied to the features. Recent deep clustering techniques integrate the learning processes of feature extraction and clustering, yielding high performance for large-scale datasets such as handwritten digits Hu et al. (2017); Shaham et al. (2018); Xie et al. (2016); Tao et al. (2018). However, those methods have fallen short when targets become more complex, as in the case of the real-world photograph dataset CIFAR-10 Krizhevsky et al. (2009). Several works report that powerful representation learning leads to improved clustering performance on complex datasets Chang et al. (2017); Wu et al. (2019). Learning representations is a key challenge in unsupervised clustering.
In order to learn representations for clustering, recent works utilize metric learning, which automatically learns similarity functions from data Chang et al. (2017); Wu et al. (2019). They assign pseudo-labels or pseudo-graphs to unlabeled data via similarity measures in latent space, and learn discriminative representations to cluster the data. These works improve clustering performance on real-world images such as CIFAR-10 and ImageNet-10, and indicate the impact of representation learning on clustering. Although features from learned similarity functions and pseudo-labels work well for clustering, such algorithms still seem heuristic; we instead design a novel algorithm based on knowledge from established clustering techniques. In this work, we exploit a core idea of spectral clustering, which uses eigenvectors derived from similarities.
Spectral clustering has been theoretically and experimentally investigated, and known to outperform other traditional clustering methods Von Luxburg (2007). The algorithm involves similarity matrix construction, transformation from similarity matrix to Laplacian, and eigendecomposition. Based on
eigenvectors, data points are mapped into a lower dimensional representation which carries information of similarities and is preferable for clustering. We bring this idea of eigenvector representation into deep representation learning.
We design the representation learning with two aims: 1) learning similarities among instances; and 2) reducing correlations within features. The first corresponds to the Laplacian, and the second corresponds to the feature orthogonality constraints in the spectral clustering algorithm. A learning process integrating both is analogous to the eigendecomposition of the Laplacian matrix in spectral clustering.
For the first aim, we adopt the instance discrimination method presented in Wu et al. (2018), where each unlabeled instance is treated as its own distinct class, and discriminative representations are learned to distinguish between individual instance classes. This numerous-class discriminative learning enables learning partial but important features, such as small foreground objects in natural images. Wu et al. (2018) showed that the representation features retain apparent similarity among images and improve the performance of image classification by the nearest neighbor method. We extend their work to clustering tasks. We clarify that their softmax formulation works like the similarity matrix in spectral clustering under the condition that the temperature parameter τ, which was underexplored in Wu et al. (2018), is set to a larger value.
For the second aim, we introduce constraints which have the effect of making latent features orthogonal. Orthogonality is often an essential idea in dimension reduction methods such as principal components analysis, and it is preferable for latent features to be independent to ensure that redundant information is reduced. Orthogonality is also essential to a connection between proposed method and spectral clustering, as stated in Section 3.4. In addition to a simple soft orthogonal constraint, we design a novel softmax-formulated decorrelation constraint. Our softmax constraint is "softer" than the soft orthogonal constraint for learning independent feature spaces, but realizes stable improvement of clustering performance.
Finally, we combine instance discrimination and feature decorrelation into learning representation to improve the performance of complex image clustering. For the CIFAR-10 and ImageNet-10 datasets, our method achieves accuracy of 81.5% and 95.4%, respectively. Our PyTorch Paszke et al. (2019) implementation of IDFD is available at https://github.com/TTN-YKK/Clustering_ friendly_representation_learning.
Our main contributions are as follows:
• We propose a clustering-friendly representation learning method combining instance discrimination and feature decorrelation based on spectral clustering properties.
• We adapt deep representation learning by instance discrimination to clustering and clarify the essential properties of the temperature parameter.
• We design a softmax-formulated orthogonal constraint for learning latent features and realize stable improvement of clustering performance.
• Our representation learning method achieves performance comparable to state-of-the-art levels for image clustering tasks with simple k-means.
2 RELATED WORK
Deep clustering methods offer state-of-the-art performance in various fields. Most early deep clustering methods, such as Vincent et al. (2010); Tian et al. (2014), are two-stage methods that apply clustering after learning low-dimensional representations of data in a nonlinear latent space. The autoencoder method proposed in Hinton & Salakhutdinov (2006) is one of the most effective methods for learning representations. Recent works have simultaneously performed representation learning and clustering Song et al. (2013); Xie et al. (2016); Yang et al. (2017); Guo et al. (2017); Tao et al. (2018). Several methods based on generative models have also been proposed Jiang et al. (2016); Dilokthanakul et al. (2016). These methods outperform conventional methods, and sometimes offer performance comparable to that of supervised learning for simple datasets. Deep-learning-based unsupervised image clustering is also being developed Chang et al. (2017); Wu et al. (2019); Ji et al. (2019); Gupta et al. (2020); Van Gansbeke et al. (2020).
Several approaches focus on learning discriminative representations via deep learning. Bojanowski & Joulin (2017) found a mapping between images on a uniformly discretized target space, and enforced their representations to resemble a distribution of pairwise relationships. Caron et al. (2018) applied pseudo-labels to output as supervision by k-means and then trained a deep neural network. Donahue et al. (2016) proposed bidirectional generative adversarial networks for learning generative models that map simple latent distributions to complex real distributions, in order for generators to capture semantic representations. Hjelm et al. (2018) proposed deep infomax to maximize mutual information between the input and output of an encoder. Wu et al. (2018) was motivated by observations in supervised learning that the probabilities of similar image classes become simultaneously high. They showed that discriminating individual instance classes leads to learning representations that retain similarities among data.
IIC Ji et al. (2019) and SCAN Van Gansbeke et al. (2020) are two recent works focusing on image clustering that obtained high performance. IIC Ji et al. (2019) directly learns semantic labels without learning representations, based on mutual information between image pairs. SCAN Van Gansbeke et al. (2020) focuses on the clustering phase and largely improves performance given a pre-designed representation learning method. By contrast, we focus on learning a clustering-friendly representation space where objects can be simply clustered.
Our method exploits the idea of spectral clustering Shi & Malik (2000); Meila & Shi (2001); Von Luxburg (2007); Ng et al. (2002). From one perspective, spectral clustering finds a low-dimensional embedding of data in the eigenspace of the Laplacian matrix, which is derived from pairwise similarities between data. Using the embedded representations, we can proceed to cluster the data with the k-means algorithm in the low-dimensional space. Spectral clustering often outperforms earlier algorithms such as k-means once pairwise similarities are properly calculated. Shaham et al. (2018) incorporated the concept of spectral clustering into a deep neural network structure. Similarities were calculated by learning a Siamese net Shaham & Lederman (2018), where the input positive and negative pairs were constructed according to the Euclidean distance.
3 PROPOSED METHOD
Given an unlabeled dataset X = {xi}ni=1 and a predefined number of clusters k, where xi denotes the ith sample, we perform the clustering task in two phases, namely, representation learning and clustering. This work focuses on the first phase, which aims to learn an embedding function v = fθ(x) mapping data x to representation v so that v is preferable for clustering. fθ is modeled as a deep neural network with parameter θ. We use V = {vi}ni=1 to denote the whole representation set.
3.1 INSTANCE DISCRIMINATION
We apply the instance discrimination method proposed by Wu et al. (2018) to learn clustering-friendly representations that capture similarity between instances. The objective function is formulated based on the softmax criterion. Each instance is assumed to represent a distinct class. For given data x1, . . . , xn, the corresponding representations are v1, . . . ,vn, and data xi is classified into the ith class. Accordingly, the weight vector for the ith class can be approximated by a vector vi. The probability of representation v being assigned into the ith class is
P(i|v) = exp(v_i^⊤ v / τ) / ∑_{j=1}^n exp(v_j^⊤ v / τ),    (1)

where v_j^⊤ v measures how well v matches the jth class, τ is a temperature parameter that controls the concentration of the distribution Hinton et al. (2015), and v is normalized to ‖v‖ = 1. The objective maximizes the joint probability ∏_{i=1}^n P_θ(i | f_θ(x_i)), i.e., minimizes

L_I = − ∑_{i=1}^n log P(i | f_θ(x_i)) = − ∑_{i=1}^n log ( exp(v_i^⊤ v_i / τ) / ∑_{j=1}^n exp(v_j^⊤ v_i / τ) ).    (2)
Wu et al. (2018) shows that features obtained by minimizing the objective retain similarity between image instances and improve the performance of nearest neighbor classification. For clustering, we note that the parameter τ, which is underexplored in Wu et al. (2018), has a large impact on clustering performance. The effect of τ is discussed later, and experimental results are shown in Section 4.2.1.
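A minimal PyTorch sketch of the instance discrimination loss in Eq. (2), keeping the memory bank as a plain matrix of normalized representations (the helper name, the full-bank softmax instead of an NCE approximation, and the momentum-free bank update are our own simplifications):

```python
import torch
import torch.nn.functional as F

def instance_discrimination_loss(v, indices, memory_bank, tau=1.0):
    """v: (B, d) L2-normalized representations of the current mini-batch.
    indices: (B,) dataset indices of those samples (their instance 'classes').
    memory_bank: (n, d) stored normalized representations of all samples."""
    logits = v @ memory_bank.t() / tau          # (B, n): v_j^T v / tau for every stored v_j
    loss = F.cross_entropy(logits, indices)     # -log P(i | v), averaged over the batch
    with torch.no_grad():                       # refresh the bank with the new representations
        memory_bank[indices] = F.normalize(v.detach(), dim=1)
    return loss
```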
3.2 FEATURE DECORRELATION
We define a set of latent feature vectors f and use f_l to denote the lth feature vector. The transposition of the latent vectors V coincides with {f_l}_{l=1}^d, where d is the dimensionality of the representations. The simple constraint for orthogonal features is

L_FO = ‖V V^⊤ − I‖² = ∑_{l=1}^d ( (f_l^⊤ f_l − 1)² + ∑_{j=1, j≠l}^d (f_j^⊤ f_l)² ).    (3)
Our novel constraint is based on a softmax formulation of
Q(l|f) = exp(f_l^⊤ f / τ_2) / ∑_{m=1}^d exp(f_m^⊤ f / τ_2),    (4)

Q(l|f) is analogous to P(i|v). Q(l|f) measures how correlated a feature vector is to itself and how dissimilar it is to others. τ_2 is the temperature parameter. We formulate the feature decorrelation constraint as

L_F = − ∑_{l=1}^d log Q(l|f) = ∑_{l=1}^d ( − f_l^⊤ f_l / τ_2 + log ∑_{j=1}^d exp(f_j^⊤ f_l / τ_2) ).    (5)
Both constraints in Eq. (3) and Eq. (5) aim to construct independent features. Conventionally, it is preferable for features to be independent to ensure that redundant information is reduced, and orthogonality is a common technique. Comparing Eq. (3) and Eq. (5), we can see that minimizing L_F and L_FO results in a similar effect, f_l^⊤ f_l → 1 and f_j^⊤ f_l → −1 or 0 (l ≠ j), and both try to decorrelate latent features.
Our softmax constraint in Eq. (5) shows practical advantages in flexibility and stability. Eq. (3) is called a soft orthogonal constraint, but it is still strict enough to force the features to be orthogonal. If d is larger than the number of underlying structures, which are hidden and unknown, all features are forcibly orthogonalized and the resulting features may not be appropriate. The softmax formulation allows off-diagonal elements to be non-zero and alleviates the problem of strict orthogonality.
The partial derivatives of L_F and L_FO with respect to z_jl = f_j^⊤ f_l are ∂L_F/∂z_jl = −(1/τ_2) δ_jl + (1/τ_2) exp(z_jl/τ_2) / ∑_{m=1}^d exp(z_ml/τ_2) and ∂L_FO/∂z_jl = −2δ_jl + 2z_jl, where δ_jl is an indicator function. Since the derivatives are nearly zero when z_jl = 1 in the case of j = l, we focus on the case of j ≠ l. When j ≠ l, the ranges of the partial derivatives are 0 ≤ ∂L_F/∂z_jl ≤ 1/τ_2 and −2 ≤ ∂L_FO/∂z_jl ≤ 2. The monotonicity of L_F can lead to more stable convergence. The advantages of L_F are confirmed by the experiments in Section 4.
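Small PyTorch sketches of the two constraints, computed within a mini-batch as described in Section 3.3 (the helper names are ours; each feature vector is normalized here, consistent with z_ll = 1 in the discussion above):

```python
import torch
import torch.nn.functional as F

def feature_decorrelation_loss(v, tau2=2.0):
    """Softmax decorrelation constraint of Eq. (5). v: (B, d) mini-batch
    representations; each of the d columns is treated as a feature vector f_l."""
    f = F.normalize(v.t(), dim=1)               # (d, B): rows are normalized feature vectors
    logits = f @ f.t() / tau2                   # (d, d): f_j^T f_l / tau_2
    targets = torch.arange(f.shape[0], device=v.device)
    return F.cross_entropy(logits, targets, reduction="sum")

def feature_orthogonal_loss(v):
    """Soft orthogonal constraint of Eq. (3) on the same normalized features."""
    f = F.normalize(v.t(), dim=1)
    eye = torch.eye(f.shape[0], device=v.device)
    return ((f @ f.t() - eye) ** 2).sum()
```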
3.3 OBJECTIVE FUNCTION AND LEARNING MODEL
Combining instance discrimination and feature decorrelation learning, we formulate our objective function LIDFD as follows:
L_IDFD = L_I + α L_F,    (6)

where α is a weight that balances the contributions of the two terms L_I and L_F.
Figure 1 shows the learning process for the motif of image clustering. Input images X are converted into feature representations V in a lower d-dimensional latent space, via nonlinear mapping with deep neural networks such as ResNet He et al. (2016). The d-dimensional vectors are simultaneously learned through instance discrimination and feature decorrelation. A clustering method, such as classical k-means clustering, is then used on the learned representations to obtain the clustering results.
Optimization can be performed by mini-batch training. To compute the probability P(i|v) in Eq. (1), {v_j} is needed for all images. Like Wu et al. (2018); Xiao et al. (2017), we maintain a feature memory bank for storing them. For Q(l|f) in Eq. (4), all {f_m} of d dimensions in the current mini-batch can be obtained, so we simply calculate Q(l|f) within the mini-batches. We also combine L_I and L_FO to formulate an alternative loss L_IDFO in Eq. (7),

L_IDFO = L_I + α L_FO.    (7)
We refer to representation learning using LIDFD, LIDFO, and LI loss as instance discrimination and feature decorrelation (IDFD), instance discrimination and feature orthogonalization (IDFO), and instance discrimination (ID), respectively.
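Putting the pieces together, one IDFD training step might look like the sketch below, reusing the two loss helpers sketched above; the backbone, optimizer settings, and bank initialization are generic illustrations, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
import torchvision

# Generic setup: ResNet-18 backbone mapping images to d = 128 representations (Eq. 6, alpha = 1).
backbone = torchvision.models.resnet18(num_classes=128)
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.03, momentum=0.9, weight_decay=5e-4)
alpha, n_samples, dim = 1.0, 50000, 128
memory_bank = F.normalize(torch.randn(n_samples, dim), dim=1)

def idfd_step(images, indices):
    v = F.normalize(backbone(images), dim=1)                      # (B, d), ||v|| = 1
    loss = (instance_discrimination_loss(v, indices, memory_bank, tau=1.0)
            + alpha * feature_decorrelation_loss(v, tau2=2.0))    # L_IDFD = L_I + alpha * L_F
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training, the final cluster assignments are obtained by simply running k-means, with k set to the predefined number of clusters, on the learned representations.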
3.4 CONNECTION WITH SPECTRAL CLUSTERING
We explain the connection between IDFD and spectral clustering. We consider a fully connected graph consisting of all representation points, and the similarity matrix W and degree matrix D can be written as W_ij = exp(v_i^⊤ v_j / τ) and D_ii = ∑_{m=1}^n exp(v_i^⊤ v_m / τ). The loss function of spectral clustering Shaham et al. (2018) can be reformulated as

L_SP = Tr(f^⊤ L f) = (1/2) ∑_k ∑_{ij}^n w_ij (f_i^k − f_j^k)² = (1/2) ∑_{ij}^n exp(v_i^⊤ v_j / τ) ‖v_i − v_j‖²,    (8)
where L is the Laplacian matrix and f are the feature vectors. Spectral clustering is performed by minimizing L_SP subject to the orthogonality condition on f, and when L_SP attains its minimum, f become eigenvectors of the Laplacian L. According to Section 3.2, minimizing L_F can approximate the orthogonality condition. Under this condition, minimizing L_I approximates minimizing L_SP, as explained below.
According to Eq. (2), minimizing the loss L_I means maximizing v_i^⊤ v_i and minimizing v_i^⊤ v_j. When i = j, we have ‖v_i − v_j‖² = 0, so the corresponding term of L_SP becomes zero. We therefore need to consider only the influence on L_SP of minimizing v_i^⊤ v_j. As v are normalized, L_SP can be rewritten using the cosine metric, with θ the angle between v_i and v_j, as

L_SP = ∑_{ij}^n exp(cos θ / τ) sin²(θ/2),    (9)
and then ∂L_SP/∂θ can be calculated as

∂L_SP/∂θ = (1/τ) sin θ (τ − 1 + cos θ) exp(cos θ / τ).    (10)
According to Eq. (10), we get ∂L_SP/∂θ ≥ 0 when τ ≥ 2. This means that L_SP behaves monotonically when we minimize v_i^⊤ v_j; therefore, the effect of minimizing v_i^⊤ v_j is consistent with minimizing L_SP. Even if τ is a little smaller than 2, because τ controls the scale of the derivatives and the range of θ where the derivative is negative, a large τ decreases the scale and narrows the range, resulting in a small influence on the total loss. From this viewpoint, the effect of minimizing L_I with a large τ is approximately the same as that of minimizing L_SP. By adding the feature decorrelation constraint, IDFD becomes analogous to spectral clustering.
4 EXPERIMENTS
We conducted experiments using five datasets: CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009), STL-10 Coates et al. (2011), ImageNet-10 Deng et al. (2009), and ImageNet-Dog Deng et al. (2009). We adopted ResNet18 He et al. (2016) as the neural network architecture in our main experiments; the same architecture is used for all datasets. Our experimental settings are in accordance with those of Wu et al. (2018). Data augmentation strategies commonly used on images are also adopted in the experiments. Details about the datasets and experimental setup are given in Appendix A.
For IDFD, the weight α is simply fixed at 1. The orthogonality constraint weights for IDFO were α = 10 on CIFAR-10 and CIFAR-100, and α = 0.5 on STL-10 and the ImageNet subsets. The weight α was set according to the orders of magnitude of the losses. In the main experiments, we set the temperature parameter τ = 1 for IDFO and IDFD, and τ_2 = 2 for IDFD. In order to fully investigate our work, we also constructed two versions of instance discrimination (ID) that use only the L_I loss: ID(original) with a small τ = 0.07 and ID(tuned) with a large τ = 1.
We compared ID(tuned), IDFO, and IDFD with ID(original) and six other competitive methods: clustering with an autoencoder (AE) Hinton & Salakhutdinov (2006), deep embedded clustering (DEC) Xie et al. (2016), deep adaptive image clustering (DAC) Chang et al. (2017), deep comprehensive correlation mining (DCCM) Wu et al. (2019), invariant information clustering (IIC) Ji et al. (2019), and semantic clustering by adopting nearest neighbors (SCAN) Van Gansbeke et al. (2020). We use three metrics to measure clustering performance: standard clustering accuracy (ACC), normalized mutual information (NMI), and adjusted rand index (ARI). These metrics give values in [0, 1], with higher scores indicating more accurate clustering assignments.
4.1 MAIN RESULTS
Table 1 lists the best performance for each method. The results for the four methods AE, DEC, DAC, and DCCM are cited from Wu et al. (2019), and the results for IIC and SCAN are cited from Van Gansbeke et al. (2020). Comparing these results, we conclude that ID(tuned), IDFO, and IDFD clearly outperform these methods, excluding SCAN, on all datasets according to the metrics ACC, NMI, and ARI. For the CIFAR-10 dataset, ID(tuned), IDFO, and IDFD yielded ACC values of 77.6%, 82.8%, and 81.5%, respectively. For the ImageNet-10 dataset, ID(tuned), IDFO, and IDFD achieved ACC values of 93.7%, 94.2%, and 95.4%. The high performance is comparable with that of supervised and semi-supervised methods. The gaps between the results of ID(tuned) and those of IDFO and IDFD reflect the effect of the feature constraint term; the performance is improved for all datasets by introducing feature orthogonalization or decorrelation. Impressively, ID(tuned) significantly outperformed ID(original) on all datasets, showing the strong impact of the temperature parameter. This is discussed separately in Section 4.2.1.
In addition, we note that IDFD differs from SCAN in that IDFD focuses on representation learning while SCAN focuses on clustering given a representation learning method. Both SCAN and IDFD demonstrate significant performance improvements compared with other methods. The results of IDFD and SCAN show the effectiveness of efforts on both the representation learning and clustering phases of deep clustering.
We also examine the learning stability of ID(tuned), IDFO, and IDFD. Figure 2 illustrates the accuracy on CIFAR-10 running each of ID(tuned), IDFO, and IDFD. We can see that both IDFO and IDFD obtained higher peak ACC values than ID(tuned). In particular, IDFD yielded higher performance than ID over the entire learning process. IDFO performed better than the other two methods and obtained the highest ACC value in earlier epochs. However, the ACC widely fluctuated
over the learning process and dropped in later epochs. As analyzed in Section 3.2, our proposed IDFD achieves higher performance than ID and is more stable than IDFO.
4.2 DISCUSSION
4.2.1 ANALYSIS ON TEMPERATURE PARAMETER
Gaps between results of ID(original) and ID(tuned) in Table 1 show strong impact of temperature parameter. We theoretically and intuitively analyze the essential change caused by the temperature parameter in this subsection.
First, we consider why instance-level discrimination works and under what conditions. The difference in the performance of ID(original) and ID(tuned) suggests that the optimal distribution in latent space changes with the magnitude of τ. According to empirical investigation and theoretical analysis, we find that a large τ in L_I encourages data points to follow a compact distribution when minimizing the loss, while a small τ drives them to follow a uniform distribution. This means minimizing L_I with a large τ can reach a good clustering-friendly solution. This property is explained through examples and calculations; details are given in Appendix B.
In the definition of P(i|v) in Eq. (1), when τ is small, we compute the softmax on larger logits, resulting in higher predictions, and obtain a more confident model. From this viewpoint, we can leverage a small τ to decrease class entanglement if we can learn an accurate class-weight vector. In the general classification problem, since the weight of each class can be learned according to the real labels, it is preferable for models to be more confident. Most works therefore recommend setting a small value, such as τ = 0.07 Wu et al. (2018). In clustering, however, instance-level discrimination is used to learn similarity among samples, with only one sample in each class. Because the model is highly confident, each sample tends to become completely independent of the others. Similarity among samples is encouraged to approach zero, even for samples from the same class. This clearly deviates from the original intent of adopting instance-level discrimination, which is to learn sample entanglements under the condition that each sample remains discriminative. A larger τ than that used for classification is thus needed.
More experiments over different temperature settings on ID and IDFD were conducted on CIFAR-10. Figure 3 shows the accuracy of ID for τ = {0.07, 0.2, 0.5, 0.8, 1, 2, 5, 10}. We calculated the mean and standard deviation of ACC values over the last 500 epochs for each experiment. From the results, we can see that ID can suffer significant performance degradation when τ is too small or too large. This agrees with our analysis above. We also investigate the impact of τ2 by fixing τ = 1. Figure 4 shows the accuracy of the IDFD for τ2 = {0.1, 0.5, 1, 2, 3, 4, 5, 10}. Experimental results show that IDFD is relatively robust to the parameter τ2 and enables stable representation learning.
4.2.2 REPRESENTATION DISTRIBUTION AND FEATURE BEHAVIOR
Figure 5 visualizes the results of representations learned in four experiments: (a) ID(original), (b) ID(tuned), (c) IDFO with τ = 1 and α = 10, and (d) IDFD with τ = 1, τ2 = 2, and α = 1 on CIFAR10. 128-dimension representations were embedded into two dimensions by t-SNE (t-distributed stochastic neighbor embedding) Maaten & Hinton (2008). Colors indicate ground truth classes. The distributions for the ID(original) and ID(tuned) again show the significant difference between
them. Data distribution when τ = 1 is apparently more clustering-friendly than when τ = 0.07. Furthermore, compared with ID(tuned), IDFO and IDFD can separate samples from different classes with certain margins. IDFO tended to construct a patch-like distribution within one class. In contrast, IDFD maintained a tighter connection among samples of the same class and more distinct borders between different classes.
Figure 6 shows distribution of feature representations on ImageNet-10 learned by IDFD. We can see that representations of ImageNet-10 are clustering-friendly and even better than that of CIFAR-10. This is consistent with the results in Table 1 evaluated by metrics ACC, NMI, and ARI. In addition to that, we also plot sample images corresponding to points lying near the border between clusters. We can see that these samples are certainly similar in appearance.
We investigate the effects of the orthogonal and decorrelation constraints LFO and LF. Figure 7 illustrates the feature correlations of ID(tuned), IDFO, and IDFD on the CIFAR-10 dataset. We see that IDFO clearly decorrelates features, while IDFD retains a moderate level of feature correlation, between those of ID and IDFO. Taken together with Figure 2, these results suggest that the softmax formulation of IDFD alleviates the problem of strict orthogonality and enables stable representation learning.
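For reference, feature correlation matrices of the kind visualized in Figure 7 can be computed from learned representations with a short sketch like the one below; the matrix V holding the representations is a placeholder assumption.

```python
import numpy as np

# V: (n, d) matrix of learned representations, one row per sample (placeholder here).
V = np.random.randn(1000, 128)

F = V.T                                              # rows are the d feature vectors f_l
F = F / np.linalg.norm(F, axis=1, keepdims=True)     # normalize each feature vector
corr = F @ F.T                                       # (d, d) matrix of f_j^T f_l values

print(corr.shape)   # (128, 128); visualize with e.g. matplotlib's imshow for a Figure 7-style plot
```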
4.2.3 INVESTIGATION FOR PRACTICAL USE
We investigate the dependencies of our method on networks through experiments on other networks: ConvNet Wu et al. (2019), VGG16 Simonyan & Zisserman (2014), and ResNet34 He et al. (2016). Performance was evaluated using the CIFAR-10 dataset. Results listed in Table 2 show that IDFD
can work on various networks. IDFD outperforms ID(tuned), and the FD term shows a more obvious effect on these networks. To also confirm the effect of cooperation between LI and LF from the viewpoint of spectral clustering, combinations of AE and LF were evaluated in terms of clustering performance. We found that AE cannot benefit from LF as LI did. This result verified that LF has a deep relation with LI, and that IDFD is not a simple combination. We also investigate the importance of data augmentation for performance through experiments. Due to the page limit, our extended experiments are given in Appendix C.
5 CONCLUSION
We present a clustering-friendly representation learning method combining instance discrimination and feature decorrelation based on spectral clustering properties. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We analyzed why instance discrimination works for clustering and clarified the conditions. We designed a softmax-formulated feature decorrelation constraint for learning the latent space to realize stable improvement of clustering performance. We also explained the connection between our method and spectral clustering. The proposed representation learning method achieves accuracies comparable to state-of-the-art values on the CIFAR-10 and ImageNet-10 datasets with simple k-means. We also verified IDFD loss works on multiple neural network structures, and our method is expected to be effective for various kinds of problems.
A DATASETS AND EXPERIMENTAL SETUP
Five datasets were used to conduct experiments: CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009), STL-10 Coates et al. (2011), ImageNet-10 Deng et al. (2009), and ImageNet-Dog Deng et al. (2009). Table 3 lists the number of images, number of clusters, and image size of each dataset. Specifically, the training and testing sets of STL-10 were jointly used in our experiments. Images from the ImageNet subsets were resized to 96 × 96 × 3.
We adopted ResNet He et al. (2016) as the neural network architecture in our main experiments. For simplicity, we used ResNet18, which according to our preliminary experiments yields sufficiently high performance. The same architecture was used for all datasets except the input layer. In accordance with the experimental settings of Wu et al. (2018), the dimension of latent feature vectors was set to d = 128, and a stochastic gradient descent optimizer with momentum β = 0.9 was used. The learning rate lr was initialized to 0.03, then gradually scaled down after the first 600 epochs using a coefficient of 0.1 every 350 epochs. The total number of epochs was set to 2000, and the batch size was set to B = 128. Orthogonality constraint weights for IDFO were α = 10 for CIFAR-10 and CIFAR-100 and α = 0.5 for the STL-10 and ImageNet subsets. The weight for IDFO α was set according to the orders of magnitudes of the two losses LI and LFO. For IDFD, the weight α was simply fixed at 1. In the main experiments, we set the default temperature parameter value τ = 1 for ID(tuned), IDFO, and IDFD, and τ2 = 2 for IDFD.
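For reference, the optimizer and learning-rate schedule described above can be expressed in PyTorch roughly as follows; this is a sketch of the stated settings rather than the authors' exact training code, and the ResNet18 head shown here is a simplification.

```python
import torch
import torchvision

# ResNet18 backbone with a 128-dimensional output head (the input layer is adapted per dataset).
model = torchvision.models.resnet18(num_classes=128)
optimizer = torch.optim.SGD(model.parameters(), lr=0.03, momentum=0.9)

def adjust_learning_rate(optimizer, epoch, base_lr=0.03):
    # lr stays at base_lr for the first 600 epochs, then is scaled by 0.1 every 350 epochs.
    lr = base_lr if epoch < 600 else base_lr * (0.1 ** ((epoch - 600) // 350 + 1))
    for group in optimizer.param_groups:
        group["lr"] = lr
```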
B OPTIMAL SOLUTIONS OF CLUSTERING AND INSTANCE DISCRIMINATION
In Section 4.2.1, we concluded that minimizing LI under the condition that τ is large can reach a clustering-friendly solution. Details of the analysis and calculation are demonstrated with a two-dimensional toy model as follows.
Empirically, we observe that visually similar images tend to get similar assignment probabilities. Similar images can thus be projected to close locations in the latent space. This also motivated ID Wu et al. (2018). In the case of ID, similar images $x_i$ and $x_j$ yield respective highest probabilities $p_{ii}$ and $p_{jj}$, and also receive relatively high $p_{ij}$ and $p_{ji}$ values. This property is retained over the process of approaching the optimal solution. Because instance-level discrimination tries to maximally scatter embedded features of instances over the unit sphere Wu et al. (2018), all representations are uniformly spread over the latent space with each representation relatively similar to its surroundings; we call this the uniform case. We also consider another case that yields an optimal clustering solution, where all samples from the same class are compacted to one point and $k$ clusters are uniformly spread over the space. We call this the compact case. Figure 8 shows the representation distributions in the two cases. Because we normalize $v$, two-dimensional representations form a circle.
In the uniform case, $n$ representations are uniformly located on a circle with an angular interval of $\theta = 2\pi/n$, and the inner product between two neighboring representations is $\cos\theta$. Without loss of generality, we can start with an arbitrary point $v_i$ and orderly mark all samples as $v_{i+j}$. The cosine similarity between $v_i$ and $v_{i+j}$ can then be calculated by $v_{i+j}^T v_i = \cos j\theta$. Accordingly, the loss
Figure 8: Two extreme cases of representation distributions over two-dimensional space. Left: uniform. Right: compact.
Figure 9: exp(cos θ/τ) with different τ settings.
contributed by sample i in the uniform case can be calculated as
$$L^i_{uniform} = -\log\frac{\exp(1/\tau)}{\sum_{m=0}^{n-1}\exp(\cos m\theta/\tau)} = -\log\frac{\frac{1}{n}\exp(1/\tau)}{\frac{1}{n}\sum_{m=0}^{n-1}\exp(\cos m\theta/\tau)}. \quad (11)$$
Similarly, in the compact case, $n/k$ data from the same class are exactly compacted to a point and the $k$ corresponding points are located on a circle at an angular interval of $\theta' = 2\pi/k$. The inner product between an arbitrary start sample $v_i$ and the $j$-th sample can be calculated as $v_i^T v_{i+j} = \cos l\theta'$, where $l = j \bmod n/k$. The probability of assigning $i$ to the cluster with $j$ becomes $p_{ij} = \frac{\exp(\cos\theta'/\tau)}{\sum_{c=0}^{k-1}\frac{n}{k}\exp(\cos c\theta'/\tau)}$. Accordingly, the loss contributed by sample $i$ in the compact case can be calculated as
$$L^i_{compact} = -\log\frac{\exp(1/\tau)}{\sum_{c=0}^{k-1}\frac{n}{k}\exp(\cos c\theta'/\tau)} = -\log\frac{\frac{1}{n}\exp(1/\tau)}{\frac{1}{k}\sum_{c=0}^{k-1}\exp(\cos c\theta'/\tau)}. \quad (12)$$
Comparing Eq. (11) and (12), we see that the difference between $L^i_{uniform}$ and $L^i_{compact}$ comes only from the denominator part of the logarithm. These are two discrete forms of the same integral $\int \exp(\cos\theta/\tau)\,d\theta$. Clearly, $L^i_{uniform}$ equals $L^i_{compact}$ when $k, n \to +\infty$. We therefore need to consider only the general case where $n$ is sufficiently large and $k \ll n$. Figure 9 shows a plot of the function values $\exp(\frac{\cos\theta}{\tau})$ with different $\tau$ settings over the domain $\theta \in [0, 2\pi]$. We can see that the curve becomes flatter as $\tau$ increases. A flat function $f$ means that for an arbitrary $(\theta, \theta')$ pair in its domain of definition, we have $f(\theta) \approx f(\theta')$. In this situation, even with $k \ll n$, the difference between the summations of these two discrete functions is not large. Accordingly, we can say $L^i_{compact}$ is approximate to $L^i_{uniform}$ for a large $\tau$. In other words, minimizing $L_I$ can approach the compact situation where same-class samples assemble and differing samples separate. Learning instance-level discrimination for clustering is therefore reasonable.
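The argument can also be checked numerically. The short sketch below evaluates Eq. (11) and Eq. (12) for illustrative values of n, k, and τ; the particular numbers are arbitrary, and the gap between the two losses should shrink as τ grows.

```python
import numpy as np

def loss_uniform(n, tau):
    theta = 2 * np.pi / n
    denom = np.sum(np.exp(np.cos(np.arange(n) * theta) / tau))
    return -np.log(np.exp(1 / tau) / denom)                      # Eq. (11)

def loss_compact(n, k, tau):
    theta_p = 2 * np.pi / k
    denom = np.sum((n / k) * np.exp(np.cos(np.arange(k) * theta_p) / tau))
    return -np.log(np.exp(1 / tau) / denom)                      # Eq. (12)

for tau in [0.07, 1.0, 2.0]:
    u, c = loss_uniform(1000, tau), loss_compact(1000, 10, tau)
    print(f"tau={tau}: uniform={u:.3f}, compact={c:.3f}, gap={abs(u - c):.3f}")
```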
C EXTENDED EXPERIMENTS
In Section 4.2.3, we have reported some investigations of our method for practical use. Details about several important experiments are supplemented as follows.
C.1 IMPACT OF NETWORK ARCHITECTURE
As Table 2 shows, IDFD can be applied to various networks, and the performance gaps between IDFD and ID(tuned) on networks like ConvNet Wu et al. (2019) and VGG16 Simonyan & Zisserman (2014) are more significant than on ResNet He et al. (2016). We added the feature correlation matrix of VGG16 in Figure 10. IDFD on VGG16 obtained sparse correlations similar to the case of ResNet18 in Figure 7, while ID on VGG16 obtained denser and stronger correlations than ResNet18, presumably constructing redundant features that degraded clustering. In the case of VGG16, the feature decorrelation term LF exhibits a larger effect on clustering performance than in the case of ResNet.
Our proposed losses work on all network architectures, and we expect to introduce the losses to various networks that are suitable for individual problems.
C.2 COMBINATION OF AUTOENCODER AND FEATURE DECORRELATION
In order to further confirm the cooperation effect of instance discrimination and feature decorrelation from the viewpoint of spectral clustering, a combination of autoencoder and feature decorrelation was evaluated in terms of clustering performance. Autoencoder has been verified by datasets such as handwritten digits to be an effective method for deep clustering. In this experiment, we used ConvNet Wu et al. (2019) for the autoencoder architecture and trained it on the CIFAR-10 dataset. We applied k-means to representations learned from autoencoder only and autoencoder combined with feature decorrelation, which are called AE and AEFD, respectively. According to our experiments, the ACC value of AE was 26.0%, and the ACC value of AEFD was 22.4%. Compared to the improvement from ID to IDFD (from 26.8% to 42.0% as shown in Table 2), we see that AE cannot benefit from FD as ID. This result again indicates that FD has a deep relation with ID as we analyzed in Section 3.
C.3 IMPACT OF DATA AUGMENTATION
For reproduction of our results and practical use, we note that data augmentation (DA) has a strong impact on performance. DA is known to affect image classification and representation learning. As in Wu et al. (2018), several generic and accepted techniques, such as cropping and grayscale, were used for data augmentation in this work. The details of the augmentation follow the original code of Wu et al. (2018). In order to investigate the impact of DA, we conducted experiments on five datasets with and without DA and compared their clustering results. Table 4 shows the results. We can see that methods without DA suffered significant performance degradation for clustering, as well as for classification Chen et al. (2020). This reminds us not to ignore the effects of DA in practical use.
To further find the main factors affecting performance, we also executed experiments removing each technique used for DA. Taking CIFAR-10 as an example, the techniques used for data augmentation include ColorJitter, RandomResizedCrop, RandomGrayscale, and RandomHorizontalFlip. All these techniques are generic and easy to implement, and they have been integrated into general deep learning frameworks such as PyTorch. According to our experimental results shown in Figure 11, we find that RandomResizedCrop, RandomGrayscale, and ColorJitter have a strong effect on image clustering.
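As a concrete reference, a CIFAR-10 augmentation pipeline of the kind described above can be assembled from standard torchvision transforms; the parameter values below are illustrative rather than the exact settings, which follow Wu et al. (2018).

```python
from torchvision import transforms

# Illustrative values; the exact settings follow the augmentation code of Wu et al. (2018).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.2, 1.0)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.4),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
```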
For practice, we also applied IDFD to our private images produced by a manufacturing process. Generic DA techniques like those above were applied to these images. IDFD showed good performance on these images according to our experiments. This indicates that our method can be simply applied to practical images. For other types of data, such as text and time series, corresponding data augmentation techniques are needed to cooperate with our method. | 1. How does the proposed method improve the efficiency of feature representation for clustering analysis?
2. Can the proposed method integrate the feature representation model parameters and anchor updating steps in an end-to-end fashion?
3. How does the proposed method perform in terms of visualizing the learned latent features, particularly on large-scale datasets like ImageNet-10?
4. Are there any visualization results in the original image space that can demonstrate the effectiveness of the proposed method in mining visually meaningful concepts? | Review | Review
The authors proposed an improved deep-learning-based representation learning method that provides more efficient features for clustering analysis. (1) According to the comparison experiments on several widely used datasets, the integration of a softmax-formulated orthogonal constraint is able to provide a more stable latent feature representation. (2) As far as I know, widely used deep clustering methods tend to alternately optimize the feature representation model parameters and update the anchors provided by a clustering method such as k-means; I am wondering if the proposed method in this study could integrate the two steps in a truly end-to-end fashion. (3) I was deeply impressed by the evaluation metric values of this representation learning method, which are far above the state of the art. Although the authors provide some distribution illustrations of latent features on the CIFAR-10 dataset, what about the visualization on ImageNet-10? Besides, adding some 'real' visualization results in the original image space rather than the latent space could help to illustrate whether the proposed method can mine visually meaningful concepts from the view of visual content.
ICLR | Title
Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation
Abstract
Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. Representation learning often plays an important role in the effectiveness of deep clustering, and thus can be a principal cause of performance degradation. In this paper, we propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation. Our deep-learning-based representation learning method is motivated by the properties of classical spectral clustering. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We utilize an instance discrimination method in which learning individual instance classes leads to learning similarity among instances. Through detailed experiments and examination, we show that the approach can be adapted to learning a latent space for clustering. We design novel softmax-formulated decorrelation constraints for learning. In evaluations of image clustering using CIFAR-10 and ImageNet-10, our method achieves accuracy of 81.5% and 95.4%, respectively. We also show that the softmax-formulated constraints are compatible with various neural networks.
1 INTRODUCTION
Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. In a fundamental form, autoencoders are used for feature extraction, and classical clustering techniques such as k-means are serially applied to the features. Recent deep clustering techniques integrate learning processes of feature extraction and clustering, yielding high performance for large-scale datasets such as handwritten digits Hu et al. (2017); Shaham et al. (2018); Xie et al. (2016); Tao et al. (2018). However, those methods have fallen short when targets become more complex, as in the case of real-world photograph dataset CIFAR-10 Krizhevsky et al. (2009). Several works report powerful representation learning leads to improvement of clustering performance on complex datasets Chang et al. (2017); Wu et al. (2019). Learning representation is a key challenge to unsupervised clustering.
In order to learn representations for clustering, recent works utilize metric learning which automatically learns similarity functions from data Chang et al. (2017); Wu et al. (2019). They assign pseudo-labels or pseudo-graph to unlabeled data by similarity measures in latent space, and learn discriminative representations to cluster data. These works improve clustering performance on real world images such as CIFAR-10 and ImageNet-10, and indicate the impact of representation learning on clustering. Although features from learned similarity function and pseudo-labels work well for clustering, algorithms still seem to be heuristic; we design a novel algorithm which is based on knowledge from established clustering techniques. In this work, we exploit a core idea of spectral clustering which uses eigenvectors derived from similarities.
Spectral clustering has been theoretically and experimentally investigated, and known to outperform other traditional clustering methods Von Luxburg (2007). The algorithm involves similarity matrix construction, transformation from similarity matrix to Laplacian, and eigendecomposition. Based on
eigenvectors, data points are mapped into a lower dimensional representation which carries information of similarities and is preferable for clustering. We bring this idea of eigenvector representation into deep representation learning.
We design the representation learning with two aims: 1) learning similarities among instances; and 2) reducing correlations within features. The first corresponds to the Laplacian, and the second corresponds to the feature orthogonality constraints in the spectral clustering algorithm. A learning process integrating both is relevant to the eigendecomposition of the Laplacian matrix in spectral clustering.
For the first aim, we adopt the instance discrimination method presented in Wu et al. (2018), where each unlabeled instance is treated as its own distinct class, and discriminative representations are learned to distinguish between individual instance classes. This numerous-class discriminative learning enables learning partial but important features, such as small foreground objects in natural images. Wu et al. (2018) showed that the representation features retain apparent similarity among images and improve the performance of image classification by the nearest neighbor method. We extend their work to clustering tasks. We clarify that their softmax formulation works like the similarity matrix in spectral clustering under the condition that the temperature parameter τ, which was underexplored in Wu et al. (2018), is set to a larger value.
For the second aim, we introduce constraints which have the effect of making latent features orthogonal. Orthogonality is often an essential idea in dimension reduction methods such as principal components analysis, and it is preferable for latent features to be independent to ensure that redundant information is reduced. Orthogonality is also essential to a connection between proposed method and spectral clustering, as stated in Section 3.4. In addition to a simple soft orthogonal constraint, we design a novel softmax-formulated decorrelation constraint. Our softmax constraint is "softer" than the soft orthogonal constraint for learning independent feature spaces, but realizes stable improvement of clustering performance.
Finally, we combine instance discrimination and feature decorrelation into learning representation to improve the performance of complex image clustering. For the CIFAR-10 and ImageNet-10 datasets, our method achieves accuracy of 81.5% and 95.4%, respectively. Our PyTorch Paszke et al. (2019) implementation of IDFD is available at https://github.com/TTN-YKK/Clustering_friendly_representation_learning.
Our main contributions are as follows:
• We propose a clustering-friendly representation learning method combining instance discrimination and feature decorrelation based on spectral clustering properties.
• We adapt deep representation learning by instance discrimination to clustering and clarify the essential properties of the temperature parameter.
• We design a softmax-formulated orthogonal constraint for learning latent features and realize stable improvement of clustering performance.
• Our representation learning method achieves performance comparable to state-of-the-art levels for image clustering tasks with simple k-means.
2 RELATED WORK
Deep clustering methods offer state-of-the-art performance in various fields. Most early deep clustering methods, such as Vincent et al. (2010); Tian et al. (2014), are two-stage methods that apply clustering after learning low-dimensional representations of data in a nonlinear latent space. The autoencoder method proposed in Hinton & Salakhutdinov (2006) is one of the most effective methods for learning representations. Recent works have simultaneously performed representation learning and clustering Song et al. (2013); Xie et al. (2016); Yang et al. (2017); Guo et al. (2017); Tao et al. (2018). Several methods based on generative models have also been proposed Jiang et al. (2016); Dilokthanakul et al. (2016). These methods outperform conventional methods, and sometimes offer performance comparable to that of supervised learning for simple datasets. Deep-learning-based unsupervised image clustering is also being developed Chang et al. (2017); Wu et al. (2019); Ji et al. (2019); Gupta et al. (2020); Van Gansbeke et al. (2020).
Several approaches focus on learning discriminative representations via deep learning. Bojanowski & Joulin (2017) found a mapping between images on a uniformly discretized target space, and enforced their representations to resemble a distribution of pairwise relationships. Caron et al. (2018) applied pseudo-labels to output as supervision by k-means and then trained a deep neural network. Donahue et al. (2016) proposed bidirectional generative adversarial networks for learning generative models that map simple latent distributions to complex real distributions, in order for generators to capture semantic representations. Hjelm et al. (2018) proposed deep infomax to maximize mutual information between the input and output of an encoder. Wu et al. (2018) was motivated by observations in supervised learning that the probabilities of similar image classes become simultaneously high. They showed that discriminating individual instance classes leads to learning representations that retain similarities among data.
IIC Ji et al. (2019) and SCAN Van Gansbeke et al. (2020) are two recent works focusing on image clustering that obtained high performance. IIC Ji et al. (2019) directly learns semantic labels without learning representations, based on mutual information between image pairs. SCAN Van Gansbeke et al. (2020) focuses on the clustering phase and largely improved performance on top of a given pre-designed representation learning method. By contrast, we focus on learning a clustering-friendly representation space where objects can be simply clustered.
Our method exploits the idea of spectral clustering Shi & Malik (2000); Meila & Shi (2001); Von Luxburg (2007); Ng et al. (2002). From one perspective, spectral clustering finds a low dimensional embedding of data in the eigenspace of the Laplacian matrix, which is derived from pairwise similarities between data. By using the embedded representations, we can proceed to cluster the data by the k-means algorithm in the low-dimensional space. Spectral clustering often outperforms earlier algorithms such as k-means once pair similarities are properly calculated. Shaham et al. (2018) incorporated the concept of spectral clustering into a deep neural network structure. Similarities were calculated by learning a Siamese net Shaham & Lederman (2018) where the input positive and negative pairs were constructed according to the Euclidean distance.
3 PROPOSED METHOD
Given an unlabeled dataset X = {xi}ni=1 and a predefined number of clusters k, where xi denotes the ith sample, we perform the clustering task in two phases, namely, representation learning and clustering. This work focuses on the first phase, which aims to learn an embedding function v = fθ(x) mapping data x to representation v so that v is preferable for clustering. fθ is modeled as a deep neural network with parameter θ. We use V = {vi}ni=1 to denote the whole representation set.
3.1 INSTANCE DISCRIMINATION
We apply the instance discrimination method proposed by Wu et al. (2018) to learn clustering-friendly representations that capture similarity between instances. The objective function is formulated based on the softmax criterion. Each instance is assumed to represent a distinct class. For given data x1, . . . , xn, the corresponding representations are v1, . . . ,vn, and data xi is classified into the ith class. Accordingly, the weight vector for the ith class can be approximated by a vector vi. The probability of representation v being assigned into the ith class is
$$P(i|v) = \frac{\exp(v_i^T v/\tau)}{\sum_{j=1}^{n}\exp(v_j^T v/\tau)}, \quad (1)$$
where $v_j^T v$ measures how well $v$ matches the $j$th class, $\tau$ is a temperature parameter that controls the concentration of the distribution Hinton et al. (2015), and $v$ is normalized to $\|v\| = 1$. The objective maximizes the joint probability $\prod_{i=1}^{n} P_\theta(i|f_\theta(x_i))$ as
$$L_I = -\sum_{i=1}^{n}\log P(i|f_\theta(x_i)) = -\sum_{i=1}^{n}\log\left(\frac{\exp(v_i^T v_i/\tau)}{\sum_{j=1}^{n}\exp(v_j^T v_i/\tau)}\right). \quad (2)$$
Wu et al. (2018) shows that features obtained by minimizing this objective retain similarity between image instances and improve the performance of nearest neighbor classification. For clustering, we note that the parameter τ, which is underexplored in Wu et al. (2018), has a large impact on clustering performance. The effect of τ is discussed later, and experimental results are shown in Section 4.2.1.
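As an illustration of Eq. (2), a minimal PyTorch sketch of the instance discrimination loss with a memory bank is given below; it is a simplified reading of Wu et al. (2018) rather than the reference implementation, and the batch mean is used in place of the full sum.

```python
import torch
import torch.nn.functional as F

def instance_discrimination_loss(v, indices, memory_bank, tau=1.0):
    """L_I of Eq. (2) for a mini-batch.

    v:           (B, d) L2-normalized representations of the current batch
    indices:     (B,) dataset indices of the batch samples
    memory_bank: (n, d) L2-normalized stored representations of all n samples
    """
    logits = v @ memory_bank.t() / tau        # (B, n): v_j^T v / tau for every stored v_j
    return F.cross_entropy(logits, indices)   # -log softmax taken at each sample's own index

# After the optimizer step, the memory bank entries of the batch are typically refreshed,
# e.g. memory_bank[indices] = F.normalize(v.detach(), dim=1)
```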
3.2 FEATURE DECORRELATION
We define a set of latent feature vectors $f$ and use $f_l$ to denote the $l$th feature vector. The transposition of the latent vectors $V$ coincides with $\{f_l\}_{l=1}^{d}$, where $d$ is the dimensionality of the representations. The simple constraint for orthogonal features is
$$L_{FO} = \|VV^T - I\|^2 = \sum_{l=1}^{d}\left((f_l^T f_l - 1)^2 + \sum_{j=1, j\neq l}^{n}(f_j^T f_l)^2\right). \quad (3)$$
Our novel constraint is based on a softmax formulation of
$$Q(l|f) = \frac{\exp(f_l^T f/\tau_2)}{\sum_{m=1}^{d}\exp(f_m^T f/\tau_2)}, \quad (4)$$
Q(l|f) is analogous to P (i|v). Q(l|f) measures how correlated a feature vector is to itself and how dissimilar it is to others. τ2 is the temperature parameter. We formulate the feature decorrelation constraint as
$$L_F = -\sum_{l=1}^{d}\log Q(l|f) = \sum_{l=1}^{d}\left(-f_l^T f_l/\tau_2 + \log\sum_{j=1}^{d}\exp(f_j^T f_l/\tau_2)\right). \quad (5)$$
Both constraints in Eq. (3) and Eq. (5) aim to construct independent features. Conventionally, it is preferable for features to be independent to ensure that redundant information is reduced, and orthogonality is a common technique. Comparing Eq. (3) and Eq. (5), we can see that minimizing $L_F$ and $L_{FO}$ can result in a similar effect, $f_l^T f_l \to 1$ and $f_j^T f_l \to -1$ or $0$ ($l \neq j$), and both try to decorrelate latent features.
Our softmax constraint in Eq. (5) shows practical advantages in flexibility and stability. Eq. (3) is called a soft orthogonal constraint, but is still strict enough to force the features to be orthogonal. If d is larger than underlying structures that are hidden and unknown, all features are forcibly orthogonalized and the resultant features may not be appropriate. Softmax formulation allows off-diagonal elements to be non-zero and alleviates the problem of strict orthogonality.
Partial derivatives of $L_F$ and $L_{FO}$ with respect to $z_{jl} = f_j^T f_l$ are calculated as $\frac{\partial L_F}{\partial z_{jl}} = -\frac{1}{\tau_2}\delta_{jl} + \frac{1}{\tau_2}\frac{\exp(z_{jl}/\tau_2)}{\sum_{j}^{d}\exp(z_{jl}/\tau_2)}$ and $\frac{\partial L_{FO}}{\partial z_{jl}} = -2\delta_{jl} + 2z_{jl}$, where $\delta_{jl}$ is an indicator function. Since the derivatives nearly equal zero due to $z_{jl} = 1$ in the case of $j = l$, we focus on the case of $j \neq l$. When $j \neq l$, the ranges of the partial derivatives are $0 \leq \frac{\partial L_F}{\partial z_{jl}} \leq \frac{1}{\tau_2}$ and $-2 \leq \frac{\partial L_{FO}}{\partial z_{jl}} \leq 2$. The monotonicity of $L_F$ can lead to more stable convergence. The advantages of $L_F$ are confirmed by experiments in Section 4.
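A corresponding minimal PyTorch sketch of the feature decorrelation constraint in Eq. (5), computed within a mini-batch, is given below; it is again a simplified illustration rather than the reference implementation, and normalizing the feature vectors here is an assumption made for clarity.

```python
import torch
import torch.nn.functional as F

def feature_decorrelation_loss(v, tau2=2.0):
    """L_F of Eq. (5) computed within a mini-batch.

    v: (B, d) representations of the current batch; its columns are the feature vectors f_l.
    """
    f = F.normalize(v.t(), dim=1)                 # (d, B): one row per feature vector f_l
    logits = f @ f.t() / tau2                     # (d, d): f_j^T f_l / tau_2
    targets = torch.arange(f.size(0), device=v.device)
    return F.cross_entropy(logits, targets)       # -log Q(l|f_l) averaged over the d features
```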
3.3 OBJECTIVE FUNCTION AND LEARNING MODEL
Combining instance discrimination and feature decorrelation learning, we formulate our objective function LIDFD as follows:
$$L_{IDFD} = L_I + \alpha L_F, \quad (6)$$
where $\alpha$ is a weight that balances the contributions of the two terms $L_I$ and $L_F$.
Figure 1 shows the learning process for the motif of image clustering. Input images X are converted into feature representations V in a lower d-dimensional latent space, via nonlinear mapping with deep neural networks such as ResNet He et al. (2016). The d-dimensional vectors are simultaneously learned through instance discrimination and feature decorrelation. A clustering method, such as classical k-means clustering, is then used on the learned representations to obtain the clustering results.
Optimization can be performed by mini-batch training. To compute the probability $P(i|v)$ in Eq. (1), $\{v_j\}$ is needed for all images. Like Wu et al. (2018); Xiao et al. (2017), we maintain a feature memory bank for storing them. For $Q(l|f)$ in Eq. (4), all $\{f_m\}$ of $d$ dimensions in the current mini-batch can be obtained, so we simply calculate $Q(l|f)$ within the mini-batches. We combine $L_I$ and $L_{FO}$ to formulate an alternative loss $L_{IDFO}$ in Eq. (7),
$$L_{IDFO} = L_I + \alpha L_{FO}. \quad (7)$$
We refer to representation learning using LIDFD, LIDFO, and LI loss as instance discrimination and feature decorrelation (IDFD), instance discrimination and feature orthogonalization (IDFO), and instance discrimination (ID), respectively.
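Putting the pieces together, one training step of IDFD and the subsequent clustering phase can be sketched as follows, building on the two loss sketches above; the data loader, model, optimizer, and memory bank are assumed to exist, and the loader is assumed to yield each image together with its dataset index.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

alpha, tau, tau2 = 1.0, 1.0, 2.0
for images, indices in loader:                      # assumed to yield (augmented image, dataset index)
    v = F.normalize(model(images), dim=1)           # (B, 128) representations
    loss = instance_discrimination_loss(v, indices, memory_bank, tau) \
           + alpha * feature_decorrelation_loss(v, tau2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        memory_bank[indices] = v.detach()           # refresh the stored representations

# Clustering phase: simple k-means on the learned representations (k = 10 for CIFAR-10).
labels = KMeans(n_clusters=10).fit_predict(memory_bank.cpu().numpy())
```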
3.4 CONNECTION WITH SPECTRAL CLUSTERING
We explain the connection between IDFD and spectral clustering. We consider a fully connected graph consisting of all representation points, and the similarity matrix $W$ and degree matrix $D$ can be written as $W_{ij} = \exp(v_i^T v_j/\tau)$ and $D_{ii} = \sum_{m}^{n}\exp(v_i^T v_m/\tau)$. The loss function of spectral clustering Shaham et al. (2018) can be reformulated as
$$L_{SP} = \mathrm{Tr}(f^T L f) = \frac{1}{2}\sum_{k}\sum_{ij}^{n} w_{ij}(f_i^k - f_j^k)^2 = \frac{1}{2}\sum_{k}\sum_{ij}^{n}\exp\left(\frac{v_i^T v_j}{\tau}\right)\|v_i - v_j\|^2, \quad (8)$$
where $L$ is the Laplacian matrix and $f$ are the feature vectors. Spectral clustering is performed by minimizing $L_{SP}$ subject to the orthogonality condition on $f$, and when $L_{SP}$ takes its minimum value, $f$ becomes the eigenvectors of the Laplacian $L$. According to Section 3.2, minimizing $L_F$ can approximate the orthogonal condition. Under this condition, minimizing $L_I$ can approximate minimizing $L_{SP}$, which is explained as follows.
According to Eq. (2), minimizing the loss $L_I$ means maximizing $v_i^T v_i$ and minimizing $v_i^T v_j$. When $i = j$, we have $\|v_i - v_j\|^2 = 0$ and $L_{SP}$ becomes zero. We therefore need to consider only the influence on $L_{SP}$ from minimizing $v_i^T v_j$. As the $v$ are normalized, $L_{SP}$ can be rewritten using the cosine metric as
$$L_{SP} = \sum_{ij}^{n}\exp\left(\frac{\cos\theta}{\tau}\right)\sin^2\frac{\theta}{2}, \quad (9)$$
then $\partial L_{SP}/\partial\theta$ can be calculated as
$$\frac{\partial L_{SP}}{\partial\theta} = \frac{1}{\tau}\sin\theta\,(\tau - 1 + \cos\theta)\exp\left(\frac{\cos\theta}{\tau}\right). \quad (10)$$
According to Eq. (10), we get $\partial L_{SP}/\partial\theta \geq 0$ when $\tau \geq 2$. This means $L_{SP}$ monotonically decreases when we minimize $v_i^T v_j$. Therefore, the impact from minimizing $v_i^T v_j$ is good for minimizing $L_{SP}$. Even if $\tau$ is a little smaller than 2, because $\tau$ controls the scale of the derivatives and the range of $\theta$ where the derivative is negative, a large $\tau$ decreases the scale and narrows the range, resulting in a small influence on the total loss. From this viewpoint, the effectiveness of minimizing $L_I$ using a large $\tau$ is approximately the same as that of $L_{SP}$. By adding feature decorrelation constraints, IDFD becomes analogous to spectral clustering.
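The sign claim in Eq. (10) can be verified numerically; the short check below evaluates the derivative on a grid of θ in [0, π] for several τ values.

```python
import numpy as np

def dLsp_dtheta(theta, tau):
    # Eq. (10): (1/tau) * sin(theta) * (tau - 1 + cos(theta)) * exp(cos(theta)/tau)
    return np.sin(theta) * (tau - 1 + np.cos(theta)) * np.exp(np.cos(theta) / tau) / tau

theta = np.linspace(0, np.pi, 1000)
for tau in [0.07, 1.0, 2.0, 5.0]:
    frac = np.mean(dLsp_dtheta(theta, tau) >= 0)
    print(f"tau={tau}: derivative non-negative on {frac:.0%} of [0, pi]")
```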
4 EXPERIMENTS
We conducted experiments using five datasets: CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009), STL-10 Coates et al. (2011), ImageNet-10 Deng et al. (2009), and ImageNet-Dog Deng et al. (2009). We adopted ResNet18 He et al. (2016) as the neural network architecture in our main experiments. The same architecture is used for all datasets. Our experimental settings are in accordance with that of Wu et al. (2018). Data augmentation strategies often used on images are also adopted in experiments. Details about datasets and experimental setup are given in Appendix A.
For IDFD, the weight α is simply fixed at 1. Orthogonality constraint weights for IDFO were α = 10 on CIFAR-10 and CIFAR-100, and α = 0.5 on STL-10 and ImageNet subsets. The weight α was set according to the orders of magnitudes of losses. In the main experiments, we set temperature parameter τ = 1 for IDFO and IDFD, and τ2 = 2 for IDFD. In order to fully investigate our work, we also constructed two versions of instance discrimination (ID) that uses only LI loss, ID(original) with small τ = 0.07 and ID(tuned) with large τ = 1.
We compared ID(tuned), IDFO, and IDFD with ID(original) and six other competitive methods: clustering with an autoencoder (AE) Hinton & Salakhutdinov (2006), deep embedded clustering (DEC) Xie et al. (2016), deep adaptive image clustering (DAC) Chang et al. (2017), deep comprehensive correlation mining (DCCM) Wu et al. (2019), invariant information clustering (IIC) Ji et al. (2019), and semantic clustering by adopting nearest neighbors (SCAN) Van Gansbeke et al. (2020). We use three metrics to measure clustering performance: standard clustering accuracy (ACC), normalized mutual information (NMI), and adjusted Rand index (ARI). These metrics give values in [0, 1], with higher scores indicating more accurate clustering assignments.
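For completeness, NMI and ARI are available in scikit-learn, and ACC requires a Hungarian matching between cluster indices and class labels; a common sketch is shown below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """Best accuracy over one-to-one matchings of cluster ids to class ids (Hungarian algorithm)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(-cost)        # maximize the matched counts
    return cost[rows, cols].sum() / len(y_true)

# nmi = normalized_mutual_info_score(y_true, y_pred)
# ari = adjusted_rand_score(y_true, y_pred)
```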
4.1 MAIN RESULTS
Table 1 lists the best performances for each method. The results for the four methods AE, DEC, DAC, and DCCM are cited from Wu et al. (2019), and results for two methods IIC and SCAN are cited from Van Gansbeke et al. (2020). Comparing these results, we conclude that ID(tuned), IDFO, and IDFD, clearly outperform these methods excluding SCAN for all datasets, according to the metrics ACC, NMI, and ARI. For dataset CIFAR-10, ID(tuned), IDFO, and IDFD yielded ACC values of 77.6%, 82.8%, and 81.5%, respectively. For dataset ImageNet-10, ID(tuned), IDFO, and IDFD achieved ACC values of 93.7%, 94.2%, and 95.4%. The high performance is comparable with that of supervised and semi-supervised methods. Gaps between the results of ID(tuned) and those of IDFO and IDFD reflect the effect of the feature constraint term. The performance is improved for all datasets by introducing feature orthogonalization and decorrelation. Impressively, ID(tuned) significantly outperformed ID(original) on all datasets, showing strong impact of temperature parameter. This will be discussed separately in section 4.2.1.
In addition, we note that IDFD differs from SCAN in that IDFD focuses on representation learning, while SCAN focuses on clustering given a learned representation. Both SCAN and IDFD demonstrate significant performance improvements compared with other methods. The results of IDFD and SCAN show the effectiveness of efforts on both the representation learning and clustering phases of deep clustering.
We also examine the learning stability of ID(tuned), IDFO, and IDFD. Figure 2 illustrates the accuracy on CIFAR-10 when running each of ID(tuned), IDFO, and IDFD. We can see that both IDFO and IDFD obtained higher peak ACC values than ID(tuned). In particular, IDFD yielded higher performance than ID over the entire learning process. IDFO performed better than the other two methods and obtained the highest ACC value in earlier epochs. However, its ACC fluctuated widely
over the learning process and dropped in later epochs. As analyzed in Section 3.2, our proposed IDFD achieves higher performance than ID and is more stable than IDFO.
4.2 DISCUSSION
4.2.1 ANALYSIS ON TEMPERATURE PARAMETER
The gaps between the results of ID(original) and ID(tuned) in Table 1 show the strong impact of the temperature parameter. In this subsection, we theoretically and intuitively analyze the essential change caused by the temperature parameter.
First, we consider why instance-level discrimination works and under what conditions. The difference in performance between ID(original) and ID(tuned) suggests that the optimal distribution in the latent space changes with the magnitude of τ. According to empirical investigation and theoretical analysis, we find that a large τ in LI encourages data points to follow a compact distribution when minimizing the loss, while a small τ drives them to follow a uniform distribution. This means that minimizing LI with a large τ can reach a good clustering-friendly solution. This property is explained through illustrative examples and calculations; details are given in Appendix B.
In the definition of P(i|v) in Eq. (1), when τ is small, we compute the softmax on larger logits, resulting in higher predicted probabilities and a more confident model. From this viewpoint, we can leverage a small τ to decrease class entanglement if we can learn an accurate class-weight vector. In the general classification problem, since the weight of each class can be learned according to the real labels, it is preferable for models to be more confident. Most works therefore recommend setting a small value, such as τ = 0.07 Wu et al. (2018). In clustering, however, instance-level discrimination is used to learn similarity among samples, with only one sample in each class. Because the model is highly confident, each sample tends to become completely independent of the others. The similarity among samples is effectively encouraged to approach zero, even for samples from the same class. This clearly deviates from the original intent of adopting instance-level discrimination to learn sample entanglements under the condition that each sample can be discriminative. A larger τ than that used for classification is thus needed.
More experiments over different temperature settings on ID and IDFD were conducted on CIFAR-10. Figure 3 shows the accuracy of ID for τ = {0.07, 0.2, 0.5, 0.8, 1, 2, 5, 10}. We calculated the mean and standard deviation of ACC values over the last 500 epochs for each experiment. From the results, we can see that ID can suffer significant performance degradation when τ is too small or too large. This agrees with our analysis above. We also investigate the impact of τ2 by fixing τ = 1. Figure 4 shows the accuracy of the IDFD for τ2 = {0.1, 0.5, 1, 2, 3, 4, 5, 10}. Experimental results show that IDFD is relatively robust to the parameter τ2 and enables stable representation learning.
4.2.2 REPRESENTATION DISTRIBUTION AND FEATURE BEHAVIOR
Figure 5 visualizes the results of representations learned in four experiments: (a) ID(original), (b) ID(tuned), (c) IDFO with τ = 1 and α = 10, and (d) IDFD with τ = 1, τ2 = 2, and α = 1 on CIFAR10. 128-dimension representations were embedded into two dimensions by t-SNE (t-distributed stochastic neighbor embedding) Maaten & Hinton (2008). Colors indicate ground truth classes. The distributions for the ID(original) and ID(tuned) again show the significant difference between
them. Data distribution when τ = 1 is apparently more clustering-friendly than when τ = 0.07. Furthermore, compared with ID(tuned), IDFO and IDFD can separate samples from different classes with certain margins. IDFO tended to construct a patch-like distribution within one class. In contrast, IDFD maintained a tighter connection among samples of the same class and more distinct borders between different classes.
Figure 6 shows distribution of feature representations on ImageNet-10 learned by IDFD. We can see that representations of ImageNet-10 are clustering-friendly and even better than that of CIFAR-10. This is consistent with the results in Table 1 evaluated by metrics ACC, NMI, and ARI. In addition to that, we also plot sample images corresponding to points lying near the border between clusters. We can see that these samples are certainly similar in appearance.
We investigate the effects of the orthogonal and decorrelation constraints LFO and LF. Figure 7 illustrates the feature correlations of ID(tuned), IDFO, and IDFD on the CIFAR-10 dataset. We see that IDFO clearly decorrelates features, while IDFD retains a moderate level of feature correlation, between those of ID and IDFO. Taken together with Figure 2, these results suggest that the softmax formulation of IDFD alleviates the problem of strict orthogonality and enables stable representation learning.
4.2.3 INVESTIGATION FOR PRACTICAL USE
We investigate the dependencies of our method on networks through experiments on other networks: ConvNet Wu et al. (2019), VGG16 Simonyan & Zisserman (2014), and ResNet34 He et al. (2016). Performance was evaluated using the CIFAR-10 dataset. Results listed in Table 2 show that IDFD
can work on various networks. IDFD outperforms ID(tuned), and the FD term shows a more obvious effect on these networks. To also confirm the effect of cooperation between LI and LF from the viewpoint of spectral clustering, combinations of AE and LF were evaluated in terms of clustering performance. We found that AE cannot benefit from LF as LI did. This result verified that LF has a deep relation with LI, and that IDFD is not a simple combination. We also investigate the importance of data augmentation for performance through experiments. Due to the page limit, our extended experiments are given in Appendix C.
5 CONCLUSION
We present a clustering-friendly representation learning method combining instance discrimination and feature decorrelation based on spectral clustering properties. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We analyzed why instance discrimination works for clustering and clarified the conditions. We designed a softmax-formulated feature decorrelation constraint for learning the latent space to realize stable improvement of clustering performance. We also explained the connection between our method and spectral clustering. The proposed representation learning method achieves accuracies comparable to state-of-the-art values on the CIFAR-10 and ImageNet-10 datasets with simple k-means. We also verified IDFD loss works on multiple neural network structures, and our method is expected to be effective for various kinds of problems.
A DATASETS AND EXPERIMENTAL SETUP
Five datasets were used to conduct experiments: CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009), STL-10 Coates et al. (2011), ImageNet-10 Deng et al. (2009), and ImageNet-Dog Deng et al. (2009). Table 3 lists the number of images, number of clusters, and image size of each dataset. Specifically, the training and testing sets of STL-10 were jointly used in our experiments. Images from the ImageNet subsets were resized to 96 × 96 × 3.
We adopted ResNet He et al. (2016) as the neural network architecture in our main experiments. For simplicity, we used ResNet18, which according to our preliminary experiments yields sufficiently high performance. The same architecture was used for all datasets except the input layer. In accordance with the experimental settings of Wu et al. (2018), the dimension of latent feature vectors was set to d = 128, and a stochastic gradient descent optimizer with momentum β = 0.9 was used. The learning rate lr was initialized to 0.03, then gradually scaled down after the first 600 epochs using a coefficient of 0.1 every 350 epochs. The total number of epochs was set to 2000, and the batch size was set to B = 128. Orthogonality constraint weights for IDFO were α = 10 for CIFAR-10 and CIFAR-100 and α = 0.5 for the STL-10 and ImageNet subsets. The weight for IDFO α was set according to the orders of magnitudes of the two losses LI and LFO. For IDFD, the weight α was simply fixed at 1. In the main experiments, we set the default temperature parameter value τ = 1 for ID(tuned), IDFO, and IDFD, and τ2 = 2 for IDFD.
B OPTIMAL SOLUTIONS OF CLUSTERING AND INSTANCE DISCRIMINATION
In Section 4.2.1, we concluded that minimizing LI under the condition that τ is large can reach a clustering-friendly solution. Details of the analysis and calculation are demonstrated with a two-dimensional toy model as follows.
Empirically, we observe that visually similar images tend to get similar assignment probabilities. Similar images can thus be projected to close locations in the latent space. This also motivated ID Wu et al. (2018). In the case of ID, similar images $x_i$ and $x_j$ yield respective highest probabilities $p_{ii}$ and $p_{jj}$, and also receive relatively high $p_{ij}$ and $p_{ji}$ values. This property is retained over the process of approaching the optimal solution. Because instance-level discrimination tries to maximally scatter embedded features of instances over the unit sphere Wu et al. (2018), all representations are uniformly spread over the latent space with each representation relatively similar to its surroundings; we call this the uniform case. We also consider another case that yields an optimal clustering solution, where all samples from the same class are compacted to one point and $k$ clusters are uniformly spread over the space. We call this the compact case. Figure 8 shows the representation distributions in the two cases. Because we normalize $v$, two-dimensional representations form a circle.
In the uniform case, $n$ representations are uniformly located on a circle with an angular interval of $\theta = 2\pi/n$, and the inner product between two neighboring representations is $\cos\theta$. Without loss of generality, we can start with an arbitrary point $v_i$ and orderly mark all samples as $v_{i+j}$. The cosine similarity between $v_i$ and $v_{i+j}$ can then be calculated by $v_{i+j}^T v_i = \cos j\theta$. Accordingly, the loss
Figure 8: Two extreme cases of representation distributions over two-dimensional space. Left: uniform. Right: compact.
Figure 9: exp(cos θ/τ) with different τ settings.
contributed by sample i in the uniform case can be calculated as
$$L^i_{uniform} = -\log\frac{\exp(1/\tau)}{\sum_{m=0}^{n-1}\exp(\cos m\theta/\tau)} = -\log\frac{\frac{1}{n}\exp(1/\tau)}{\frac{1}{n}\sum_{m=0}^{n-1}\exp(\cos m\theta/\tau)}. \quad (11)$$
Similarly, in the compact case, $n/k$ data from the same class are exactly compacted to a point and the $k$ corresponding points are located on a circle at an angular interval of $\theta' = 2\pi/k$. The inner product between an arbitrary start sample $v_i$ and the $j$-th sample can be calculated as $v_i^T v_{i+j} = \cos l\theta'$, where $l = j \bmod n/k$. The probability of assigning $i$ to the cluster with $j$ becomes $p_{ij} = \frac{\exp(\cos\theta'/\tau)}{\sum_{c=0}^{k-1}\frac{n}{k}\exp(\cos c\theta'/\tau)}$. Accordingly, the loss contributed by sample $i$ in the compact case can be calculated as
$$L^i_{compact} = -\log\frac{\exp(1/\tau)}{\sum_{c=0}^{k-1}\frac{n}{k}\exp(\cos c\theta'/\tau)} = -\log\frac{\frac{1}{n}\exp(1/\tau)}{\frac{1}{k}\sum_{c=0}^{k-1}\exp(\cos c\theta'/\tau)}. \quad (12)$$
Comparing Eq. (11) and (12), we see that the difference between $L^i_{uniform}$ and $L^i_{compact}$ comes only from the denominator part of the logarithm. These are two discrete forms of the same integral $\int \exp(\cos\theta/\tau)\,d\theta$. Clearly, $L^i_{uniform}$ equals $L^i_{compact}$ when $k, n \to +\infty$. We therefore need to consider only the general case where $n$ is sufficiently large and $k \ll n$. Figure 9 shows a plot of the function values $\exp(\frac{\cos\theta}{\tau})$ with different $\tau$ settings over the domain $\theta \in [0, 2\pi]$. We can see that the curve becomes flatter as $\tau$ increases. A flat function $f$ means that for an arbitrary $(\theta, \theta')$ pair in its domain of definition, we have $f(\theta) \approx f(\theta')$. In this situation, even with $k \ll n$, the difference between the summations of these two discrete functions is not large. Accordingly, we can say $L^i_{compact}$ is approximate to $L^i_{uniform}$ for a large $\tau$. In other words, minimizing $L_I$ can approach the compact situation where same-class samples assemble and differing samples separate. Learning instance-level discrimination for clustering is therefore reasonable.
C EXTENDED EXPERIMENTS
In Section 4.2.3, we have reported some investigations of our method for practical use. Details about several important experiments are supplemented as follows.
C.1 IMPACT OF NETWORK ARCHITECTURE
As Table 2 shows, IDFD can be applied to various networks, and the performance gaps between IDFD and ID(tuned) on networks like ConvNet Wu et al. (2019) and VGG16 Simonyan & Zisserman (2014) are more significant than on ResNet He et al. (2016). We added the feature correlation matrix of VGG16 in Figure 10. IDFD on VGG16 obtained sparse correlations similar to the case of ResNet18 in Figure 7, while ID on VGG16 obtained denser and stronger correlations than ResNet18, presumably constructing redundant features that degraded clustering. In the case of VGG16, the feature decorrelation term LF exhibits a larger effect on clustering performance than in the case of ResNet.
Our proposed losses work on all network architectures, and we expect to introduce the losses to various networks that are suitable for individual problems.
C.2 COMBINATION OF AUTOENCODER AND FEATURE DECORRELATION
In order to further confirm the cooperation effect of instance discrimination and feature decorrelation from the viewpoint of spectral clustering, a combination of autoencoder and feature decorrelation was evaluated in terms of clustering performance. Autoencoder has been verified by datasets such as handwritten digits to be an effective method for deep clustering. In this experiment, we used ConvNet Wu et al. (2019) for the autoencoder architecture and trained it on the CIFAR-10 dataset. We applied k-means to representations learned from autoencoder only and autoencoder combined with feature decorrelation, which are called AE and AEFD, respectively. According to our experiments, the ACC value of AE was 26.0%, and the ACC value of AEFD was 22.4%. Compared to the improvement from ID to IDFD (from 26.8% to 42.0% as shown in Table 2), we see that AE cannot benefit from FD as ID. This result again indicates that FD has a deep relation with ID as we analyzed in Section 3.
C.3 IMPACT OF DATA AUGMENTATION
For reproduction of our results and practical use, we note that data augmentation (DA) has a strong impact on performance. DA is known to affect image classification and representation learning. As in Wu et al. (2018), several generic and accepted techniques, such as cropping and grayscale, were used for data augmentation in this work. The details of the augmentation follow the original code of Wu et al. (2018). In order to investigate the impact of DA, we conducted experiments on five datasets with and without DA and compared their clustering results. Table 4 shows the results. We can see that methods without DA suffered significant performance degradation for clustering, as well as for classification Chen et al. (2020). This reminds us not to ignore the effects of DA in practical use.
To further find the main factors affecting performance, we also executed experiments removing each technique used for DA. Taking CIFAR-10 as an example, the techniques used for data augmentation include ColorJitter, RandomResizedCrop, RandomGrayscale, and RandomHorizontalFlip. All these techniques are generic and easy to implement, and they have been integrated into general deep learning frameworks such as PyTorch. According to our experimental results shown in Figure 11, we find that RandomResizedCrop, RandomGrayscale, and ColorJitter have a strong effect on image clustering.
For practice, we also applied IDFD to our private images produced by a manufacturing process. Generic DA techniques like those above were applied to these images. IDFD showed good performance on these images according to our experiments. This indicates that our method can be simply applied to practical images. For other types of data, such as text and time series, corresponding data augmentation techniques are needed to cooperate with our method. | 1. What are the key contributions and novel aspects introduced by the paper in clustering-friendly representation learning?
2. What are the strengths of the proposed approach, particularly in combining instance discrimination and feature decorrelation losses?
3. Do you have any concerns or questions regarding the two loss terms in Equation (6) and their contributions to the method's performance?
4. Can you provide more insights into the motivation behind Equation (3), specifically the reasoning behind the equality between the second and third expressions? | Review | Review
This paper proposes a clustering-friendly representation learning method using instance discrimination and feature decorrelation. Instance discrimination loss and feature decorrelation loss are combined to optimize the network. The paper is well written and the experimental results are good. I have some questions about this paper:
There is no ablation analysis about the two loss terms in Eq.(6). What about the contributions of the two loss terms?
What is the motivation of Eq. (3)? I.e., why does the "=" hold between the second and third expressions?
ICLR | Title
Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation
Abstract
Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. Representation learning often plays an important role in the effectiveness of deep clustering, and thus can be a principal cause of performance degradation. In this paper, we propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation. Our deep-learning-based representation learning method is motivated by the properties of classical spectral clustering. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We utilize an instance discrimination method in which learning individual instance classes leads to learning similarity among instances. Through detailed experiments and examination, we show that the approach can be adapted to learning a latent space for clustering. We design novel softmax-formulated decorrelation constraints for learning. In evaluations of image clustering using CIFAR-10 and ImageNet-10, our method achieves accuracy of 81.5% and 95.4%, respectively. We also show that the softmax-formulated constraints are compatible with various neural networks.
1 INTRODUCTION
Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. In a fundamental form, autoencoders are used for feature extraction, and classical clustering techniques such as k-means are serially applied to the features. Recent deep clustering techniques integrate learning processes of feature extraction and clustering, yielding high performance for large-scale datasets such as handwritten digits Hu et al. (2017); Shaham et al. (2018); Xie et al. (2016); Tao et al. (2018). However, those methods have fallen short when targets become more complex, as in the case of real-world photograph dataset CIFAR-10 Krizhevsky et al. (2009). Several works report powerful representation learning leads to improvement of clustering performance on complex datasets Chang et al. (2017); Wu et al. (2019). Learning representation is a key challenge to unsupervised clustering.
In order to learn representations for clustering, recent works utilize metric learning which automatically learns similarity functions from data Chang et al. (2017); Wu et al. (2019). They assign pseudo-labels or pseudo-graph to unlabeled data by similarity measures in latent space, and learn discriminative representations to cluster data. These works improve clustering performance on real world images such as CIFAR-10 and ImageNet-10, and indicate the impact of representation learning on clustering. Although features from learned similarity function and pseudo-labels work well for clustering, algorithms still seem to be heuristic; we design a novel algorithm which is based on knowledge from established clustering techniques. In this work, we exploit a core idea of spectral clustering which uses eigenvectors derived from similarities.
Spectral clustering has been theoretically and experimentally investigated, and known to outperform other traditional clustering methods Von Luxburg (2007). The algorithm involves similarity matrix construction, transformation from similarity matrix to Laplacian, and eigendecomposition. Based on
eigenvectors, data points are mapped into a lower dimensional representation which carries information of similarities and is preferable for clustering. We bring this idea of eigenvector representation into deep representation learning.
We design the representation learning with two aims: 1) learning similarities among instances; and 2) reducing correlations within features. The first corresponds to the Laplacian, and the second corresponds to the feature orthogonality constraints in the spectral clustering algorithm. A learning process integrating both is therefore closely related to the eigendecomposition of the Laplacian matrix in spectral clustering.
For the first aim, we adopt the instance discrimination method presented in Wu et al. (2018), where each unlabeled instance is treated as its own distinct class, and discriminative representations are learned to distinguish between individual instance classes. This numerous-class discriminative learning enables learning partial but important features, such as small foreground objects in natural images. Wu et al. (2018) showed that the representation features retain apparent similarity among images and improve the performance of image classification by the nearest neighbor method. We extend their work to clustering tasks. We clarify that their softmax formulation works like the similarity matrix in spectral clustering under the condition that the temperature parameter τ, which was underexplored in Wu et al. (2018), is set to a larger value.
For the second aim, we introduce constraints which have the effect of making latent features orthogonal. Orthogonality is often an essential idea in dimension reduction methods such as principal components analysis, and it is preferable for latent features to be independent to ensure that redundant information is reduced. Orthogonality is also essential to a connection between proposed method and spectral clustering, as stated in Section 3.4. In addition to a simple soft orthogonal constraint, we design a novel softmax-formulated decorrelation constraint. Our softmax constraint is "softer" than the soft orthogonal constraint for learning independent feature spaces, but realizes stable improvement of clustering performance.
Finally, we combine instance discrimination and feature decorrelation into learning representation to improve the performance of complex image clustering. For the CIFAR-10 and ImageNet-10 datasets, our method achieves accuracy of 81.5% and 95.4%, respectively. Our PyTorch Paszke et al. (2019) implementation of IDFD is available at https://github.com/TTN-YKK/Clustering_ friendly_representation_learning.
Our main contributions are as follows:
• We propose a clustering-friendly representation learning method combining instance discrimination and feature decorrelation based on spectral clustering properties.
• We adapt deep representation learning by instance discrimination to clustering and clarify the essential properties of the temperature parameter.
• We design a softmax-formulated orthogonal constraint for learning latent features and realize stable improvement of clustering performance.
• Our representation learning method achieves performance comparable to state-of-the-art levels for image clustering tasks with simple k-means.
2 RELATED WORK
Deep clustering methods offer state-of-the-art performance in various fields. Most early deep clustering methods, such as Vincent et al. (2010); Tian et al. (2014), are two-stage methods that apply clustering after learning low-dimensional representations of data in a nonlinear latent space. The autoencoder method proposed in Hinton & Salakhutdinov (2006) is one of the most effective methods for learning representations. Recent works have simultaneously performed representation learning and clustering Song et al. (2013); Xie et al. (2016); Yang et al. (2017); Guo et al. (2017); Tao et al. (2018). Several methods based on generative models have also been proposed Jiang et al. (2016); Dilokthanakul et al. (2016). These methods outperform conventional methods, and sometimes offer performance comparable to that of supervised learning for simple datasets. Deep-learning-based unsupervised image clustering is also being developed Chang et al. (2017); Wu et al. (2019); Ji et al. (2019); Gupta et al. (2020); Van Gansbeke et al. (2020).
Several approaches focus on learning discriminative representations via deep learning. Bojanowski & Joulin (2017) found a mapping between images on a uniformly discretized target space, and enforced their representations to resemble a distribution of pairwise relationships. Caron et al. (2018) applied pseudo-labels to output as supervision by k-means and then trained a deep neural network. Donahue et al. (2016) proposed bidirectional generative adversarial networks for learning generative models that map simple latent distributions to complex real distributions, in order for generators to capture semantic representations. Hjelm et al. (2018) proposed deep infomax to maximize mutual information between the input and output of an encoder. Wu et al. (2018) was motivated by observations in supervised learning that the probabilities of similar image classes become simultaneously high. They showed that discriminating individual instance classes leads to learning representations that retain similarities among data.
IIC Ji et al. (2019) and SCAN Van Gansbeke et al. (2020) are two recent works focusing on image clustering and obtained high performance. IIC Ji et al. (2019) directly learns semantic labels without learning representations, based on mutual information between image pairs. SCAN Van Gansbeke et al. (2020) focuses on the clustering phase and largely improved performance based on a given pre-designed representation learning. By contrast, we focus on learning a clustering-friendly representation space where objects can be simply clustered.
Our method exploits the idea of spectral clustering Shi & Malik (2000); Meila & Shi (2001); Von Luxburg (2007); Ng et al. (2002). From one perspective, spectral clustering finds a low dimensional embedding of data in the eigenspace of the Laplacian matrix, which is derived from pairwise similarities between data. By using the embedded representations, we can proceed to cluster the data by the k-means algorithm in the low-dimensional space. Spectral clustering often outperforms earlier algorithms such as k-means once pair similarities are properly calculated. Shaham et al. (2018) incorporated the concept of spectral clustering into a deep neural network structure. Similarities were calculated by learning a Siamese net Shaham & Lederman (2018) where the input positive and negative pairs were constructed according to the Euclidean distance.
3 PROPOSED METHOD
Given an unlabeled dataset X = {xi}ni=1 and a predefined number of clusters k, where xi denotes the ith sample, we perform the clustering task in two phases, namely, representation learning and clustering. This work focuses on the first phase, which aims to learn an embedding function v = fθ(x) mapping data x to representation v so that v is preferable for clustering. fθ is modeled as a deep neural network with parameter θ. We use V = {vi}ni=1 to denote the whole representation set.
3.1 INSTANCE DISCRIMINATION
We apply the instance discrimination method proposed by Wu et al. (2018) to learn clustering-friendly representations that capture similarity between instances. The objective function is formulated based on the softmax criterion. Each instance is assumed to represent a distinct class. For given data x1, . . . , xn, the corresponding representations are v1, . . . ,vn, and data xi is classified into the ith class. Accordingly, the weight vector for the ith class can be approximated by a vector vi. The probability of representation v being assigned into the ith class is
P(i|v) = exp(v_i^T v/τ) / Σ_{j=1}^{n} exp(v_j^T v/τ), (1)
where v_j^T v measures how well v matches the jth class, τ is a temperature parameter that controls the concentration of the distribution Hinton et al. (2015), and v is normalized to ||v|| = 1. The objective maximizes the joint probability ∏_{i=1}^{n} Pθ(i|fθ(xi)) as
LI = − Σ_{i=1}^{n} log P(i|fθ(xi)) = − Σ_{i=1}^{n} log( exp(v_i^T v_i/τ) / Σ_{j=1}^{n} exp(v_j^T v_i/τ) ). (2)
Wu et al. (2018) shows that features obtained by minimizing the objective retain similarity between image instances and improve the performance of nearest neighbor classification. For clustering, we note that the parameter τ, which is underexplored in Wu et al. (2018), has a large impact on clustering performance. The effect of τ is discussed later, and experimental results are shown in Section 4.2.1.
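As a minimal PyTorch-style sketch of the loss in Eq. (2) (not the authors' released implementation; the memory-bank interface and the default temperature are assumptions), the instance discrimination term is a standard cross-entropy over n instance "classes":

```python
import torch
import torch.nn.functional as F

def instance_discrimination_loss(v, indices, memory_bank, tau=1.0):
    """Eq. (2): every instance is treated as its own class.

    v           : (B, d) L2-normalized representations of the current mini-batch
    indices     : (B,)   long tensor with the dataset index of each sample
    memory_bank : (n, d) L2-normalized stored representations v_j, acting as class weights
    tau         : temperature controlling the concentration of P(i|v)
    """
    logits = v @ memory_bank.t() / tau        # logits[b, j] = v_j^T v_b / tau
    return F.cross_entropy(logits, indices)   # mean of -log P(i | v_i) over the batch
```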
3.2 FEATURE DECORRELATION
We define a set of latent feature vectors f and use f_l to denote the lth feature vector. The transpose of the latent vectors V coincides with {f_l}_{l=1}^{d}, where d is the dimensionality of the representations. The simple constraint for orthogonal features is,
LFO = ||V V^T − I||^2 = Σ_{l=1}^{d} ( (f_l^T f_l − 1)^2 + Σ_{j=1, j≠l}^{d} (f_j^T f_l)^2 ). (3)
Our novel constraint is based on a softmax formulation of
Q(l|f) = exp(f_l^T f/τ2) / Σ_{m=1}^{d} exp(f_m^T f/τ2), (4)
Q(l|f) is analogous to P(i|v). Q(l|f) measures how correlated a feature vector is to itself and how dissimilar it is to others. τ2 is the temperature parameter. We formulate the feature decorrelation constraint as
LF = − Σ_{l=1}^{d} log Q(l|f_l) = Σ_{l=1}^{d} ( − f_l^T f_l/τ2 + log Σ_{j=1}^{d} exp(f_j^T f_l/τ2) ). (5)
Both constraints in Eq. (3) and Eq. (5) aim to construct independent features. Conventionally, it is preferable for features to be independent to ensure that redundant information is reduced, and orthogonality is a common technique. Comparing Eq. (3) and Eq. (5), we can see that minimizing LF and LFO can result in a similar effect, f_l^T f_l → 1 and f_j^T f_l → −1 or 0 (l ≠ j), and both try to decorrelate latent features.
Our softmax constraint in Eq. (5) shows practical advantages in flexibility and stability. Eq. (3) is called a soft orthogonal constraint, but it is still strict enough to force the features to be orthogonal. If d is larger than the number of underlying structures, which are hidden and unknown, all features are forcibly orthogonalized and the resultant features may not be appropriate. The softmax formulation allows off-diagonal elements to be non-zero and alleviates the problem of strict orthogonality.
Partial derivatives of LF and LFO with respect to z_jl = f_j^T f_l are calculated as ∂LF/∂z_jl = −(1/τ2) δ_jl + (1/τ2) exp(z_jl/τ2) / Σ_{m=1}^{d} exp(z_ml/τ2) and ∂LFO/∂z_jl = −2δ_jl + 2z_jl, where δ_jl is an indicator function. Since the derivatives nearly equal zero due to z_jl = 1 in the case of j = l, we focus on the case of j ≠ l. When j ≠ l, the ranges of the partial derivatives are 0 ≤ ∂LF/∂z_jl ≤ 1/τ2 and −2 ≤ ∂LFO/∂z_jl ≤ 2. The monotonicity of LF can lead to more stable convergence. The advantages of LF are confirmed by experiments in Section 4.
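For illustration, the two feature constraints in Eq. (3) and Eq. (5) can be sketched as below (a rough sketch, not the released IDFD code; normalizing each feature vector over the batch is an assumption):

```python
import torch
import torch.nn.functional as F

def soft_orthogonal_loss(V):
    """Eq. (3): ||V V^T - I||^2 over feature vectors f_l (the columns of V)."""
    f = F.normalize(V.t(), dim=1)                      # (d, B), each f_l normalized over the batch
    corr = f @ f.t()                                   # (d, d) feature correlation matrix
    eye = torch.eye(corr.size(0), device=corr.device)
    return ((corr - eye) ** 2).sum()

def feature_decorrelation_loss(V, tau2=2.0):
    """Eq. (5): softmax-formulated decorrelation, a softer alternative to Eq. (3)."""
    f = F.normalize(V.t(), dim=1)                      # (d, B)
    logits = f @ f.t() / tau2                          # logits[l, j] = f_l^T f_j / tau2 (symmetric)
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels, reduction='sum')   # -sum_l log Q(l | f_l)
```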
3.3 OBJECTIVE FUNCTION AND LEARNING MODEL
Combining instance discrimination and feature decorrelation learning, we formulate our objective function LIDFD as follows:
LIDFD = LI + αLF , (6)
where α is a weight that balances the contributions of the two terms LI and LF.
Figure 1 shows the learning process for the motif of image clustering. Input images X are converted into feature representations V in a lower d-dimensional latent space, via nonlinear mapping with deep neural networks such as ResNet He et al. (2016). The d-dimensional vectors are simultaneously learned through instance discrimination and feature decorrelation. A clustering method, such as classical k-means clustering, is then used on the learned representations to obtain the clustering results.
Optimization can be performed by mini-batch training. To compute the probability P(i|v) in Eq. (1), {vj} is needed for all images. Like Wu et al. (2018); Xiao et al. (2017), we maintain a feature memory bank for storing them. For Q(l|f) in Eq. (4), all {fm} of d dimensions in the current mini-batch can be obtained, so we simply calculate Q(l|f) within the mini-batches. We combine LI and LFO to formulate an alternative loss LIDFO in Eq. (7),
LIDFO = LI + αLFO. (7)
We refer to representation learning using LIDFD, LIDFO, and LI loss as instance discrimination and feature decorrelation (IDFD), instance discrimination and feature orthogonalization (IDFO), and instance discrimination (ID), respectively.
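A sketch of one IDFD mini-batch update of Eq. (6), reusing the loss sketches above, could look as follows; the backbone call, the momentum-based memory-bank update, and the hyperparameter values are illustrative assumptions rather than the exact released implementation.

```python
import torch
import torch.nn.functional as F

def idfd_step(backbone, memory_bank, images, indices, optimizer,
              alpha=1.0, tau=1.0, tau2=2.0, momentum=0.5):
    """One update of L_IDFD = L_I + alpha * L_F from Eq. (6)."""
    v = F.normalize(backbone(images), dim=1)                    # (B, d) representations
    loss = (instance_discrimination_loss(v, indices, memory_bank, tau)
            + alpha * feature_decorrelation_loss(v, tau2))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Assumed momentum update of the memory-bank entries for this batch.
    with torch.no_grad():
        mixed = momentum * memory_bank[indices] + (1.0 - momentum) * v
        memory_bank[indices] = F.normalize(mixed, dim=1)
    return loss.item()
```

After training, the learned representations are simply clustered with k-means, as described above.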
3.4 CONNECTION WITH SPECTRAL CLUSTERING
We explain the connection between IDFD and spectral clustering. We consider a fully connected graph consisting of all representation points, and the similarity matrix W and degree matrix D can be written as W_ij = exp(v_i^T v_j/τ) and D_ii = Σ_{m=1}^{n} exp(v_i^T v_m/τ). The loss function of spectral clustering Shaham et al. (2018) can be reformulated as
LSP = Tr(f^T L f) = (1/2) Σ_k Σ_{i,j}^{n} w_ij (f_i^k − f_j^k)^2 = (1/2) Σ_k Σ_{i,j}^{n} exp(v_i^T v_j/τ) ||v_i − v_j||^2, (8)
where L is the Laplacian matrix and f are the feature vectors. Spectral clustering is performed by minimizing LSP subject to an orthogonality condition on f, and when LSP attains its minimum, f becomes the eigenvectors of the Laplacian L. According to Section 3.2, minimizing LF can approximate the orthogonality condition. Under this condition, minimizing LI can approximate minimizing LSP, which is explained as follows.
According to Eq. (2), minimizing the loss LI means maximizing v_i^T v_i and minimizing v_i^T v_j. When i = j, we have ||v_i − v_j||^2 = 0, so the corresponding terms of LSP are zero. We therefore need to consider only the influence on LSP of minimizing v_i^T v_j. As the v are normalized, LSP can be rewritten using the cosine metric as
LSP = Σ_{i,j}^{n} exp(cos θ/τ) sin^2(θ/2), (9)
then ∂LSP/∂θ can be calculated as
∂LSP/∂θ = (1/τ) sin θ (τ − 1 + cos θ) exp(cos θ/τ). (10)
According to Eq. (10), we get ∂LSP/∂θ ≥ 0 when τ ≥ 2. This means LSP monotonically decreases when we minimize v_i^T v_j. Therefore, the impact from minimizing v_i^T v_j is good for minimizing LSP. Even if τ is a little smaller than 2, because τ controls the scale of the derivatives and the range of θ where the derivative is negative, a large τ decreases the scale and narrows the range, resulting in a small influence on the total loss. From this viewpoint, the effectiveness of minimizing LI using a large τ is approximately the same as that of LSP. By adding feature decorrelation constraints, IDFD becomes analogous to spectral clustering.
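As a quick numerical sanity check of the sign condition derived from Eq. (10) (purely illustrative; the grid and τ values are arbitrary), one can verify that the per-pair term exp(cos θ/τ) sin^2(θ/2) from Eq. (9) is non-decreasing in θ on [0, π] only for sufficiently large τ:

```python
import numpy as np

def lsp_pair_term(theta, tau):
    """Per-pair contribution exp(cos(theta)/tau) * sin^2(theta/2) from Eq. (9)."""
    return np.exp(np.cos(theta) / tau) * np.sin(theta / 2.0) ** 2

theta = np.linspace(0.0, np.pi, 2000)
for tau in [0.07, 1.0, 2.0, 5.0]:
    monotone = bool((np.diff(lsp_pair_term(theta, tau)) >= -1e-12).all())
    print(f"tau = {tau}: non-decreasing on [0, pi] -> {monotone}")
```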
4 EXPERIMENTS
We conducted experiments using five datasets: CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009), STL-10 Coates et al. (2011), ImageNet-10 Deng et al. (2009), and ImageNet-Dog Deng et al. (2009). We adopted ResNet18 He et al. (2016) as the neural network architecture in our main experiments. The same architecture is used for all datasets. Our experimental settings are in accordance with that of Wu et al. (2018). Data augmentation strategies often used on images are also adopted in experiments. Details about datasets and experimental setup are given in Appendix A.
For IDFD, the weight α is simply fixed at 1. Orthogonality constraint weights for IDFO were α = 10 on CIFAR-10 and CIFAR-100, and α = 0.5 on STL-10 and ImageNet subsets. The weight α was set according to the orders of magnitudes of losses. In the main experiments, we set temperature parameter τ = 1 for IDFO and IDFD, and τ2 = 2 for IDFD. In order to fully investigate our work, we also constructed two versions of instance discrimination (ID) that uses only LI loss, ID(original) with small τ = 0.07 and ID(tuned) with large τ = 1.
We compared ID(tuned), IDFO, and IDFD with ID(original) and six other competitive methods, clustering with an autoencoder (AE) Hinton & Salakhutdinov (2006), deep embedded clustering (DEC) Xie et al. (2016), deep adaptive image clustering (DAC) Chang et al. (2017), deep comprehensive correlation mining (DCCM) Wu et al. (2019), invariant information clustering (IIC) Ji et al. (2019), and semantic clustering by adopting nearest neighbors (SCAN) Van Gansbeke et al. (2020). We use three metrics to measure clustering performance: standard clustering accuracy (ACC), normalized mutual information (NMI), and adjusted rand index (ARI). These metrics give values in [0, 1], with higher scores indicating more accurate clustering assignments.
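For reference, the ACC metric is typically computed by matching predicted clusters to ground-truth labels with the Hungarian algorithm; a generic sketch (not tied to this paper's code) is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC: accuracy under the best one-to-one mapping between cluster ids and class labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    counts = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1
    row, col = linear_sum_assignment(counts.max() - counts)   # maximize matched counts
    return counts[row, col].sum() / y_true.size
```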
4.1 MAIN RESULTS
Table 1 lists the best performances for each method. The results for the four methods AE, DEC, DAC, and DCCM are cited from Wu et al. (2019), and results for the two methods IIC and SCAN are cited from Van Gansbeke et al. (2020). Comparing these results, we conclude that ID(tuned), IDFO, and IDFD clearly outperform these methods, excluding SCAN, for all datasets according to the metrics ACC, NMI, and ARI. For dataset CIFAR-10, ID(tuned), IDFO, and IDFD yielded ACC values of 77.6%, 82.8%, and 81.5%, respectively. For dataset ImageNet-10, ID(tuned), IDFO, and IDFD achieved ACC values of 93.7%, 94.2%, and 95.4%. The high performance is comparable with that of supervised and semi-supervised methods. Gaps between the results of ID(tuned) and those of IDFO and IDFD reflect the effect of the feature constraint term. The performance is improved for all datasets by introducing feature orthogonalization and decorrelation. Impressively, ID(tuned) significantly outperformed ID(original) on all datasets, showing the strong impact of the temperature parameter. This will be discussed separately in Section 4.2.1.
In addition, we note that IDFD differs from SCAN in that IDFD focuses on representation learning, while SCAN focuses on clustering given a representation learning method. Both SCAN and IDFD demonstrate significant improvement in performance compared with other methods. The results of IDFD and SCAN show the effectiveness of efforts on both the representation learning and clustering phases of deep clustering.
We also examine the learning stability of ID(tuned), IDFO, and IDFD. Figure 2 illustrates the accuracy on CIFAR-10 running each of ID(tuned), IDFO, and IDFD. We can see that both IDFO and IDFD obtained higher peak ACC values than ID(tuned). In particular, IDFD yielded higher performance than ID over the entire learning process. IDFO performed better than the other two methods and obtained the highest ACC value in earlier epochs. However, the ACC widely fluctuated
over the learning process and dropped in later epochs. As analyzed in Section 3.2, our proposed IDFD achieves higher performance than ID and is more stable than IDFO.
4.2 DISCUSSION
4.2.1 ANALYSIS ON TEMPERATURE PARAMETER
Gaps between results of ID(original) and ID(tuned) in Table 1 show strong impact of temperature parameter. We theoretically and intuitively analyze the essential change caused by the temperature parameter in this subsection.
First, we consider why instance-level discrimination works and under what conditions. The difference in performance between ID(original) and ID(tuned) suggests that the optimal distribution in latent space changes with the magnitude of τ. According to empirical investigation and theoretical analysis, we find that a large τ in LI encourages data points to follow a compact distribution when minimizing the loss, while a small τ drives them to follow a uniform distribution. This means minimizing LI with a large τ can reach a good clustering-friendly solution. This property is explained with illustrative examples and calculations; details are given in Appendix B.
In the definition of P (i|v) in Eq. (1), when τ is small, we compute softmax on larger logits, resulting in higher prediction, and obtain a more confident model. From this viewpoint, we can leverage a small τ to decrease class entanglement if we can learn an accurate class-weight vector. In the general classification problem, since the weight of each class can be learned according to the real labels, it is preferable for models to be more confident. Most works therefore recommend setting a small value, such as τ = 0.07 Wu et al. (2018). In clustering, however, instance-level discrimination is used to learn similarity among samples, with only one sample in each class. Because the model is highly confident, each sample tends to be completely independent from each other. Similarity among samples is seemingly encouraged to approach close to zero, even for samples from the same class. This clearly deviates from the original intent of adopting instance-level discrimination to learn sample entanglements under the condition that each sample can be discriminative. A larger τ than that used for classification is thus needed.
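A tiny numerical illustration of this point (the similarity values are arbitrary): with a small τ the assignment distribution collapses onto the single best match, whereas τ = 1 preserves graded similarity among instances.

```python
import torch

sims = torch.tensor([1.0, 0.8, 0.1])    # v_j^T v for three stored instances
for tau in [0.07, 1.0]:
    print(tau, torch.softmax(sims / tau, dim=0).tolist())
# tau = 0.07 puts roughly 95% of the mass on the best match alone,
# while tau = 1.0 keeps similar instances at comparable probabilities.
```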
More experiments over different temperature settings on ID and IDFD were conducted on CIFAR-10. Figure 3 shows the accuracy of ID for τ = {0.07, 0.2, 0.5, 0.8, 1, 2, 5, 10}. We calculated the mean and standard deviation of ACC values over the last 500 epochs for each experiment. From the results, we can see that ID can suffer significant performance degradation when τ is too small or too large. This agrees with our analysis above. We also investigate the impact of τ2 by fixing τ = 1. Figure 4 shows the accuracy of the IDFD for τ2 = {0.1, 0.5, 1, 2, 3, 4, 5, 10}. Experimental results show that IDFD is relatively robust to the parameter τ2 and enables stable representation learning.
4.2.2 REPRESENTATION DISTRIBUTION AND FEATURE BEHAVIOR
Figure 5 visualizes the results of representations learned in four experiments: (a) ID(original), (b) ID(tuned), (c) IDFO with τ = 1 and α = 10, and (d) IDFD with τ = 1, τ2 = 2, and α = 1 on CIFAR10. 128-dimension representations were embedded into two dimensions by t-SNE (t-distributed stochastic neighbor embedding) Maaten & Hinton (2008). Colors indicate ground truth classes. The distributions for the ID(original) and ID(tuned) again show the significant difference between
them. Data distribution when τ = 1 is apparently more clustering-friendly than when τ = 0.07. Furthermore, compared with ID(tuned), IDFO and IDFD can separate samples from different classes with certain margins. IDFO tended to construct a patch-like distribution within one class. In contrast, IDFD maintained a tighter connection among samples of the same class and more distinct borders between different classes.
Figure 6 shows distribution of feature representations on ImageNet-10 learned by IDFD. We can see that representations of ImageNet-10 are clustering-friendly and even better than that of CIFAR-10. This is consistent with the results in Table 1 evaluated by metrics ACC, NMI, and ARI. In addition to that, we also plot sample images corresponding to points lying near the border between clusters. We can see that these samples are certainly similar in appearance.
We investigate the effects of the orthogonal and decorrelation constraints LFO and LF. Figure 7 illustrates the feature correlations of ID(tuned), IDFO, and IDFD on dataset CIFAR-10. We see that IDFO clearly decorrelates features, and IDFD retains a moderate level of feature correlation between that of ID and IDFO. Taken together with Figure 2, these results suggest that the softmax formulation of IDFD alleviates the problem of strict orthogonality and enables stable representation learning.
4.2.3 INVESTIGATION FOR PRACTICAL USE
We investigate the dependencies of our method on networks through experiments on other networks: ConvNet Wu et al. (2019), VGG16 Simonyan & Zisserman (2014), and ResNet34 He et al. (2016). Performance was evaluated using the CIFAR-10 dataset. Results listed in Table 2 show that IDFD
can work on various networks. IDFD outperforms ID(tuned), and the FD term shows a more obvious effect on these networks. We also confirm the effect of cooperation between LI and LF from the viewpoint of spectral clustering: combinations of AE and LF were evaluated in terms of clustering performance. We found that AE cannot benefit from LF as LI does. This result verifies that LF has a deep relation with LI, and that IDFD is not a simple combination. We also investigate the importance of data augmentation for performance through experiments. Due to the page limit, these extended experiments are given in Appendix C.
5 CONCLUSION
We present a clustering-friendly representation learning method combining instance discrimination and feature decorrelation based on spectral clustering properties. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We analyzed why instance discrimination works for clustering and clarified the conditions. We designed a softmax-formulated feature decorrelation constraint for learning the latent space to realize stable improvement of clustering performance. We also explained the connection between our method and spectral clustering. The proposed representation learning method achieves accuracies comparable to state-of-the-art values on the CIFAR-10 and ImageNet-10 datasets with simple k-means. We also verified IDFD loss works on multiple neural network structures, and our method is expected to be effective for various kinds of problems.
A DATASETS AND EXPERIMENTAL SETUP
Five datasets were used to conduct experiments: CIFAR-10 Krizhevsky et al. (2009), CIFAR100 Krizhevsky et al. (2009), STL-10 Coates et al. (2011), ImageNet-10 Deng et al. (2009), and ImageNet-Dog Deng et al. (2009). Table 3 lists the numbers of images, number of clusters, and image sizes of these datasets. Specifically, the training and testing sets of dataset STL-10 were jointly used in our experiments. Images from the three ImageNet subsets were resized to 96× 96× 3.
We adopted ResNet He et al. (2016) as the neural network architecture in our main experiments. For simplicity, we used ResNet18, which according to our preliminary experiments yields sufficiently high performance. The same architecture was used for all datasets except the input layer. In accordance with the experimental settings of Wu et al. (2018), the dimension of latent feature vectors was set to d = 128, and a stochastic gradient descent optimizer with momentum β = 0.9 was used. The learning rate lr was initialized to 0.03, then gradually scaled down after the first 600 epochs using a coefficient of 0.1 every 350 epochs. The total number of epochs was set to 2000, and the batch size was set to B = 128. Orthogonality constraint weights for IDFO were α = 10 for CIFAR-10 and CIFAR-100 and α = 0.5 for the STL-10 and ImageNet subsets. The weight for IDFO α was set according to the orders of magnitudes of the two losses LI and LFO. For IDFD, the weight α was simply fixed at 1. In the main experiments, we set the default temperature parameter value τ = 1 for ID(tuned), IDFO, and IDFD, and τ2 = 2 for IDFD.
B OPTIMAL SOLUTIONS OF CLUSTERING AND INSTANCE DISCRIMINATION
In Section 4.2.1, we concluded that minimizing LI under the condition that τ is large can reach a clustering-friendly solution. Details about the analysis and calculation was demonstrated by a two-dimensional toy model as follows.
Empirically, we observe that visually similar images tend to get similar assignment probabilities. Similar images can thus be projected to close locations in the latent space. This also motivated ID Wu et al. (2018). In the case of ID, similar images xi and xj yield respective highest probabilities pii and pjj , and also receive relatively high pij and pji values. This property can retain over the process of approximation to the optimal solution. Because instance-level discrimination tries to maximally scatter embedded features of instances over the unit sphere Wu et al. (2018), all representations are thus uniformly spread over the latent space with each representation relatively similar to its surroundings, we call this uniform case. We also consider another case that yields an optimal clustering solution where all samples from the same class are compacted to one point and k clusters are uniformly spread over the space. We call this compact case. Figure 8 shows the representation distributions in the two cases. Because we normalize v, two-dimensional representations form a circle.
In the uniform case, n representations are uniformly located on a circle with an angular interval of θ = 2π/n, and the inner product between two neighboring representations is cos θ. Without loss of generality, we can start with an arbitrary point v_i and orderly mark all samples as v_{i+j}. The cosine similarity between v_i and v_{i+j} can then be calculated by v_{i+j}^T v_i = cos jθ. Accordingly, the loss contributed by sample i in the uniform case can be calculated as
L^i_uniform = − log[ exp(1/τ) / Σ_{m=0}^{n−1} exp(cos mθ/τ) ] = − log[ (1/n) exp(1/τ) / ( (1/n) Σ_{m=0}^{n−1} exp(cos mθ/τ) ) ]. (11)
Figure 8: Two extreme cases of representation distributions over two-dimensional space. Left: uniform. Right: compact.
Figure 9: exp(cos θ/τ) with different τ settings.
Similarly, in the compact case, n/k data points from the same class are exactly compacted to a point, and the k corresponding points are located on a circle at an angular interval of θ′ = 2π/k. The inner product between an arbitrary start sample v_i and the j-th sample can be calculated as v_i^T v_{i+j} = cos lθ′, where l = j mod n/k. The probability of assigning i to the cluster containing j becomes p_ij = exp(cos lθ′/τ) / ( Σ_{c=0}^{k−1} (n/k) exp(cos cθ′/τ) ). Accordingly, the loss contributed by sample i in the compact case can be calculated as
L^i_compact = − log[ exp(1/τ) / Σ_{c=0}^{k−1} (n/k) exp(cos cθ′/τ) ] = − log[ (1/n) exp(1/τ) / ( (1/k) Σ_{c=0}^{k−1} exp(cos cθ′/τ) ) ]. (12)
Comparing Eq. (11) and (12), we see that the difference between L^i_uniform and L^i_compact comes only from the denominator part of the logarithm. These are two discrete forms of the same integral ∫ exp(cos θ/τ) dθ. Clearly, L^i_uniform equals L^i_compact when k, n → +∞. We therefore need to consider only the general case where n is sufficiently large and k ≪ n. Figure 9 shows a plot of the function values exp(cos θ/τ) with different τ settings over the domain θ ∈ [0, 2π]. We can see that the curve becomes flatter as τ increases. A flat function f means that for an arbitrary (θ, θ′) pair in its domain of definition, we have f(θ) ≈ f(θ′). In this situation, even when k ≪ n, the difference between the summations of these two discrete functions is not large. Accordingly, we can say L^i_compact approximates L^i_uniform for a large τ. In other words, minimizing LI can approach the compact situation where same-class samples assemble and differing samples separate. Learning instance-level discrimination for clustering is therefore reasonable.
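The following small numerical illustration of Eqs. (11) and (12) (with assumed values n = 1000 and k = 10) shows that the gap between the two losses shrinks as τ grows:

```python
import numpy as np

def loss_uniform(n, tau):
    theta = 2 * np.pi / n
    denom = np.sum(np.exp(np.cos(np.arange(n) * theta) / tau))
    return -np.log(np.exp(1.0 / tau) / denom)                  # Eq. (11)

def loss_compact(n, k, tau):
    theta_p = 2 * np.pi / k
    denom = np.sum((n / k) * np.exp(np.cos(np.arange(k) * theta_p) / tau))
    return -np.log(np.exp(1.0 / tau) / denom)                  # Eq. (12)

n, k = 1000, 10
for tau in [0.07, 0.5, 1.0, 2.0, 5.0]:
    gap = abs(loss_uniform(n, tau) - loss_compact(n, k, tau))
    print(f"tau = {tau}: |L_uniform - L_compact| = {gap:.4f}")
```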
C EXTENDED EXPERIMENTS
In Section 4.2.3, we have reported some investigations of our method for practical use. Details about several important experiments are supplemented as follows.
C.1 IMPACT OF NETWORK ARCHITECTURE
As Table 2 shows, IDFD can be applied to various networks, and the performance gaps between IDFD and ID(tuned) on networks like ConvNet Wu et al. (2019) and VGG16 Simonyan & Zisserman (2014) are more significant than on ResNet He et al. (2016). We added the feature correlation matrix of VGG16 in Figure 10. IDFD on VGG16 obtained sparse correlations similar to the case of ResNet18 in Figure 7, while ID on VGG16 obtained denser and stronger correlations than ResNet18, presumably constructing redundant features that degraded clustering. In the case of VGG16, the feature decorrelation term LF exhibits a larger effect on clustering performance than in the case of ResNet.
Our proposed losses work on all network architectures, and we expect to introduce the losses to various networks that are suitable for individual problems.
C.2 COMBINATION OF AUTOENCODER AND FEATURE DECORRELATION
In order to further confirm the cooperation effect of instance discrimination and feature decorrelation from the viewpoint of spectral clustering, a combination of autoencoder and feature decorrelation was evaluated in terms of clustering performance. The autoencoder has been verified on datasets such as handwritten digits to be an effective method for deep clustering. In this experiment, we used ConvNet Wu et al. (2019) for the autoencoder architecture and trained it on the CIFAR-10 dataset. We applied k-means to representations learned from the autoencoder only and the autoencoder combined with feature decorrelation, which are called AE and AEFD, respectively. According to our experiments, the ACC value of AE was 26.0%, and the ACC value of AEFD was 22.4%. Compared to the improvement from ID to IDFD (from 26.8% to 42.0% as shown in Table 2), we see that AE cannot benefit from FD as ID does. This result again indicates that FD has a deep relation with ID, as we analyzed in Section 3.
C.3 IMPACT OF DATA AUGMENTATION
For reproduction of our results and practical use, we note that data augmentation (DA) has a strong impact on performance. DA is known to affect image classification and representation learning. As in Wu et al. (2018), several generic and widely accepted techniques, such as cropping and grayscale conversion, were used for data augmentation in this work. The details of the augmentation follow the original code of Wu et al. (2018). In order to investigate the impact of DA, we conducted experiments on five datasets with and without DA and compared their clustering results. Table 4 shows the results. We can see that methods without DA suffered significant performance degradation for clustering, as well as for classification Chen et al. (2020). This reminds us not to ignore the effects of DA in practical use.
To further identify the main factors affecting performance, we also executed experiments removing each technique used for DA in turn. Taking CIFAR-10 as an example, the techniques used for data augmentation include ColorJitter, RandomResizedCrop, RandomGrayscale, and RandomHorizontalFlip. All these techniques are generic and easy to implement, and they have been integrated into general deep learning frameworks such as PyTorch. According to our experimental results shown in Figure 11, we find that RandomResizedCrop, RandomGrayscale, and ColorJitter have a strong effect on image clustering.
For practical use, we also applied IDFD to our private images produced by a manufacturing process. Generic DA techniques like those above were applied to these images. IDFD showed good performance on these images according to our experiments. This indicates that our method can be readily applied to practical images. For other types of data, such as text and time series, corresponding data augmentation techniques are needed to cooperate with our method. | 1. What are the main contributions of the paper in the field of 'deep clustering'?
2. What are some positive aspects of the paper, such as its connections to spectral clustering and detailed evaluation?
3. How do the two proposed methods, IDFO and IDFD, compare in terms of performance, and how does one determine which method to use for a given dataset?
4. Why was the alpha parameter set to 1 for IDFD, and how does one determine the appropriate value for this parameter for different datasets?
5. How does data augmentation affect the performance of the model, and are the results in the main text inclusive of this process?
6. How well does the method work on non-image data, and how does it compare to other works in the literature that have explored this?
7. Which set did the ACC calculation in Fig. 2 use, the validation set or the test set?
8. How did resizing the ImageNet images affect performance, and can the model handle larger images?
9. Are there any formatting issues in Table 3 that need to be addressed? | Review | Review
One of the main contributions is this idea of feature decorrelation where they encourage the representation features to be independent / orthogonal. The other is instance discrimination. This aims to capture the similarity between individual data points. Both of these are interesting contributions to the field of 'deep clustering'.
Besides the stated contributions, I thought there were a number of other positive aspects of this. A) I thought that the spectral clustering connection was nice and I am glad the authors included it. B) The evaluation is fairly detailed. I particularly appreciate the fact that the authors used datasets that are somewhat larger than often used in the literature (MNIST and CIFAR-10 vs CIFAR-100 and ImageNet-10). The inclusion of the study of the temperature parameter also helped clarify a few questions I had when reading it. C) Finally, the evaluation clearly shows the benefit of their contributions in terms of performance.
There are a number of questions I have with the work as is. A) Given the two methods proposed, IDFO, IDFD, neither of which outperforms the other on all tasks, and given this is unsupervised learning, how does one know which method to use? B) Why was the alpha parameter set to 1 for IDFD? How does one know what to set this to for different datasets? If it's always 1, why is it included at all? This is particularly important to understand in unsupervised settings. C) The impact of data augmentation is discussed in the supplementary but this is stated as being extremely important to the performance of the model. It is unclear to me whether the results in the main text include the augmentation process? If so, then given this, I think it should be stated in the main text as it has an effect on both instance discrimination and feature decorrelation considering the addition of augmented images. The results in supplementary Table 4 include KNN and don't match up with the main results in the main text which further confused me. D) I was left wondering how well this method works on non-image data? Other works in the literature have explored this. E) For Fig. 2 is this ACC calculated on the validation set or test set? F) What were the effects of resizing the ImageNet images? Can this model handle larger images, and if so, how does this effect performance?
Minor A) References are badly formatted in Table 3.
Overall, my questions above notwithstanding, I think this is an interesting contribution which shows the benefit of instance discrimination and feature decorrelation for deep clustering. |
ICLR | Title
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models
Abstract
Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly and are equipped with a latent space that enables fast traversal of the data manifold. However, VAEs tend to assign high probability density to regions in data space outside the actual data distribution and often fail at generating sharp images. In this paper, we propose VAEBM, a symbiotic composition of a VAE and an EBM that offers the best of both worlds. VAEBM captures the overall mode structure of the data distribution using a state-of-the-art VAE and it relies on its EBM component to explicitly exclude non-data-like regions from the model and refine the image samples. Moreover, the VAE component in VAEBM allows us to speed up MCMC updates by reparameterizing them in the VAE’s latent space. Our experimental results show that VAEBM outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin. It can generate high-quality images as large as 256×256 pixels with short MCMC chains. We also demonstrate that VAEBM provides complete mode coverage and performs well in out-of-distribution detection.
1 INTRODUCTION
Deep generative learning is a central problem in machine learning. It has found diverse applications, ranging from image (Brock et al., 2018; Karras et al., 2019; Razavi et al., 2019), music (Dhariwal et al., 2020) and speech (Ping et al., 2020; Oord et al., 2016a) generation, distribution alignment across domains (Zhu et al., 2017; Liu et al., 2017; Tzeng et al., 2017) and semi-supervised learning (Kingma et al., 2014; Izmailov et al., 2020) to 3D point cloud generation (Yang et al., 2019), light-transport simulation (Müller et al., 2019), molecular modeling (Sanchez-Lengeling & AspuruGuzik, 2018; Noé et al., 2019) and equivariant sampling in theoretical physics (Kanwar et al., 2020).
Among competing frameworks, likelihood-based models include variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016), autoregressive models (Oord et al., 2016b), and energy-based models (EBMs) (Lecun et al., 2006; Salakhutdinov et al., 2007). These models are trained by maximizing the data likelihood under the model, and unlike generative adversarial networks (GANs) (Goodfellow et al., 2014), their training is usually stable and they cover modes in data more faithfully by construction.
Among likelihood-based models, EBMs model the unnormalized data density by assigning low energy to high-probability regions in the data space (Xie et al., 2016; Du & Mordatch, 2019). EBMs are appealing because they require almost no restrictions on network architectures (unlike normalizing flows) and are therefore potentially very expressive. They also exhibit better robustness and out-of-distribution generalization (Du & Mordatch, 2019) because, during training, areas with high probability under the model but low probability under the data distribution are penalized explicitly. However, training and sampling EBMs usually requires MCMC, which can suffer from slow mode mixing and is computationally expensive when neural networks represent the energy function.
∗Work done during an internship at NVIDIA
On the other hand, VAEs are computationally more efficient for sampling than EBMs, as they do not require running expensive MCMC steps. VAEs also do not suffer from expressivity limitations that normalizing flows face (Dupont et al., 2019; Kong & Chaudhuri, 2020), and in fact, they have recently shown state-of-the-art generative results among non-autoregressive likelihood-based models (Vahdat & Kautz, 2020). Moreover, VAEs naturally come with a latent embedding of data that allows fast traverse of the data manifold by moving in the latent space and mapping the movements to the data space. However, VAEs tend to assign high probability to regions with low density under the data distribution. This often results in blurry or corrupted samples generated by VAEs. This also explains why VAEs often fail at out-of-distribution detection (Nalisnick et al., 2019).
In this paper, we propose a novel generative model as a symbiotic composition of a VAE and an EBM (VAEBM) that combines the best of both. VAEBM defines the generative distribution as the product of a VAE generator and an EBM component defined in pixel space. Intuitively, the VAE captures the majority of the mode structure in the data distribution. However, it may still generate samples from low-probability regions in the data space. Thus, the energy function focuses on refining the details and reducing the likelihood of non-data-like regions, which leads to significantly improved samples.
Moreover, we show that training VAEBM by maximizing the data likelihood easily decomposes into training the VAE and the EBM component separately. The VAE is trained using the reparameterization trick, while the EBM component requires sampling from the joint energy-based model during training. We show that we can sidestep the difficulties of sampling from VAEBM, by reparametrizing the MCMC updates using VAE’s latent variables. This allows MCMC chains to quickly traverse the model distribution and it speeds up mixing. As a result, we only need to run short chains to obtain approximate samples from the model, accelerating both training and sampling at test time.
Experimental results show that our model outperforms previous EBMs and state-of-the-art VAEs on image generation benchmarks including CIFAR-10, CelebA 64, LSUN Church 64, and CelebA HQ 256 by a large margin, reducing the gap with GANs. We also show that our model covers the modes in the data distribution faithfully, while having fewer spurious modes for out-of-distribution data. To the best of our knowledge, VAEBM is the first successful EBM applied to large images.
In summary, this paper makes the following contributions: i) We propose a new generative model using the product of a VAE generator and an EBM defined in the data space. ii) We show how training this model can be decomposed into training the VAE first, and then training the EBM component. iii) We show how MCMC sampling from VAEBM can be pushed to the VAE’s latent space, accelerating sampling. iv) We demonstrate state-of-the-art image synthesis quality among likelihood-based models, confirm complete mode coverage, and show strong out-of-distribution detection performance.
2 BACKGROUND
Energy-based Models: An EBM assumes pψ(x) to be a Gibbs distribution of the form pψ(x) = exp (−Eψ(x)) /Zψ , where Eψ(x) is the energy function with parameters ψ and Zψ =∫ x exp (−Eψ(x)) dx is the normalization constant. There is no restriction on the particular form of Eψ(x). Given a set of samples drawn from the data distribution pd(x), the goal of maximum likelihood learning is to maximize the log-likelihood L(ψ) = Ex∼pd(x) [log pψ(x)], which has the derivative (Woodford, 2006):
∂ψL(ψ) = Ex∼pd(x) [−∂ψEψ (x)] + Ex∼pψ(x) [∂ψEψ (x)] (1)
For the first expectation, the positive phase, samples are drawn from the data distribution pd(x), and for the second expectation, the negative phase, samples are drawn from the model pψ(x) itself. However, sampling from pψ(x) in the negative phase is itself intractable and approximate samples are usually drawn using MCMC. A commonly used MCMC algorithm is Langevin dynamics (LD) (Neal, 1993). Given an initial sample x0, Langevin dynamics iteratively updates it as:
x_{t+1} = x_t − (η/2) ∇x Eψ(x_t) + √η ω_t,  ω_t ∼ N(0, I), (2)
where η is the step-size.1 In practice, Eq. 2 is run for finite iterations, which yields a Markov chain with an invariant distribution approximately close to the original target distribution.
1In principle one would require an accept/reject step to make it a rigorous MCMC algorithm, but for sufficiently small stepsizes this is not necessary in practice (Neal, 1993).
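A generic PyTorch sketch of the Langevin update in Eq. (2) is shown below; the step size and number of iterations are illustrative placeholders, not values prescribed by the paper.

```python
import torch

def langevin_dynamics(energy_fn, x_init, n_steps=60, step_size=1e-2):
    """Approximate sampling from p(x) ∝ exp(-E(x)) with the update of Eq. (2)."""
    x = x_init.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_fn(x).sum()
        grad = torch.autograd.grad(energy, x)[0]
        # x_{t+1} = x_t - (eta/2) * grad E(x_t) + sqrt(eta) * noise
        x = x - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()
```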
Variational Autoencoders: VAEs define a generative model of the form pθ(x, z) = pθ(z)pθ(x|z), where z is the latent variable with prior pθ(z), and pθ(x|z) is a conditional distribution that models the likelihood of data x given z. The goal of training is to maximize the marginal log-likelihood log pθ(x) given a set of training examples. However since the marginalization is intractable, instead, the variational lower bound on log pθ(x) is maximized with qφ(z|x) as the approximate posterior:
log pθ(x) ≥ Ez∼qφ(z|x) [log pθ(x|z)]−DKL [qφ(z|x)‖pθ(z)] := Lvae(x, θ, φ). (3)
The state-of-the-art VAE, NVAE (Vahdat & Kautz, 2020), increases the expressivity of both the prior and the approximate posterior using hierarchical latent variables (Kingma et al., 2016), where z is decomposed into a set of disjoint groups, z = {z1, z2, . . . , zL}, and the prior pθ(z) = ∏ l pθ(zl|z<l)
and the approximate posterior qφ(z|x) = ∏ l qφ(zl|z<l,x) are defined using autoregressive distributions over the groups. We refer readers to Vahdat & Kautz (2020) for more details.
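For completeness, a single-group ELBO as in Eq. (3) can be sketched as follows; the Gaussian posterior parameterization and the assumption that the decoder returns a torch.distributions object are illustrative choices, not NVAE's actual hierarchical implementation.

```python
import torch

def elbo(x, encoder, decoder):
    """L_vae(x, theta, phi) of Eq. (3) with a standard Normal prior on z (single latent group)."""
    mu, log_var = encoder(x)                                   # q(z|x) = N(mu, diag(exp(log_var)))
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)   # reparameterization trick
    recon = decoder(z).log_prob(x).flatten(1).sum(dim=1)       # one-sample estimate of E_q[log p(x|z)]
    kl = 0.5 * (mu ** 2 + log_var.exp() - 1.0 - log_var).sum(dim=1)
    return (recon - kl).mean()
```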
3 ENERGY-BASED VARIATIONAL AUTOENCODERS
One of the main problems of VAEs is that they tend to assign high probability to regions in data space that have low probability under the data distribution. To tackle this issue, we propose VAEBM, a generative model constructed by the product of a VAE generator and an EBM component defined in the data space. This formulation allows our model to capture the main mode structure of the data distribution using the VAE. But when training the joint VAEBM, in the negative training phase we sample from the model itself and can discover non-data-like samples, whose likelihood is then reduced by the energy function explicitly. The energy function defined in the pixel space also shares similarities with discriminator in GANs, which can generate crisp and detailed images.
Formally, we define the generative model in VAEBM as hψ,θ(x, z) = (1/Zψ,θ) pθ(x, z) e^{−Eψ(x)}, where pθ(x, z) = pθ(z)pθ(x|z) is a VAE generator, Eψ(x) is a neural network-based energy function operating only in the x space, and Zψ,θ = ∫ pθ(x) e^{−Eψ(x)} dx is the normalization constant. VAEBM is visualized in Fig. 1. Marginalizing out the latent variable z gives
hψ,θ(x) = (1/Zψ,θ) ∫ pθ(x, z) e^{−Eψ(x)} dz = (1/Zψ,θ) pθ(x) e^{−Eψ(x)}. (4)
Given a training dataset, the parameters of VAEBM, ψ, θ, are trained by maximizing the marginal log-likelihood on the training data:
log hψ,θ(x) = log pθ(x) − Eψ(x) − log Zψ,θ (5)
≥ { Ez∼qφ(z|x)[log pθ(x|z)] − DKL(qφ(z|x)||p(z)) } + { −Eψ(x) − log Zψ,θ } = Lvae(x, θ, φ) + LEBM(x, ψ, θ), (6)
where we replace log pθ(x) with the variational lower bound from Eq. 3. Eq. 6 forms the objective function for training VAEBM. The first term corresponds to the VAE objective and the second term corresponds to training the EBM component. Next, we discuss how we can optimize this objective.
3.1 TRAINING
The LEBM(x, ψ, θ) term in Eq. 6 is similar to the EBM training objective except that the log partition function depends on both ψ and θ. We show in Appendix A that log Zψ,θ has the gradients ∂ψ log Zψ,θ = Ex∼hψ,θ(x,z)[−∂ψEψ(x)] and ∂θ log Zψ,θ = Ex∼hψ,θ(x,z)[∂θ log pθ(x)]. The first gradient can be estimated easily by evaluating the gradient of the energy function at samples drawn from the VAEBM model hψ,θ(x, z) using MCMC. However, the second term involves computing the intractable ∂θ log pθ(x). In Appendix A, we show that estimating ∂θ log pθ(x) requires sampling from the VAE’s posterior distribution, given model samples x ∼ hψ,θ(x, z). To avoid the computational complexity of estimating this term, for example with a second round of MCMC, we propose a two-stage algorithm for training VAEBM. In the first stage, we train the VAE model in our VAEBM by maximizing the Lvae(x, θ, φ) term in Eq. 6. This term is identical to the VAE’s objective; thus, the parameters θ and φ are trained using the reparameterization trick as in Sec. 2. In the second stage, we keep the VAE model fixed and only train the EBM component. Since θ is now fixed, we only require optimizing LEBM(x, ψ, θ) w.r.t. ψ, the parameters of the energy function. The gradient of L(ψ) = Ex∼pd[LEBM(x, ψ, θ)] w.r.t. ψ is:
∂ψL(ψ) = Ex∼pd(x)[−∂ψEψ(x)] + Ex∼hψ,θ(x,z)[∂ψEψ(x)], (7)
which decomposes into a positive and a negative phase, as discussed in Sec. 2.
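A hedged sketch of one second-stage update implementing the gradient in Eq. (7) is given below; the optimizer and the way the negative samples x_model are produced (e.g., by the reparametrized Langevin sampler sketched later) are assumptions, not the paper's exact training loop.

```python
import torch

def ebm_stage_step(energy_net, x_data, x_model, optimizer):
    """Eq. (7): lower the energy on data (positive phase) and raise it on model samples (negative phase)."""
    e_pos = energy_net(x_data).mean()       # samples from the data distribution p_d(x)
    e_neg = energy_net(x_model).mean()      # approximate samples from h_{psi,theta}(x, z)
    loss = e_pos - e_neg                    # descending this loss follows the gradient of Eq. (7)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return e_pos.item(), e_neg.item()
```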
Reparametrized sampling in the negative phase: For gradient estimation in the negative phase, we can draw samples from the model using MCMC. Naively, we can perform ancestral sampling, first sampling from the prior pθ(z), then running MCMC for pθ(x|z)e^{−Eψ(x)} in x-space. This is problematic, since pθ(x|z) is often sharp and MCMC cannot mix when the conditioning z is fixed. In this work, we instead run the MCMC iterations in the joint space of z and x. Furthermore, we accelerate the sampling procedure using reparametrization for both x and the latent variables z. Recall that when sampling from the VAE, we first sample z ∼ p(z) and then x ∼ pθ(x|z). This sampling scheme can be reparametrized by sampling from a fixed noise distribution (e.g., (ε_z, ε_x) ∼ p_ε = N(0, I)) and deterministic transformations Tθ such that
z = T^z_θ(ε_z), x = T^x_θ(z(ε_z), ε_x) = T^x_θ(T^z_θ(ε_z), ε_x). (8)
Here, T^z_θ denotes the transformation defined by the prior that transforms noise ε_z into prior samples z, and T^x_θ represents the decoder that transforms noise ε_x into samples x, given prior samples z. We can apply the same reparameterization when sampling from hψ,θ(x, z). This corresponds to sampling (ε_x, ε_z) from the “base distribution”:
hψ,θ(ε_x, ε_z) ∝ e^{−Eψ(T^x_θ(T^z_θ(ε_z), ε_x))} p_ε(ε_x, ε_z), (9)
and then transforming them to x and z via Eq. 8 (see Appendix B for derivation). Note that ε_z and ε_x have the same scale, as p_ε(ε_x, ε_z) is a standard Normal distribution, while the scales of x and z can be very different. Thus, running MCMC sampling with this reparameterization in the (ε_x, ε_z)-space has the benefit that we do not need to tune the sampling scheme (e.g., step size in LD) for each variable. This is particularly helpful when z itself has multiple groups, as in our case.
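A sketch of the reparametrized Langevin sampler for Eq. (9) is given below; the VAE interface (`prior_transform` for T^z_θ, `decode` for T^x_θ, `latent_dim`, `data_shape`) and the step size are assumed placeholders rather than the released API.

```python
import torch

def sample_vaebm(vae, energy_net, batch_size, n_steps=10, step_size=8e-5):
    """Langevin dynamics in (eps_z, eps_x) space for the base distribution of Eq. (9)."""
    eps_z = torch.randn(batch_size, vae.latent_dim, requires_grad=True)
    eps_x = torch.randn(batch_size, *vae.data_shape, requires_grad=True)
    for _ in range(n_steps):
        x = vae.decode(vae.prior_transform(eps_z), eps_x)      # Eq. (8)
        # Negative log-density of Eq. (9) up to a constant: energy plus standard Normal term on the noise.
        neg_logp = energy_net(x).sum() + 0.5 * (eps_z ** 2).sum() + 0.5 * (eps_x ** 2).sum()
        g_z, g_x = torch.autograd.grad(neg_logp, [eps_z, eps_x])
        with torch.no_grad():
            eps_z += -0.5 * step_size * g_z + step_size ** 0.5 * torch.randn_like(eps_z)
            eps_x += -0.5 * step_size * g_x + step_size ** 0.5 * torch.randn_like(eps_x)
    with torch.no_grad():
        return vae.decode(vae.prior_transform(eps_z), eps_x)
```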
The advantages of two-stage training: Besides avoiding the difficulties of estimating the full gradient of logZψ,θ, two-stage training has additional advantages. As we discussed above, updating ψ is computationally expensive, as each update requires an iterative MCMC procedure to draw samples from the model. The first stage of our training minimizes the distance between the VAE model and the data distribution, and in the second stage, the EBM further reduce the mismatch between the model and the data distribution. As the pre-trained VAE pθ(x) provides a good approximation to pd(x) already, we expect that a relatively small number of expensive updates for training ψ is needed. Moreover, the pre-trained VAE provides a latent space with an effectively lower dimensionality and a smoother distribution than the data distribution, which facilitates more efficient MCMC.
Alternative extensions: During the training of the energy function, we fix the VAE’s parameters. In Appendix C, we discuss a possible extension to our training objective that also updates the VAE.
4 RELATED WORK
Early variants of EBMs include models whose energy is defined over both data and auxiliary latent variables (Salakhutdinov & Hinton, 2009; Hinton, 2012), and models using only data variables (Hinton, 2002; Mnih & Hinton, 2005). Their energy functions are simple and they do not scale to high
dimensional data. Recently, it was shown that EBMs with deep neural networks as energy function can successfully model complex data such as natural images (Du & Mordatch, 2019; Nijkamp et al., 2019b;a). They are trained with maximum likelihood and only model the data variable. Joint EBMs (Grathwohl et al., 2020a; Liu & Abbeel, 2020) model the joint distribution of data and labels. In contrast, our VAEBM models the joint distribution of data and general latent variables.
Besides fundamental maximum likelihood training, other techniques to train EBMs exist, such as minimizing F-divergence (Yu et al., 2020a) or Stein discrepancy (Grathwohl et al., 2020b), contrastive estimation (Gutmann & Hyvärinen, 2010; Gao et al., 2020) and denoising score matching (Li et al., 2019). Recently, noise contrastive score networks and diffusion models have demonstrated high quality image synthesis (Song & Ermon, 2019; 2020; Ho et al., 2020). These models are also based on denoising score matching (DSM) (Vincent, 2011), but do not parameterize any explicit energy function and instead directly model the vector-valued score function. We view score-based models as alternatives to EBMs trained with maximum likelihood. Although they do not require iterative MCMC during training, they need very long sampling chains to anneal the noise when sampling from the model (≳ 1000 steps). Therefore, sample generation is extremely slow.
VAEBM is an EBM with a VAE component, and it shares similarities with work that builds connections between EBMs and other generative models. Zhao et al. (2017); Che et al. (2020); Song et al. (2020); Arbel et al. (2020) formulate EBMs with GANs, and use the discriminator to assign an energy. Xiao et al. (2020); Nijkamp et al. (2020) use normalizing flows that transport complex data to latent variables to facilitate MCMC sampling (Hoffman et al., 2019), and thus, their methods can be viewed as EBMs with flow component. However, due to their topology-preserving nature, normalizing flows cannot easily transport complex multimodal data, and their sample quality on images is limited. A few previous works combine VAEs and EBMs in different ways from ours. Pang et al. (2020) and Vahdat et al. (2018b;a; 2020) use EBMs for the prior distribution, and (Han et al., 2020; 2019) jointly learn a VAE and an EBM with independent sets of parameters by an adversarial game.
Finally, as we propose two-stage training, our work is related to post training of VAEs. Previous work in this direction learns the latent structure of pre-trained VAEs (Dai & Wipf, 2019; Xiao et al., 2019; Ghosh et al., 2020), and sampling from learned latent distributions improves sample quality. These methods cannot easily be extended to VAEs with hierarchical latent variables, as it is difficult to fit the joint distribution of multiple groups of variables. Our purpose for two-stage training is fundamentally different: we post-train an energy function to refine the distribution in data space.
5 EXPERIMENTS
In this section, we evaluate our proposed VAEBM through comprehensive experiments. Specifically, we benchmark sample quality in Sec. 5.1, provide detailed ablation studies on training techniques in Sec. 5.2, and study mode coverage of our model and test for spurious modes in Sec. 5.3. We choose NVAE (Vahdat & Kautz, 2020) as our VAE, which we pre-train, and use a simple ResNet as energy function Eψ , similar to Du & Mordatch (2019). We draw approximate samples both for training and testing by running short Langevin dynamics chains on the distribution in Eq. 9. Note that in NVAE, the prior distribution is a group-wise auto-regressive Gaussian, and the conditional pixel-wise distributions in x are also Gaussian. Therefore, the reparameterization corresponds to shift and scale transformations. For implementation details, please refer to Appendix E.
5.1 IMAGE GENERATION
In Table 1, we quantitatively compare the sample quality of VAEBM with different generative models on (unconditional) CIFAR-10. We adopt Inception Score (IS) (Salimans et al., 2016) and FID (Heusel et al., 2017) as quantitative metrics. Note that FID reflects the sample quality more faithfully, as potential problems have been reported for IS on CIFAR-10 (Barratt & Sharma, 2018).
We observe that our VAEBM outperforms previous EBMs and other explicit likelihood-based models by a large margin. Note that introducing persistent chains during training only leads to slight improvement, while Du & Mordatch (2019) rely on persistent chains with a sample replay buffer. This is likely due to the efficiency of sampling in latent space. Our model also produces significantly better samples than NVAE, the VAE component of our VAEBM, implying a significant impact of our proposed energy-based refinement. We also compare our model with state-of-the-art GANs and
recently proposed score-based models, and we obtain comparable or better results. Thus, we largely close the gap to GANs and score-models, while maintaining the desirable properties of models trained with maximum likelihood, such as fast sampling and better mode coverage.
Qualitative samples generated by our model are shown in Fig. 2a and intermediate samples along MCMC chains in Fig. 2b. We find that VAEBM generates good samples by running only a few MCMC steps. Initializing MCMC chains from the pre-trained VAE also helps quick equilibration.
We also train VAEBM on larger images, including CelebA 64, CelebA HQ 256 (Liu et al., 2015) and LSUN Church 64 (Yu et al., 2015). We report the FID scores for CelebA 64 and CelebA HQ 256 in Tables 2 and 3. On CelebA 64, our model obtains results comparable with the best GANs. Although our model obtains worse results than some advanced GANs on CelebA HQ 256, we significantly
reduce the gap between likelihood based models and GANs on this dataset. On LSUN Church 64, we obtain FID 13.51, which significantly improves the NVAE baseline FID 41.3. We show qualitative samples in Fig. 3. Appendix H contains additional samples and MCMC visualizations.
Our model can produce impressive samples by running very short MCMC chains; however, we find that when we run longer MCMC chains than training chains, most chains stay around the local mode without traversing between modes. We believe that the non-mixing is due to the long mixing time of Langevin dynamics (Neal et al., 2011), as Nijkamp et al. (2019b;a) also observe that models trained with short-run MCMC have non-mixing long-run chains. We conjecture that mixing can be improved by training and sampling with more advanced MCMC techniques that are known to mix faster, such as HMC (Neal et al., 2011), and this will be left for future work.
Table 4: Comparison for IS and FID on CIFAR10 between several related training methods.
Model | IS↑ | FID↓
NVAE (Vahdat & Kautz) | 5.19 | 55.97
EBM on x (Du & Mordatch) | 5.85 | 48.89
EBM on x, MCMC init w/ NVAE | 7.28 | 29.32
WGAN w/ NVAE decoder | 7.41 | 20.39
VAEBM (ours) | 8.15 | 12.96
Table 5: Mode coverage on StackedMNIST.
Model | Modes↑ | KL↓
VEEGAN (Srivastava et al.) | 761.8 | 2.173
PacGAN (Lin et al.) | 992.0 | 0.277
PresGAN (Dieng et al.) | 999.6 | 0.115
InclusiveGAN (Yu et al.) | 997 | 0.200
StyleGAN2 (Karras et al.) | 940 | 0.424
VAEBM (ours) | 1000 | 0.087
5.2 ABLATION STUDIES
In Table 4, we compare VAEBM to several closely related baselines. All the experiments here are performed on CIFAR-10, and for simplicity, we use smaller models than those used in Table 1. Appendix F summarizes the experimental settings and Appendix G provides qualitative samples.
Data space vs. augmented space: One key difference between VAEBM and previous work such as Du & Mordatch (2019) is that our model is defined on the augmented space (x, z), while their EBM only involves x. Since we pre-train the VAE, one natural question is whether our strong results are due to good initial samples x from the VAE, which are used to launch the MCMC chains. To address this, we train an EBM purely on x as done in Du & Mordatch (2019). We also train another EBM only on x, but we initialize the MCMC chains with samples from the pre-trained NVAE instead of noise. As shown in line 3 of Table 4, this initialization helps the EBM which is defined only on x. However, VAEBM in the augmented space outperforms the EBMs on x only by a large margin.
Adversarial training vs. sampling: The gradient for ψ in Eq. 7 is similar to the gradient updates of WGAN’s discriminator (Arjovsky et al., 2017). The key difference is that we draw (approximate) samples from hψ(x) by MCMC, while WGAN draws negative samples from a generator (Che et al., 2020). WGAN updates the generator by playing an adversarial game, while we only update the energy function Eψ . We compare these two methods by training ψ and θ with the WGAN objective and initializing θ with the NVAE decoder. As shown in line 4 of Table 4, we significantly outperform the WGAN version of our model, implying the advantage of our method over adversarial training.
5.3 TEST FOR SPURIOUS OR MISSING MODES
We evaluate mode coverage on StackedMNIST. This dataset contains images generated by randomly choosing 3 MNIST images and stacking them along the RGB channels. Hence, the data distribution has 1000 modes. Following Lin et al. (2018), we report the number of covered modes and the KL divergence from the categorical distribution over 1000 categories from generated samples to true data (Table 5). VAEBM covers all modes and achieves the lowest KL divergence even compared to GANs that are specifically designed for this task. Hence, our model covers the modes more equally. We also plot the histogram of likelihoods for CIFAR-10 train/test images (Fig. 6, Appendix D) and present nearest neighbors of generated samples (Appendix I). We conclude that we do not overfit.
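For concreteness, the mode-coverage metric can be computed as in the following sketch, which assumes a pre-trained MNIST classifier has already been applied to each RGB channel of the generated samples; the function name and array layout are illustrative.

```python
import numpy as np

def stackedmnist_mode_coverage(pred_digits, n_classes=1000):
    # pred_digits: (N, 3) integer array of per-channel digit predictions.
    modes = pred_digits[:, 0] * 100 + pred_digits[:, 1] * 10 + pred_digits[:, 2]
    counts = np.bincount(modes, minlength=n_classes).astype(np.float64)
    covered = int((counts > 0).sum())
    q = counts / counts.sum()                 # distribution of generated samples
    p = np.full(n_classes, 1.0 / n_classes)   # true distribution is uniform over 1000 modes
    kl = float(np.sum(q[q > 0] * np.log(q[q > 0] / p[q > 0])))  # KL(generated || true)
    return covered, kl
```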
We evaluate spurious modes in our model by assessing its performance on out-of-distribution (OOD) detection. Specifically, we use VAEBM trained on CIFAR-10, and estimate unnormalized log hψ,θ(x) on in-distribution samples (from CIFAR-10 test set) and OOD samples from several datasets. Following Nalisnick et al. (2019), we use area under the ROC curve (AUROC) as quantitative metric, where high AUROC indicates that the model correctly assigns low density to OOD samples. In Table 6, we see that VAEBM has significantly higher AUROC than NVAE, justifying our argument that the energy function reduces the likelihood of non-data-like regions. VAEBM also performs better than IGEBM and JEM, while worse than HDGE. However, we note that JEM and HDGE are classifier-based models, known to be better for OOD detection (Liang et al., 2018).
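The AUROC computation itself is straightforward; a minimal sketch using scikit-learn is given below, where the two input arrays hold unnormalized log hψ,θ(x) values for in-distribution and OOD samples.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(log_h_in, log_h_out):
    # In-distribution samples should receive higher log-density, so label them 1.
    scores = np.concatenate([log_h_in, log_h_out])
    labels = np.concatenate([np.ones(len(log_h_in)), np.zeros(len(log_h_out))])
    return roc_auc_score(labels, scores)
```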
5.4 EXACT LIKELIHOOD ESTIMATE ON 2D TOY DATA
VAEBM is an explicit likelihood model with a parameterized density function. However, like other energy-based models, the estimation of the exact likelihood is difficult due to the intractable partition
function logZ. One possible way to estimate the partition function is to use Annealed Importance Sampling (AIS) (Neal, 2001). However, using AIS to estimate logZ in high-dimensional spaces is difficult. In fact, Du & Mordatch (2019) report that the estimation does not converge in 2 days on CIFAR-10. Furthermore, AIS gives a stochastic lower bound on logZ, and therefore the likelihood computed with this estimated logZ would be an upper bound for the true likelihood. This makes the estimated likelihood hard to compare with the VAE’s likelihood estimate, which is usually a lower bound on the true likelihood (Burda et al., 2015).
As a result, to illustrate that our model corrects the distribution learned by the VAE and improves the test likelihood, we conduct additional experiments on a 2-D toy dataset. We use the 25-Gaussians dataset, which is generated by a mixture of 25 two-dimensional isotropic Gaussian distributions arranged in a grid. This dataset is also studied in Che et al. (2020). The encoder and decoder of the VAE have 4 fully connected layers with 256 hidden units, and the dimension of the latent variables is 20. Our energy function has 4 fully connected layers with 256 hidden units.
In the 2-D domain, the partition function logZ can be accurately estimated by a numerical integration scheme. For the VAE, we use the IWAE bound (Burda et al., 2015) with 10,000 posterior samples to estimate its likelihood. We use 100,000 test samples from the true distribution to evaluate the likelihood. Our VAEBM obtains the average log likelihood of -1.50 nats on test samples, which significantly improves the VAE, whose average test likelihood is -2.97 nats. As a reference, we also analytically compute the log likelihood of test samples under the true distribution, and the result is -1.10 nats.
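As an illustration of the numerical integration used here, the sketch below estimates logZ on a 2-D grid; `log_unnorm_density` is a placeholder returning log pθ(x) − Eψ(x) at each point, and the integration range and resolution are illustrative.

```python
import numpy as np

def log_partition_2d(log_unnorm_density, lim=6.0, n=1000):
    # Riemann-sum estimate of log Z = log \int exp(f(x)) dx over [-lim, lim]^2.
    xs = np.linspace(-lim, lim, n)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    log_p = log_unnorm_density(pts)               # shape (n * n,)
    cell_area = (2.0 * lim / (n - 1)) ** 2
    m = log_p.max()                               # log-sum-exp for stability
    return m + np.log(np.exp(log_p - m).sum()) + np.log(cell_area)
```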
We show samples from the true distribution, VAE and VAEBM in Figure 4. We observe that the VAEBM successfully corrects the distribution learned by the VAE and has better sample quality.
5.5 SAMPLING EFFICIENCY
Despite their impressive sample quality, denoising score matching models (Song & Ermon, 2019; Ho et al., 2020) are slow at sampling, often requiring ≳ 1000 MCMC steps. Since VAEBM uses short MCMC chains, it takes only 8.79 seconds to generate 50 CIFAR-10 samples, whereas NCSN (Song & Ermon, 2019) takes 107.9 seconds, which is about 12× slower (see Appendix J for details).
6 CONCLUSIONS
We propose VAEBM, an energy-based generative model in which the data distribution is defined jointly by a VAE and an energy network, the EBM component of the model. In this joint model, the EBM and the VAE form a symbiotic relationship: the EBM component refines the initial VAEdefined distribution, while the VAE’s latent embedding space is used to accelerate sampling from the joint model and therefore enables efficient training of the energy function. We show that our model can be trained effectively in two stages with a maximum likelihood objective and we can efficiently sample it by running short Langevin dynamics chains. Experimental results demonstrate strong generative performance on several image datasets. Future work includes further scaling up the model to larger images, applying it to other domains, and using more advanced sampling algorithms.
B REPARAMETRIZATION FOR EBM
Suppose we draw the re-parametrization variables (εx, εz) ∼ pε(εx, εz). For convenience, we denote
Tθ(εx, εz) = (T^x_θ(T^z_θ(εz), εx), T^z_θ(εz)) = (x, z).   (11)
Since Tθ is a deterministic and invertible transformation that maps (εx, εz) to (x, z), by the change of variables formula, we can write
pθ(x, z) = pε(T^{−1}_θ(x, z)) |det(J_{T^{−1}_θ}(x, z))|,   (12)
where J_{T^{−1}_θ} is the Jacobian of T^{−1}_θ. Consider a Gaussian distribution as a simple example: if z ∼ N(µz, σz) and x|z ∼ N(µx(z), σx(z)), then
z = T^z_θ(εz) = µz + σz · εz,   x = T^x_θ(εx, z) = µx(z) + σx(z) · εx,
and
J_{T^{−1}_θ}(x, z) = [σx(z)^{−1}, σ^{−1}_z].
2Maximizing ELBO with respect to φ corresponds to minimizing DKL(qφ(z|x)||pθ(z|x)) while θ is fixed.
Recall that the generative model of our EBM is
hψ,θ(x, z) = e^{−Eψ(x)} pθ(x, z) / Zψ,θ.   (13)
We can apply the change of variables to hψ,θ(x, z) in a similar manner:
hψ,θ(εx, εz) = hψ,θ(Tθ(εx, εz)) |det(J_{Tθ}(εx, εz))|,   (14)
where JTθ is the Jacobian of Tθ.
Since we have the relation
J_{f^{−1}} ◦ f = J^{−1}_f   (15)
for invertible function f , we have that
hψ,θ(εx, εz) = hψ,θ(Tθ(εx, εz)) |det(J_{Tθ}(εx, εz))|   (16)
= (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pθ(Tθ(εx, εz)) |det(J_{Tθ}(εx, εz))|   (17)
= (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pε(T^{−1}_θ(x, z)) |det(J_{T^{−1}_θ}(x, z))| |det(J_{Tθ}(εx, εz))|   (18)
= (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pε(T^{−1}_θ(x, z))   (19)
= (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pε(εx, εz),   (20)
which is the distribution in Eq. 9.
After we obtain samples (εx, εz) from the distribution in Eq. 20, we obtain (x, z) by applying the transformation Tθ in Eq. 11.
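For the Gaussian example above, the transformation Tθ and the change-of-variables density of Eq. 12 can be written compactly as in the following sketch; `mu_x_fn` and `sigma_x_fn` are placeholder callables for the decoder mean and standard deviation, and `mu_z`, `sigma_z` are tensors holding the prior parameters.

```python
import torch

def gaussian_reparam(eps_z, eps_x, mu_z, sigma_z, mu_x_fn, sigma_x_fn):
    # T_theta for the Gaussian example: base noise -> (x, z).
    z = mu_z + sigma_z * eps_z
    x = mu_x_fn(z) + sigma_x_fn(z) * eps_x
    return x, z

def log_p_xz(x, z, mu_z, sigma_z, mu_x_fn, sigma_x_fn):
    # Eq. 12: p_theta(x, z) = p_eps(T^{-1}(x, z)) |det J_{T^{-1}}|,
    # with J_{T^{-1}} = diag(1 / sigma_x(z), 1 / sigma_z) in this example.
    eps_z = (z - mu_z) / sigma_z
    eps_x = (x - mu_x_fn(z)) / sigma_x_fn(z)
    std_normal = torch.distributions.Normal(0.0, 1.0)
    log_p_eps = std_normal.log_prob(eps_z).sum() + std_normal.log_prob(eps_x).sum()
    log_det = -(torch.log(sigma_x_fn(z)).sum() + torch.log(sigma_z).sum())
    return log_p_eps + log_det
```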
B.1 COMPARISON OF SAMPLING IN (εx, εz)-SPACE AND IN (x, z)-SPACE
Above we show that sampling from hψ,θ(x, z) is equivalent to sampling from hψ,θ(εx, εz) and applying the appropriate variable transformation. Here, we further analyze the connections between sampling from these two distributions with Langevin dynamics. Since each component of x and z can be re-parametrized with scaling and translation of standard Gaussian noise, without loss of generality, we assume a variable c (c can be a single latent variable in z or a single pixel in x) and write
c = µ + σε.
Suppose we sample in the ε space with energy function f on c and step size η. The update for ε is
εt+1 = εt − (η/2) ∇ε f + √η ωt,   ωt ∼ N(0, I).
Now we plug εt+1 into the expression of c while noting that ∇ε f = σ∇c f. We obtain
ct+1 = µ + σεt+1 = µ + σ(εt − (η/2) ∇ε f + √η ωt) = µ + σεt − (σ²η/2) ∇c f + √(ησ²) ωt = ct − (σ²η/2) ∇c f + √(ησ²) ωt.
Therefore, we see that running Langevin dynamics in (εx, εz)-space is equivalent to running Langevin dynamics in (x, z)-space with the step size for each component of z and x adjusted by its variance. However, considering the high dimensionality of x and z, the step size adjustment is difficult to implement.
The analysis above only considers a variable individually. More importantly, our latent variable z in the prior follows block-wise auto-regressive Gaussian distributions, so the variance of each
component in zi depends on the value of z<i. We foresee that because of this dependency, using a fixed step size per component of z will not be effective, even when it is set differently for each component. In contrast, all the components in (εx, εz)-space have unit variance. Hence, a universal step size for all the variables in this space can be used.
To further provide empirical evidence that adjusting the step size for each variable is necessary, we try sampling directly in (x, z)-space without adjusting the step size (i.e., use a universal step size for all variables). Qualitative results are presented in Figure 5. We examine several choices for the step size and we cannot obtain high-quality samples.
In conclusion, the re-parameterization provides an easy way to adjust the step size for each variable, and this adjustment is shown to be crucial for obtaining good samples.
C EXTENSION TO TRAINING OBJECTIVE
In the first stage of training VAEBM, the VAE model is trained by maximizing the training data log-likelihood, which corresponds to minimizing an upper bound on DKL(pd(x)||pθ(x)) w.r.t. θ. In the second stage, when we are training the EBM component, we use the VAE model to sample from the joint VAEBM by running the MCMC updates in the joint space of z and x. Ideally, we may want to bring pθ(x) closer to hψ,θ(x) in the second stage, because when pθ(x) = hψ,θ(x), we will not need the expensive updates for ψ. We can bring pθ(x) closer to hψ,θ(x) by minimizing DKL(pθ(x)||hψ,θ(x)) with respect to θ, which was recently discussed in the context of an EBM interpretation of GANs by Che et al. (2020). To do so, we assume the target distribution hψ,θ(x) is fixed and create a copy of θ, named θ′, and we update θ′ by the gradient:
∇θ′DKL(pθ′(x)||hψ,θ(x)) = ∇θ′Ex∼pθ′ (x) [Eψ(x)] (21)
In other words, one update step for θ′ that minimizes DKL(pθ′(x)||hψ,θ(x)) w.r.t. θ′ can be easily done by drawing samples from pθ′(x) and minimizing the energy function w.r.t. θ′. Note that this approach is similar to the generator update in training Wasserstein GANs (Arjovsky et al., 2017). The above KL objective will encourage pθ(x) to model dominant modes in hψ,θ(x). However, it may cause pθ(x) to drop modes.
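A minimal sketch of the resulting decoder update is given below; `decode` is the (trainable) VAE generator, `energy_net` is the frozen Eψ, and only the decoder parameters are registered in `decoder_opt`, so the energy function stays fixed during this step.

```python
import torch

def decoder_kl_step(decoder_opt, decode, energy_net, eps_z, eps_x):
    # One gradient step on theta' minimizing KL(p_theta'(x) || h_psi,theta(x)) via Eq. 21.
    decoder_opt.zero_grad()
    x = decode(eps_z, eps_x)        # differentiable samples from p_theta'(x)
    loss = energy_net(x).mean()     # E_{x ~ p_theta'}[E_psi(x)]
    loss.backward()
    decoder_opt.step()
    return loss.item()
```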
C.1 DERIVATION
Our derivation largely follows Appendix A.2 of Che et al. (2020). Note that every time we update θ, we are actually taking the gradient w.r.t θ′, which can be viewed as a copy of θ and is initialized as θ. In particular, we should note that the θ in hψ,θ(x) is fixed. Therefore, we have
∇θ′ DKL(pθ′(x)||hψ,θ(x)) = ∇θ′ ∫ pθ′(x) [log pθ′(x) − log hψ,θ(x)] dx
= ∫ [∇θ′ pθ′(x)] [log pθ′(x) − log hψ,θ(x)] dx + ∫ pθ′(x) [∇θ′ log pθ′(x) − ∇θ′ log hψ,θ(x)] dx   (22)
= ∫ [∇θ′ pθ′(x)] [log pθ′(x) − log hψ,θ(x)] dx,   (23)
where the second term in Eq. 22 is 0 because log hψ,θ(x) does not depend on θ′ and the expectation of the score function is 0:
∫ pθ′(x) ∇θ′ log pθ′(x) dx = Ex∼pθ′(x) [∇θ′ log pθ′(x)] = 0.
Recall that θ′ has the same value as θ before the update, so
log pθ′(x) − log hψ,θ(x) = log [ pθ′(x) / (pθ(x) e^{−Eψ(x)}) ] + logZψ,θ = Eψ(x) + logZψ,θ.   (24)
Plugging Eq. 24 into Eq. 23, we have
∇θ′ DKL(pθ′(x)||hψ,θ(x)) = ∫ ∇θ′ pθ′(x) [Eψ(x) + logZψ,θ] dx = ∇θ′ Ex∼pθ′(x) [Eψ(x)],   (25)
since ∫ ∇θ′ pθ′(x) logZψ,θ dx = logZψ,θ ∇θ′ ∫ pθ′(x) dx = 0.
C.2 RESULTS
We train VAEBM with an additional loss term that updates the parameter θ to minimize DKL(pθ(x)||hψ,θ(x)) as explained above. Our experiment uses the same initial VAE as in Sec. 5.2, and details of the implementation are introduced in Appendix F. We obtain FID 14.0 and IS 8.05, which is similar to the results of plain VAEBM (FID 12.96 and IS 8.15). Therefore, we conclude that training the model by minimizing DKL(pθ(x)||hψ,θ(x)) does not improve the performance, and updating the decoder is not necessary. This is likely because the initial VAE is pulled as closely as possible to the data distribution already, which is also the target for the joint VAEBM hψ,θ(x).
D COMPARING LIKELIHOODS ON TRAIN AND TEST SET
In Figure 6, we plot a histogram of unnormalized log-likelihoods of 10k CIFAR-10 train set and test set images. We see that our model assigns similar likelihoods to both train and test set images. This indicates that VAEBM generalizes well to unseen data and covers modes in the training data well.
E IMPLEMENTATION DETAILS
In this section, we introduce the details of training and sampling from VAEBM.
NVAE: VAEBM uses NVAE as the pθ(x) component in the model. We train the NVAE with its official implementation3. We largely follow the default settings, with one major difference that we use a Gaussian decoder instead of a discrete logistic mixture decoder as in Vahdat & Kautz (2020). The reason for this is that we can run Langevin dynamics only with continuous variables. The number of latent variable groups for CIFAR-10, CelebA 64, LSUN Church 64 and CelebA HQ 256 are 30, 15, 15 and 20, respectively.
Network for energy function: We largely adopt the energy network structure for CIFAR-10 in Du & Mordatch (2019), and we increase the depth of the network for larger images. There are 2 major differences between our energy networks and the ones used in Du & Mordatch (2019): 1. we replace the LeakyReLU activations with Swish activations, as we found it improves training stability, and 2. we do not use spectral normalization (Miyato et al., 2018); instead, we use weight normalization with data-dependent initialization (Salimans & Kingma, 2016). The network structure for each dataset is presented in Table 7.
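The following PyTorch sketch illustrates the overall flavor of such an energy network (Swish activations, weight normalization, global pooling to a scalar energy); it is not the exact architecture of Table 7, and it omits the data-dependent initialization of weight normalization.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = weight_norm(nn.Conv2d(ch, ch, 3, padding=1))
        self.conv2 = weight_norm(nn.Conv2d(ch, ch, 3, padding=1))
        self.act = nn.SiLU()  # Swish activation

    def forward(self, x):
        return x + self.conv2(self.act(self.conv1(self.act(x))))

class EnergyNet(nn.Module):
    def __init__(self, ch=128, n_blocks=4):
        super().__init__()
        self.stem = weight_norm(nn.Conv2d(3, ch, 3, padding=1))
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.act = nn.SiLU()
        self.fc = nn.Linear(ch, 1)

    def forward(self, x):
        h = self.blocks(self.stem(x))
        h = self.act(h).mean(dim=[2, 3])  # global average pooling
        return self.fc(h).squeeze(-1)     # scalar energy per image
```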
Training of energy function: We train the energy function by minimizing the negative log likelihood and an additional spectral regularization loss which penalizes the spectral norm of each convolutional layer in Eψ . The spectral regularization loss is also used in training NVAE, as we found
3https://github.com/NVlabs/NVAE
it helpful to regularize the sharpness of the energy network and better stabilize training. We use a coefficient 0.2 for the spectral regularization loss.
We summarize some key hyper-parameters we used to train VAEBM in Table 8.
On all datasets, we train VAEBM using the Adam optimizer (Kingma & Ba, 2015) and weight decay 3e−5. We use constant learning rates, shown in Table 8. Following Du & Mordatch (2019), we clip training gradients that are more than 3 standard deviations from the 2nd-order Adam parameters.
While persistent sampling using a sample replay buffer has little effect on CIFAR-10, we found it to be useful on large images such as CelebA HQ 256. When we do not use persistent sampling, we always initialize the LD chains with ( x, z), sampled from a standard Gaussian. When we use persistent sampling in training, we keep a sample replay buffer that only stores samples of z, while x is always initialized from a standard Gaussian. The size of the replay buffer is 10,000 for CIFAR10 and LSUN Church 64, and 8,000 for CelebA HQ 256. At every training iteration, we initialize the MCMC chains on z by drawing z from the replay buffer with probability p and from standard Gaussian with probability 1− p. For CIFAR-10 and LSUN Church 64, we linearly increase p from 0 to 0.6 in 5,000 training iterations, and for CelebA HQ 256, we linearly increase p from 0 to 0.6 in 3,000 training iterations. The settings of Langevin dynamics are presented in Table 8.
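A sketch of this chain-initialization logic is shown below; the class and method names are ours, the buffer stores only εz states, and the warm-up schedule for p is handled outside the class.

```python
import random
import torch

class LatentReplayBuffer:
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.states = []

    def init_chains(self, batch_size, z_dim, x_shape, p):
        # eps_x is always drawn from N(0, I); eps_z is reused from the buffer with prob. p.
        eps_x = torch.randn(batch_size, *x_shape)
        eps_z = torch.randn(batch_size, z_dim)
        if self.states:
            for i in range(batch_size):
                if random.random() < p:
                    eps_z[i] = random.choice(self.states)
        return eps_x, eps_z

    def push(self, eps_z_batch):
        self.states.extend(eps_z_batch.detach().cpu())
        self.states = self.states[-self.capacity:]  # keep the newest entries
```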
We do not explicitly set the number of training iterations. Instead, we follow Du & Mordatch (2019) to train the energy network until we cannot generate realistic samples anymore. This happens when the model overfits the training data and hence energies of negative samples are much larger than energies of training data. Typically, training takes around 25,000 iterations (or 16 epochs) on CIFAR-10, 20,000 iterations (or 3 epochs) on CelebA 64, 20,000 iterations (or 5 epochs) on LSUN Church 64, and 9,000 iterations (or 5 epochs) on CelebA HQ 256.
Test time sampling: After training the model, we generate samples for evaluation by running Langevin dynamics with (εx, εz) initialized from a standard Gaussian, regardless of whether persistent sampling is used in training or not. We run slightly longer LD chains than in training to obtain the best sample quality. In particular, our reported values are obtained from running 16 steps of LD for CIFAR-10, 20 steps of LD for CelebA 64 and LSUN Church 64, and 24 steps for CelebA HQ 256. The step sizes are the same as training step sizes.
In the CelebA HQ 256 dataset, we optionally use low-temperature initialization for better visual quality. To do this, we first draw samples from the VAE with low temperature and readjusted BN statistics as introduced by Vahdat & Kautz (2020), and then initialize the MCMC chain with (εx, εz) obtained by encoding the low-temperature samples using the VAE’s encoder without readjusted BN statistics.
Evaluation metrics: We use the official implementations of FID4 and IS5. We compute IS using 50k CIFAR 10 samples, and we compute FID between 50k generated samples and training images, except for CelebA HQ 256 where we use 30k training images (the CelebA HQ dataset contains only 30k samples).
F SETTINGS FOR ABLATION STUDY
In this section, we present the details of ablation experiments in Sec. 5.2. Throughout ablation experiments, we use a smaller NVAE with 20 groups of latent variables trained on CIFAR-10. We use the same network architectures for the energy network as in Table 7, with potentially different
4https://github.com/bioinf-jku/TTUR 5https://github.com/openai/improved-gan/tree/master/inception_score
normalization techniques discussed below. We spent significant efforts on improving each method we compare against, and we report the settings that led to the best results.
WGAN initialized with NVAE decoder: We initialize the generator with the pre-trained NVAE decoder, and the discriminator is initialized by a CIFAR-10 energy network with random weights. We use spectral normalization and batch normalization in the discriminator as we found them necessary for convergence. We update the discriminator using the Adam optimizer with constant learning rate 5e−5, and update the generator using the Adam optimizer with initial learning rate 5e−6 and cosine decay schedule. We train the generator and discriminator for 40k iterations, and we reach convergence of sample quality towards the end of training.
EBM on x, w/ or w/o initializing MCMC with NVAE samples: We train two EBMs on data space similar to Du & Mordatch (2019), where for one of them, we use the pre-trained NVAE to initialize the MCMC chains that draw samples during training. The setting for training these two EBMs are the same except for the initialization of MCMC. We use spectral normalization in the energy network and energy regularization in the training objective as done in Du & Mordatch (2019) because we found these modifications to improve performance. We train the energy function using the Adam optimizer with constant learning rate 1e−4. We train for 100k iterations, and we reach convergence of sample quality towards the end of training. During training, we draw samples from the model following the MCMC settings in Du & Mordatch (2019). In particular, we use persistent sampling and sample from the sample replay buffer with probability 0.95. We run 60 steps of Langevin dynamics to generate negative samples and we clip gradients to have individual value magnitudes of less than 0.01. We use a step size of 10 for each step of Langevin dynamics. For test time sampling, we generate samples by running 150 steps of LD with the same settings as during training.
VAEBM with DKL(pθ(x)||hψ,θ(x)) loss: We use the same network structure for Eψ as in VAEBM. We find persistent sampling significantly hurts the performance in this case, possibly due to the fact that the decoder is updated and hence the initial samples from the decoder change throughout training. Therefore, we do not use persistent training. We train the energy function using the Adam optimizer with constant learning rate 5e−5. We draw negative samples by running 10 steps of LD with step size 8e−5. We update the decoder with the gradient in Eq. 21 using the Adam optimizer with initial learning rate 5e−6 and cosine decay schedule. For test time sampling, we run 15 steps of LD with step size 5e−6.
VAEBM: The training of VAEBM in this section largely follows the settings described in Appendix E. We use the same energy network as for CIFAR-10, and we train using the Adam optimizer with constant learning rate 5e−5. Again, we found that the performance of VAEBM with or without persistent sampling is similar. We adopt persistent sampling in this section because it is faster. The setting for the buffer is the same as in Appendix E. We run 5 steps of LD with step size 8e−5 during training, and 15 steps of LD with the same step size in testing.
G QUALITATIVE RESULTS OF ABLATION STUDY
In Figure 7, we show qualitative samples from models corresponding to each item in Table 4, as well as samples generated by VAEBM with additional DKL(pθ(x)||hψ,θ(x)) loss.
H ADDITIONAL QUALITATIVE RESULTS
We present additional qualitative results in this section.
Additional samples and visualizations of MCMC on CIFAR-10 are in Figures 8 and 9, respectively.
Additional samples on CelebA 64 are in Figure 10.
Additional samples on LSUN Church 64 are in Figure 11. We visualize the effect of running MCMC by displaying sample pairs before and after MCMC in Figure 12.
Additional samples on CelebA HQ 256 generated by initializing VAE samples with temperature 0.7 are shown in Figure 13. Samples generated by initializing VAE samples with full temperature 1.0 are shown in Figure 14. We visualize the effect of running MCMC by displaying sample pairs
before and after MCMC in Figure 15. Note that the samples used to visualize MCMC are generated by initializing MCMC chains with VAE samples with full temperature 1.0.
I NEAREST NEIGHBORS
We show nearest neighbors in the training set with generated samples on CIFAR-10 (in Figure 16 and 17) and CelebA HQ 256 (in Figure 18 and 19). We observe that the nearest neighbors are significantly different from the samples, suggesting that our models generalize well.
J SETTINGS OF SAMPLING SPEED EXPERIMENT
We use the official implementation and checkpoints of NCSN at https://github.com/ermongroup/ncsn. We run the experiments on a computer with a Titan RTX GPU. We use PyTorch 1.5.0 and CUDA 10.2. | 1. What is the main contribution of the paper, and how does it improve upon previous methods?
2. What are the strengths of the proposed approach, particularly in terms of its ability to sample from the latent space?
3. Are there any limitations or potential drawbacks to the method, such as the correction term only being applied to the image space?
4. How do the long-run MCMC sampling chains behave, and how do they compare to sampling in the original (x, z) space?
5. Are there any signs of mode collapse in the synthesized results, and how might this be addressed? | Review | Review
This paper proposes a model that corrects a VAE by an energy-based model defined on image space. The model is learned in two phases. The first phase learns the VAE model, while the second phase learns the EBM correction term by MLE. Experimental results show that the proposed method outperforms pure EBMs defined on image space and also pure VAE models by large margins.
pros: the paper is clearly written and easy to follow. The ablation study clearly shows the advantage over baseline methods. Sampling from an EBM on image space is hard. With the VAE as a backbone, the sampling can be transferred to the latent space and the residual \epsilon in the image space, which is much friendlier to MCMC sampling.
cons:
The energy term is used to correct only in image space. It would be interesting to see if the VAE can be corrected by a latent EBM where the energy function is defined on (x, z).
After learning, would long-run MCMC sampling chains remain stable and mix well? It would be interesting to diagnose the long-run chain behavior, and to compare sampling in the (\epsilon_x, \epsilon_z) space and in the (x, z) space.
For the synthesized results on CIFAR-10, it seems that some patterns appear repeatedly (e.g., the white dog face). Does the model suffer from a mode collapse problem?
Overall, it is a good submission that proposes a principled method to combine VAE and EBM and demonstrates strong empirical results. I tend to accept this paper. |
ICLR | Title
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models
Abstract
Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly and are equipped with a latent space that enables fast traversal of the data manifold. However, VAEs tend to assign high probability density to regions in data space outside the actual data distribution and often fail at generating sharp images. In this paper, we propose VAEBM, a symbiotic composition of a VAE and an EBM that offers the best of both worlds. VAEBM captures the overall mode structure of the data distribution using a state-of-the-art VAE and it relies on its EBM component to explicitly exclude non-data-like regions from the model and refine the image samples. Moreover, the VAE component in VAEBM allows us to speed up MCMC updates by reparameterizing them in the VAE’s latent space. Our experimental results show that VAEBM outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin. It can generate high-quality images as large as 256×256 pixels with short MCMC chains. We also demonstrate that VAEBM provides complete mode coverage and performs well in out-of-distribution detection.
1 INTRODUCTION
Deep generative learning is a central problem in machine learning. It has found diverse applications, ranging from image (Brock et al., 2018; Karras et al., 2019; Razavi et al., 2019), music (Dhariwal et al., 2020) and speech (Ping et al., 2020; Oord et al., 2016a) generation, distribution alignment across domains (Zhu et al., 2017; Liu et al., 2017; Tzeng et al., 2017) and semi-supervised learning (Kingma et al., 2014; Izmailov et al., 2020) to 3D point cloud generation (Yang et al., 2019), light-transport simulation (Müller et al., 2019), molecular modeling (Sanchez-Lengeling & AspuruGuzik, 2018; Noé et al., 2019) and equivariant sampling in theoretical physics (Kanwar et al., 2020).
Among competing frameworks, likelihood-based models include variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016), autoregressive models (Oord et al., 2016b), and energy-based models (EBMs) (Lecun et al., 2006; Salakhutdinov et al., 2007). These models are trained by maximizing the data likelihood under the model, and unlike generative adversarial networks (GANs) (Goodfellow et al., 2014), their training is usually stable and they cover modes in data more faithfully by construction.
Among likelihood-based models, EBMs model the unnormalized data density by assigning low energy to high-probability regions in the data space (Xie et al., 2016; Du & Mordatch, 2019). EBMs are appealing because they require almost no restrictions on network architectures (unlike normalizing flows) and are therefore potentially very expressive. They also exhibit better robustness and out-of-distribution generalization (Du & Mordatch, 2019) because, during training, areas with high probability under the model but low probability under the data distribution are penalized explicitly. However, training and sampling EBMs usually requires MCMC, which can suffer from slow mode mixing and is computationally expensive when neural networks represent the energy function.
∗Work done during an internship at NVIDIA
On the other hand, VAEs are computationally more efficient for sampling than EBMs, as they do not require running expensive MCMC steps. VAEs also do not suffer from expressivity limitations that normalizing flows face (Dupont et al., 2019; Kong & Chaudhuri, 2020), and in fact, they have recently shown state-of-the-art generative results among non-autoregressive likelihood-based models (Vahdat & Kautz, 2020). Moreover, VAEs naturally come with a latent embedding of data that allows fast traverse of the data manifold by moving in the latent space and mapping the movements to the data space. However, VAEs tend to assign high probability to regions with low density under the data distribution. This often results in blurry or corrupted samples generated by VAEs. This also explains why VAEs often fail at out-of-distribution detection (Nalisnick et al., 2019).
In this paper, we propose a novel generative model as a symbiotic composition of a VAE and an EBM (VAEBM) that combines the best of both. VAEBM defines the generative distribution as the product of a VAE generator and an EBM component defined in pixel space. Intuitively, the VAE captures the majority of the mode structure in the data distribution. However, it may still generate samples from low-probability regions in the data space. Thus, the energy function focuses on refining the details and reducing the likelihood of non-data-like regions, which leads to significantly improved samples.
Moreover, we show that training VAEBM by maximizing the data likelihood easily decomposes into training the VAE and the EBM component separately. The VAE is trained using the reparameterization trick, while the EBM component requires sampling from the joint energy-based model during training. We show that we can sidestep the difficulties of sampling from VAEBM, by reparametrizing the MCMC updates using VAE’s latent variables. This allows MCMC chains to quickly traverse the model distribution and it speeds up mixing. As a result, we only need to run short chains to obtain approximate samples from the model, accelerating both training and sampling at test time.
Experimental results show that our model outperforms previous EBMs and state-of-the-art VAEs on image generation benchmarks including CIFAR-10, CelebA 64, LSUN Church 64, and CelebA HQ 256 by a large margin, reducing the gap with GANs. We also show that our model covers the modes in the data distribution faithfully, while having fewer spurious modes for out-of-distribution data. To the best of our knowledge, VAEBM is the first successful EBM applied to large images.
In summary, this paper makes the following contributions: i) We propose a new generative model using the product of a VAE generator and an EBM defined in the data space. ii) We show how training this model can be decomposed into training the VAE first, and then training the EBM component. iii) We show how MCMC sampling from VAEBM can be pushed to the VAE’s latent space, accelerating sampling. iv) We demonstrate state-of-the-art image synthesis quality among likelihood-based models, confirm complete mode coverage, and show strong out-of-distribution detection performance.
2 BACKGROUND
Energy-based Models: An EBM assumes pψ(x) to be a Gibbs distribution of the form pψ(x) = exp (−Eψ(x)) /Zψ , where Eψ(x) is the energy function with parameters ψ and Zψ =∫ x exp (−Eψ(x)) dx is the normalization constant. There is no restriction on the particular form of Eψ(x). Given a set of samples drawn from the data distribution pd(x), the goal of maximum likelihood learning is to maximize the log-likelihood L(ψ) = Ex∼pd(x) [log pψ(x)], which has the derivative (Woodford, 2006):
∂ψL(ψ) = Ex∼pd(x) [−∂ψEψ (x)] + Ex∼pψ(x) [∂ψEψ (x)] (1)
For the first expectation, the positive phase, samples are drawn from the data distribution pd(x), and for the second expectation, the negative phase, samples are drawn from the model pψ(x) itself. However, sampling from pψ(x) in the negative phase is itself intractable and approximate samples are usually drawn using MCMC. A commonly used MCMC algorithm is Langevin dynamics (LD) (Neal, 1993). Given an initial sample x0, Langevin dynamics iteratively updates it as:
xt+1 = xt − (η/2) ∇x Eψ(xt) + √η ωt,   ωt ∼ N(0, I),   (2)
where η is the step-size.1 In practice, Eq. 2 is run for finite iterations, which yields a Markov chain with an invariant distribution approximately close to the original target distribution.
1In principle one would require an accept/reject step to make it a rigorous MCMC algorithm, but for sufficiently small stepsizes this is not necessary in practice (Neal, 1993).
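For reference, one Langevin dynamics update of Eq. 2 can be written as in the sketch below (no Metropolis correction); `energy_net` is a placeholder for Eψ.

```python
import torch

def langevin_step(energy_net, x, step_size):
    # x <- x - (eta / 2) * grad_x E_psi(x) + sqrt(eta) * noise  (Eq. 2)
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(energy_net(x).sum(), x)[0]
    with torch.no_grad():
        x_next = x - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(x)
    return x_next.detach()
```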
Variational Autoencoders: VAEs define a generative model of the form pθ(x, z) = pθ(z)pθ(x|z), where z is the latent variable with prior pθ(z), and pθ(x|z) is a conditional distribution that models the likelihood of data x given z. The goal of training is to maximize the marginal log-likelihood log pθ(x) given a set of training examples. However since the marginalization is intractable, instead, the variational lower bound on log pθ(x) is maximized with qφ(z|x) as the approximate posterior:
log pθ(x) ≥ Ez∼qφ(z|x) [log pθ(x|z)]−DKL [qφ(z|x)‖pθ(z)] := Lvae(x, θ, φ). (3)
The state-of-the-art VAE, NVAE (Vahdat & Kautz, 2020), increases the expressivity of both prior and approximate posterior using hierarchical latent variables (Kingma et al., 2016) where z is decomposed into a set of disjoint groups, z = {z1, z2, . . . , zL}, and the prior pθ(z) = ∏_l pθ(zl|z<l)
and the approximate posterior qφ(z|x) = ∏ l qφ(zl|z<l,x) are defined using autoregressive distributions over the groups. We refer readers to Vahdat & Kautz (2020) for more details.
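To make the objective in Eq. 3 concrete, the sketch below computes a single-sample ELBO for a simple (non-hierarchical) VAE with a factorized Gaussian posterior and a unit-variance Gaussian decoder; `encoder` and `decoder` are placeholder modules.

```python
import torch

def elbo(x, encoder, decoder):
    mu, log_var = encoder(x)                                         # q_phi(z | x)
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)         # reparameterization
    x_mean = decoder(z)                                              # mean of p_theta(x | z)
    log_px_z = -0.5 * ((x - x_mean) ** 2).flatten(1).sum(dim=1)      # up to a constant
    kl = 0.5 * (mu ** 2 + log_var.exp() - 1.0 - log_var).sum(dim=1)  # KL(q || N(0, I))
    return (log_px_z - kl).mean()
```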
3 ENERGY-BASED VARIATIONAL AUTOENCODERS
One of the main problems of VAEs is that they tend to assign high probability to regions in data space that have low probability under the data distribution. To tackle this issue, we propose VAEBM, a generative model constructed by the product of a VAE generator and an EBM component defined in the data space. This formulation allows our model to capture the main mode structure of the data distribution using the VAE. But when training the joint VAEBM, in the negative training phase we sample from the model itself and can discover non-data-like samples, whose likelihood is then reduced by the energy function explicitly. The energy function defined in the pixel space also shares similarities with discriminator in GANs, which can generate crisp and detailed images.
Formally, we define the generative model in VAEBM as hψ,θ(x, z) = pθ(x, z) e^{−Eψ(x)} / Zψ,θ, where pθ(x, z) = pθ(z)pθ(x|z) is a VAE generator and Eψ(x) is a neural network-based energy function, operating only in the x space, and Zψ,θ = ∫ pθ(x) e^{−Eψ(x)} dx is the normalization constant. VAEBM is visualized in Fig. 1. Marginalizing out the latent variable z gives
hψ,θ(x) = (1/Zψ,θ) ∫ pθ(x, z) e^{−Eψ(x)} dz = (1/Zψ,θ) pθ(x) e^{−Eψ(x)}.   (4)
Given a training dataset, the parameters of VAEBM, ψ, θ, are trained by maximizing the marginal log-likelihood on the training data:
log hψ,θ(x) = log pθ(x) − Eψ(x) − logZψ,θ   (5)
≥ Lvae(x, θ, φ) + LEBM(x, ψ, θ),   with Lvae(x, θ, φ) = Ez∼qφ(z|x)[log pθ(x|z)] − DKL(qφ(z|x)||p(z)) and LEBM(x, ψ, θ) = −Eψ(x) − logZψ,θ,   (6)
where we replace log pθ(x) with the variational lower bound from Eq. 3. Eq. 6 forms the objective function for training VAEBM. The first term corresponds to the VAE objective and the second term corresponds to training the EBM component. Next, we discuss how we can optimize this objective.
3.1 TRAINING
The LEBM(x, ψ, θ) term in Eq. 6 is similar to the EBM training objective except that the log partition function depends on both ψ and θ. We show in Appendix A that logZψ,θ has the gradients ∂ψ logZψ,θ = Ex∼hψ,θ(x,z)[−∂ψEψ(x)] and ∂θ logZψ,θ = Ex∼hψ,θ(x,z)[∂θ log pθ(x)]. The first gradient can be estimated easily by evaluating the gradient of the energy function at samples drawn from the VAEBM model hψ,θ(x, z) using MCMC. However, the second term involves computing the intractable ∂θ log pθ(x). In Appendix A, we show that estimating ∂θ log pθ(x) requires sampling from the VAE’s posterior distribution, given model samples x ∼ hψ,θ(x, z). To avoid the computational complexity of estimating this term, for example with a second round of MCMC, we propose a two-stage algorithm for training VAEBM. In the first stage, we train the VAE model in our VAEBM by maximizing the Lvae(x, θ, φ) term in Eq. 6. This term is identical to the VAE’s objective, thus, the parameters θ and φ are trained using the reparameterization trick as in Sec. 2. In the second stage, we keep the VAE model fixed and only train the EBM component. Since θ is now fixed, we only require optimizing LEBM(x, ψ, θ) w.r.t. ψ, the parameters of the energy function. The gradient of L(ψ) = Ex∼pd[LEBM(x, ψ, θ)] w.r.t. ψ is:
∂ψL(ψ) = Ex∼pd(x)[−∂ψEψ(x)] + Ex∼hψ,θ(x,z)[∂ψEψ(x)],   (7)
which decomposes into a positive and a negative phase, as discussed in Sec. 2.
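In practice, one stochastic update of ψ implementing Eq. 7 takes the familiar contrastive form sketched below (regularization terms used in our actual training are omitted); `x_model` denotes approximate samples from hψ,θ drawn with short-run Langevin dynamics.

```python
import torch

def energy_update(energy_opt, energy_net, x_data, x_model):
    # Minimizing this loss performs gradient ascent on L(psi) in Eq. 7.
    energy_opt.zero_grad()
    loss = energy_net(x_data).mean() - energy_net(x_model.detach()).mean()
    loss.backward()
    energy_opt.step()
    return loss.item()
```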
Reparametrized sampling in the negative phase: For gradient estimation in the negative phase, we can draw samples from the model using MCMC. Naively, we can perform ancestral sampling, first sampling from the prior pθ(z), then running MCMC for pθ(x|z)e^{−Eψ(x)} in x-space. This is problematic, since pθ(x|z) is often sharp and MCMC cannot mix when the conditioning z is fixed. In this work, we instead run the MCMC iterations in the joint space of z and x. Furthermore, we accelerate the sampling procedure using reparametrization for both x and the latent variables z. Recall that when sampling from the VAE, we first sample z ∼ p(z) and then x ∼ pθ(x|z). This sampling scheme can be reparametrized by sampling from a fixed noise distribution (e.g., (εz, εx) ∼ pε = N(0, I)) and deterministic transformations Tθ such that
z = T^z_θ(εz),   x = T^x_θ(z(εz), εx) = T^x_θ(T^z_θ(εz), εx).   (8)
Here, T^z_θ denotes the transformation defined by the prior that transforms noise εz into prior samples z, and T^x_θ represents the decoder that transforms noise εx into samples x, given prior samples z. We can apply the same reparameterization when sampling from hψ,θ(x, z). This corresponds to sampling (εx, εz) from the “base distribution”:
hψ,θ(εx, εz) ∝ e^{−Eψ(T^x_θ(T^z_θ(εz), εx))} pε(εx, εz),   (9)
and then transforming them to x and z via Eq. 8 (see Appendix B for derivation). Note that εz and εx have the same scale, as pε(εx, εz) is a standard Normal distribution, while the scales of x and z can be very different. Thus, running MCMC sampling with this reparameterization in the (εx, εz)-space has the benefit that we do not need to tune the sampling scheme (e.g., step size in LD) for each variable. This is particularly helpful when z itself has multiple groups, as in our case.
The advantages of two-stage training: Besides avoiding the difficulties of estimating the full gradient of logZψ,θ, two-stage training has additional advantages. As we discussed above, updating ψ is computationally expensive, as each update requires an iterative MCMC procedure to draw samples from the model. The first stage of our training minimizes the distance between the VAE model and the data distribution, and in the second stage, the EBM further reduces the mismatch between the model and the data distribution. As the pre-trained VAE pθ(x) provides a good approximation to pd(x) already, we expect that a relatively small number of expensive updates for training ψ is needed. Moreover, the pre-trained VAE provides a latent space with an effectively lower dimensionality and a smoother distribution than the data distribution, which facilitates more efficient MCMC.
Alternative extensions: During the training of the energy function, we fix the VAE’s parameters. In Appendix C, we discuss a possible extension to our training objective that also updates the VAE.
4 RELATED WORK
Early variants of EBMs include models whose energy is defined over both data and auxiliary latent variables (Salakhutdinov & Hinton, 2009; Hinton, 2012), and models using only data variables (Hinton, 2002; Mnih & Hinton, 2005). Their energy functions are simple and they do not scale to high
dimensional data. Recently, it was shown that EBMs with deep neural networks as energy function can successfully model complex data such as natural images (Du & Mordatch, 2019; Nijkamp et al., 2019b;a). They are trained with maximum likelihood and only model the data variable. Joint EBMs (Grathwohl et al., 2020a; Liu & Abbeel, 2020) model the joint distribution of data and labels. In contrast, our VAEBM models the joint distribution of data and general latent variables.
Besides fundamental maximum likelihood training, other techniques to train EBMs exist, such as minimizing F-divergence (Yu et al., 2020a) or Stein discrepancy (Grathwohl et al., 2020b), contrastive estimation (Gutmann & Hyvärinen, 2010; Gao et al., 2020) and denoising score matching (Li et al., 2019). Recently, noise contrastive score networks and diffusion models have demonstrated high quality image synthesis (Song & Ermon, 2019; 2020; Ho et al., 2020). These models are also based on denoising score matching (DSM) (Vincent, 2011), but do not parameterize any explicit energy function and instead directly model the vector-valued score function. We view score-based models as alternatives to EBMs trained with maximum likelihood. Although they do not require iterative MCMC during training, they need very long sampling chains to anneal the noise when sampling from the model (≳ 1000 steps). Therefore, sample generation is extremely slow.
VAEBM is an EBM with a VAE component, and it shares similarities with work that builds connections between EBMs and other generative models. Zhao et al. (2017); Che et al. (2020); Song et al. (2020); Arbel et al. (2020) formulate EBMs with GANs, and use the discriminator to assign an energy. Xiao et al. (2020); Nijkamp et al. (2020) use normalizing flows that transport complex data to latent variables to facilitate MCMC sampling (Hoffman et al., 2019), and thus, their methods can be viewed as EBMs with flow component. However, due to their topology-preserving nature, normalizing flows cannot easily transport complex multimodal data, and their sample quality on images is limited. A few previous works combine VAEs and EBMs in different ways from ours. Pang et al. (2020) and Vahdat et al. (2018b;a; 2020) use EBMs for the prior distribution, and (Han et al., 2020; 2019) jointly learn a VAE and an EBM with independent sets of parameters by an adversarial game.
Finally, as we propose two-stage training, our work is related to post training of VAEs. Previous work in this direction learns the latent structure of pre-trained VAEs (Dai & Wipf, 2019; Xiao et al., 2019; Ghosh et al., 2020), and sampling from learned latent distributions improves sample quality. These methods cannot easily be extended to VAEs with hierarchical latent variables, as it is difficult to fit the joint distribution of multiple groups of variables. Our purpose for two-stage training is fundamentally different: we post-train an energy function to refine the distribution in data space.
5 EXPERIMENTS
In this section, we evaluate our proposed VAEBM through comprehensive experiments. Specifically, we benchmark sample quality in Sec. 5.1, provide detailed ablation studies on training techniques in Sec. 5.2, and study mode coverage of our model and test for spurious modes in Sec. 5.3. We choose NVAE (Vahdat & Kautz, 2020) as our VAE, which we pre-train, and use a simple ResNet as energy function Eψ , similar to Du & Mordatch (2019). We draw approximate samples both for training and testing by running short Langevin dynamics chains on the distribution in Eq. 9. Note that in NVAE, the prior distribution is a group-wise auto-regressive Gaussian, and the conditional pixel-wise distributions in x are also Gaussian. Therefore, the reparameterization corresponds to shift and scale transformations. For implementation details, please refer to Appendix E.
5.1 IMAGE GENERATION
In Table 1, we quantitatively compare the sample quality of VAEBM with different generative models on (unconditional) CIFAR-10. We adopt Inception Score (IS) (Salimans et al., 2016) and FID (Heusel et al., 2017) as quantitative metrics. Note that FID reflects the sample quality more faithfully, as potential problems have been reported for IS on CIFAR-10 (Barratt & Sharma, 2018).
We observe that our VAEBM outperforms previous EBMs and other explicit likelihood-based models by a large margin. Note that introducing persistent chains during training only leads to slight improvement, while Du & Mordatch (2019) rely on persistent chains with a sample replay buffer. This is likely due to the efficiency of sampling in latent space. Our model also produces significantly better samples than NVAE, the VAE component of our VAEBM, implying a significant impact of our proposed energy-based refinement. We also compare our model with state-of-the-art GANs and
recently proposed score-based models, and we obtain comparable or better results. Thus, we largely close the gap to GANs and score-models, while maintaining the desirable properties of models trained with maximum likelihood, such as fast sampling and better mode coverage.
Qualitative samples generated by our model are shown in Fig. 2a and intermediate samples along MCMC chains in Fig. 2b. We find that VAEBM generates good samples by running only a few MCMC steps. Initializing MCMC chains from the pre-trained VAE also helps quick equilibration.
We also train VAEBM on larger images, including CelebA 64, CelebA HQ 256 (Liu et al., 2015) and LSUN Church 64 (Yu et al., 2015). We report the FID scores for CelebA 64 and CelebA HQ 256 in Tables 2 and 3. On CelebA 64, our model obtains results comparable with the best GANs. Although our model obtains worse results than some advanced GANs on CelebA HQ 256, we significantly
reduce the gap between likelihood based models and GANs on this dataset. On LSUN Church 64, we obtain FID 13.51, which significantly improves the NVAE baseline FID 41.3. We show qualitative samples in Fig. 3. Appendix H contains additional samples and MCMC visualizations.
Our model can produce impressive samples by running very short MCMC chains; however, we find that when we run longer MCMC chains than training chains, most chains stay around the local mode without traversing between modes. We believe that the non-mixing is due to the long mixing time of Langevin dynamics (Neal et al., 2011), as Nijkamp et al. (2019b;a) also observe that models trained with short-run MCMC have non-mixing long-run chains. We conjecture that mixing can be improved by training and sampling with more advanced MCMC techniques that are known to mix faster, such as HMC (Neal et al., 2011), and this will be left for future work.
Table 4: Comparison for IS and FID on CIFAR10 between several related training methods.
Model | IS↑ | FID↓
NVAE (Vahdat & Kautz) | 5.19 | 55.97
EBM on x (Du & Mordatch) | 5.85 | 48.89
EBM on x, MCMC init w/ NVAE | 7.28 | 29.32
WGAN w/ NVAE decoder | 7.41 | 20.39
VAEBM (ours) | 8.15 | 12.96
Table 5: Mode coverage on StackedMNIST.
Model | Modes↑ | KL↓
VEEGAN (Srivastava et al.) | 761.8 | 2.173
PacGAN (Lin et al.) | 992.0 | 0.277
PresGAN (Dieng et al.) | 999.6 | 0.115
InclusiveGAN (Yu et al.) | 997 | 0.200
StyleGAN2 (Karras et al.) | 940 | 0.424
VAEBM (ours) | 1000 | 0.087
5.2 ABLATION STUDIES
In Table 4, we compare VAEBM to several closely related baselines. All the experiments here are performed on CIFAR-10, and for simplicity, we use smaller models than those used in Table 1. Appendix F summarizes the experimental settings and Appendix G provides qualitative samples.
Data space vs. augmented space: One key difference between VAEBM and previous work such as Du & Mordatch (2019) is that our model is defined on the augmented space (x, z), while their EBM only involves x. Since we pre-train the VAE, one natural question is whether our strong results are due to good initial samples x from the VAE, which are used to launch the MCMC chains. To address this, we train an EBM purely on x as done in Du & Mordatch (2019). We also train another EBM only on x, but we initialize the MCMC chains with samples from the pre-trained NVAE instead of noise. As shown in line 3 of Table 4, this initialization helps the EBM which is defined only on x. However, VAEBM in the augmented space outperforms the EBMs on x only by a large margin.
Adversarial training vs. sampling: The gradient for ψ in Eq. 7 is similar to the gradient updates of WGAN’s discriminator (Arjovsky et al., 2017). The key difference is that we draw (approximate) samples from hψ(x) by MCMC, while WGAN draws negative samples from a generator (Che et al., 2020). WGAN updates the generator by playing an adversarial game, while we only update the energy function Eψ . We compare these two methods by training ψ and θ with the WGAN objective and initializing θ with the NVAE decoder. As shown in line 4 of Table 4, we significantly outperform the WGAN version of our model, implying the advantage of our method over adversarial training.
5.3 TEST FOR SPURIOUS OR MISSING MODES
We evaluate mode coverage on StackedMNIST. This dataset contains images generated by randomly choosing 3 MNIST images and stacking them along the RGB channels. Hence, the data distribution has 1000 modes. Following Lin et al. (2018), we report the number of covered modes and the KL divergence from the categorical distribution over 1000 categories from generated samples to true data (Table 5). VAEBM covers all modes and achieves the lowest KL divergence even compared to GANs that are specifically designed for this task. Hence, our model covers the modes more equally. We also plot the histogram of likelihoods for CIFAR-10 train/test images (Fig. 6, Appendix D) and present nearest neighbors of generated samples (Appendix I). We conclude that we do not overfit.
We evaluate spurious modes in our model by assessing its performance on out-of-distribution (OOD) detection. Specifically, we use VAEBM trained on CIFAR-10, and estimate unnormalized log hψ,θ(x) on in-distribution samples (from CIFAR-10 test set) and OOD samples from several datasets. Following Nalisnick et al. (2019), we use area under the ROC curve (AUROC) as quantitative metric, where high AUROC indicates that the model correctly assigns low density to OOD samples. In Table 6, we see that VAEBM has significantly higher AUROC than NVAE, justifying our argument that the energy function reduces the likelihood of non-data-like regions. VAEBM also performs better than IGEBM and JEM, while worse than HDGE. However, we note that JEM and HDGE are classifier-based models, known to be better for OOD detection (Liang et al., 2018).
5.4 EXACT LIKELIHOOD ESTIMATE ON 2D TOY DATA
VAEBM is an explicit likelihood model with a parameterized density function. However, like other energy-based models, the estimation of the exact likelihood is difficult due to the intractable partition
function logZ. One possible way to estimate the partition function is to use Annealed Importance Sampling (AIS) (Neal, 2001). However, using AIS to estimate logZ in high-dimensional spaces is difficult. In fact, Du & Mordatch (2019) report that the estimation does not converge in 2 days on CIFAR-10. Furthermore, AIS gives a stochastic lower bound on logZ, and therefore the likelihood computed with this estimated logZ would be an upper bound for the true likelihood. This makes the estimated likelihood hard to compare with the VAE’s likelihood estimate, which is usually a lower bound on the true likelihood (Burda et al., 2015).
As a result, to illustrate that our model corrects the distribution learned by the VAE and improves the test likelihood, we conduct additional experiments on a 2-D toy dataset. We use the 25-Gaussians dataset, which is generated by a mixture of 25 two-dimensional isotropic Gaussian distributions arranged in a grid. This dataset is also studied in Che et al. (2020). The encoder and decoder of the VAE have 4 fully connected layers with 256 hidden units, and the dimension of the latent variables is 20. Our energy function has 4 fully connected layers with 256 hidden units.
In the 2-D domain, the partition function logZ can be accurately estimated by a numerical integration scheme. For the VAE, we use the IWAE bound (Burda et al., 2015) with 10,000 posterior samples to estimate its likelihood. We use 100,000 test samples from the true distribution to evaluate the likelihood. Our VAEBM obtains the average log likelihood of -1.50 nats on test samples, which significantly improves the VAE, whose average test likelihood is -2.97 nats. As a reference, we also analytically compute the log likelihood of test samples under the true distribution, and the result is -1.10 nats.
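A minimal sketch of such a numerical integration scheme is given below; the integration domain and grid resolution are assumptions chosen for the 25-Gaussians layout, and `unnorm_log_density` stands for log pθ(x) − Eψ(x) evaluated by the trained model.

import numpy as np

def log_partition_2d(unnorm_log_density, lo=-8.0, hi=8.0, n=1000):
    # Estimate logZ = log ∫ exp(f(x)) dx over the square [lo, hi]^2 with a Riemann sum.
    xs = np.linspace(lo, hi, n)
    cell = (xs[1] - xs[0]) ** 2
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    f = unnorm_log_density(pts)                       # f(x) = log p_theta(x) - E_psi(x)
    m = f.max()                                       # log-sum-exp for numerical stability
    return m + np.log(np.exp(f - m).sum()) + np.log(cell)

# The test log-likelihood of the joint model is then the mean over test points of
# unnorm_log_density(x_test) - logZ.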
We show samples from the true distribution, VAE and VAEBM in Figure 4. We observe that the VAEBM successfully corrects the distribution learned by the VAE and has better sample quality.
5.5 SAMPLING EFFICIENCY
Despite their impressive sample quality, denoising score matching models (Song & Ermon, 2019; Ho et al., 2020) are slow at sampling, often requiring ≳ 1000 MCMC steps. Since VAEBM uses short MCMC chains, it takes only 8.79 seconds to generate 50 CIFAR-10 samples, whereas NCSN (Song & Ermon, 2019) takes 107.9 seconds, which is about 12× slower (see Appendix J for details).
6 CONCLUSIONS
We propose VAEBM, an energy-based generative model in which the data distribution is defined jointly by a VAE and an energy network, the EBM component of the model. In this joint model, the EBM and the VAE form a symbiotic relationship: the EBM component refines the initial VAE-defined distribution, while the VAE’s latent embedding space is used to accelerate sampling from the joint model and therefore enables efficient training of the energy function. We show that our model can be trained effectively in two stages with a maximum likelihood objective and that we can efficiently sample from it by running short Langevin dynamics chains. Experimental results demonstrate strong generative performance on several image datasets. Future work includes further scaling up the model to larger images, applying it to other domains, and using more advanced sampling algorithms.
B REPARAMETRIZATION FOR EBM
Suppose we draw the re-parametrization variables (εx, εz) ∼ pε(εx, εz). For convenience, we denote

Tθ(εx, εz) = (T_θ^x(T_θ^z(εz), εx), T_θ^z(εz)) = (x, z).  (11)
Since Tθ is a deterministic and invertible transformation that maps ( x, z) to (x, z), by the change of variables formula, we can write
pθ(x, z) = pε(T_θ^{-1}(x, z)) |det(J_{T_θ^{-1}}(x, z))|,  (12)

where J_{T_θ^{-1}} is the Jacobian of T_θ^{-1}. Consider a Gaussian distribution as a simple example: if z ∼ N(µz, σz) and x|z ∼ N(µx(z), σx(z)), then

z = T_θ^z(εz) = µz + σz · εz,  x = T_θ^x(εx, z) = µx(z) + σx(z) · εx,

and

J_{T_θ^{-1}}(x, z) = [σx(z)^{-1}, σz^{-1}].
2Maximizing ELBO with respect to φ corresponds to minimizing DKL(qφ(z|x)||pθ(z|x)) while θ is fixed.
Recall that the generative model of our EBM is

hψ,θ(x, z) = e^{−Eψ(x)} pθ(x, z) / Zψ,θ.  (13)
We can apply the change of variables to hψ,θ(x, z) in a similar manner:

hψ,θ(εx, εz) = hψ,θ(Tθ(εx, εz)) |det(J_{Tθ}(εx, εz))|,  (14)

where J_{Tθ} is the Jacobian of Tθ.
Since we have the relation

J_{f^{-1}} ◦ f = J_f^{-1}  (15)

for an invertible function f, we have that
hψ,θ(εx, εz) = hψ,θ(Tθ(εx, εz)) |det(J_{Tθ}(εx, εz))|  (16)
 = (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pθ(Tθ(εx, εz)) |det(J_{Tθ}(εx, εz))|  (17)
 = (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pε(T_θ^{-1}(x, z)) |det(J_{T_θ^{-1}}(x, z))| |det(J_{Tθ}(εx, εz))|  (18)
 = (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pε(T_θ^{-1}(x, z))  (19)
 = (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pε(εx, εz),  (20)
which is the distribution in Eq. 9.
After we have obtained samples (εx, εz) from the distribution in Eq. 20, we obtain (x, z) by applying the transformation Tθ in Eq. 11.
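For the simple Gaussian example above, the transformation Tθ of Eq. 11 can be written in a few lines. This is only an illustrative sketch; `mu_x_fn` and `sigma_x_fn` are placeholder names for the decoder statistics.

import torch

def transform_gaussian(eps_x, eps_z, mu_z, sigma_z, mu_x_fn, sigma_x_fn):
    # Map base noise (eps_x, eps_z) to (x, z) for the Gaussian example.
    z = mu_z + sigma_z * eps_z              # z = T_theta^z(eps_z)
    x = mu_x_fn(z) + sigma_x_fn(z) * eps_x  # x = T_theta^x(eps_x, z)
    return x, z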
B.1 COMPARISON OF SAMPLING IN (εx, εz)-SPACE AND IN (x, z)-SPACE
Above we show that sampling from hψ,θ(x, z) is equivalent to sampling from hψ,θ(εx, εz) and applying the appropriate variable transformation. Here, we further analyze the connections between sampling from these two distributions with Langevin dynamics. Since each component of x and z can be re-parametrized with scaling and translation of standard Gaussian noise, without loss of generality, we assume a variable c (c can be a single latent variable in z or a single pixel in x) and write

c = µ + σε.
Suppose we sample in the ε-space with energy function f on c and step size η. The update for ε is

ε_{t+1} = ε_t − (η/2) ∇ε f + √η ωt,  ωt ∼ N(0, I).
Now we plug ε_{t+1} into the expression of c while noting that ∇ε f = σ ∇c f. We obtain

c_{t+1} = µ + σ ε_{t+1} = µ + σ (ε_t − (η/2) ∇ε f + √η ωt) = µ + σ ε_t − (σ²η/2) ∇c f + √(ησ²) ωt
 = c_t − (σ²η/2) ∇c f + √(ησ²) ωt.
Therefore, we see that running Langevin dynamics in (εx, εz)-space is equivalent to running Langevin dynamics in (x, z)-space with step size for each component of z and x adjusted by its variance. However, considering the high dimensionality of x and z, the step size adjustment is difficult to implement.
The analysis above only considers a variable individually. More importantly, our latent variable z in the prior follows block-wise auto-regressive Gaussian distributions, so the variance of each component in zi depends on the value of z<i. We foresee that because of this dependency, using a fixed step size per component of z will not be effective, even when it is set differently for each component. In contrast, all the components in (εx, εz)-space have a unit variance. Hence, a universal step size for all the variables in this space can be used.
To further provide empirical evidence that adjusting the step size for each variable is necessary, we try sampling directly in (x, z)-space without adjusting the step size (i.e., use a universal step size for all variables). Qualitative results are presented in Figure 5. We examine several choices for the step size and we cannot obtain high-quality samples.
In conclusion, the re-parameterization provides an easy implementation to adjust step size for each variable, and the adjustment is shown to be crucial to obtain good samples.
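To make the procedure concrete, the following sketch runs short Langevin dynamics on the base variables of Eq. 9 with a single universal step size. It is a simplified illustration, not the exact training code: `energy_net` stands for Eψ, `transform` for the map Tθ of Eq. 11, and the default step size is only a placeholder.

import torch

def sample_vaebm(energy_net, transform, shape_x, shape_z, steps=10, step_size=8e-5):
    eps_x = torch.randn(shape_x, requires_grad=True)
    eps_z = torch.randn(shape_z, requires_grad=True)
    for _ in range(steps):
        x, _ = transform(eps_x, eps_z)
        # Negative log of Eq. 9 up to a constant: E_psi(x) + 0.5 * ||eps||^2.
        neg_log_h = energy_net(x).sum() + 0.5 * (eps_x ** 2).sum() + 0.5 * (eps_z ** 2).sum()
        grad_x, grad_z = torch.autograd.grad(neg_log_h, [eps_x, eps_z])
        with torch.no_grad():
            eps_x += -0.5 * step_size * grad_x + (step_size ** 0.5) * torch.randn_like(eps_x)
            eps_z += -0.5 * step_size * grad_z + (step_size ** 0.5) * torch.randn_like(eps_z)
    with torch.no_grad():
        return transform(eps_x, eps_z)   # returns (x, z)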
C EXTENSION TO TRAINING OBJECTIVE
In the first stage of training VAEBM, the VAE model is trained by maximizing the training data log-likelihood which corresponds to minimizing an upper bound on DKL(pd(x)||pθ(x)) w.r.t. θ. In the second stage, when we are training the EBM component, we use the VAE model to sample from the joint VAEBM by running the MCMC updates in the joint space of z and x. Ideally, we may want to bring pθ(x) closer to hψ,θ(x) in the second stage, because when pθ(x) = hψ,θ(x), we will not need the expensive updates for ψ. We can bring pθ(x) closer to hψ,θ(x) by minimizing DKL(pθ(x)||hψ,θ(x)) with respect to θ which was recently discussed in the context of an EBMinterpretation of GANs by Che et al. (2020). To do so, we assume the target distribution hψ,θ(x) is fixed and create a copy of θ, named θ′, and we update θ′ by the gradient:
∇θ′DKL(pθ′(x)||hψ,θ(x)) = ∇θ′Ex∼pθ′ (x) [Eψ(x)] (21)
In other words, one update step for θ′ that minimizes DKL(pθ′(x)||hψ,θ(x)) w.r.t. θ′ can be easily done by drawing samples from pθ′(x) and minimizing the energy function w.r.t. θ′. Note that this approach is similar to the generator update in training Wasserstein GANs (Arjovsky et al., 2017). The above KL objective will encourage pθ(x) to model the dominant modes in hψ,θ(x). However, it may cause pθ(x) to drop modes.
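A single update of θ′ following Eq. 21 can be sketched as below. The names `decoder_sample`, `energy_net`, and `optimizer_theta_prime` are placeholders for this illustration.

import torch

def decoder_kl_step(decoder_sample, energy_net, optimizer_theta_prime, batch_size=64):
    # One step of Eq. 21: lower the energy of reparameterized decoder samples.
    x = decoder_sample(batch_size)        # differentiable w.r.t. theta'
    loss = energy_net(x).mean()           # E_{x ~ p_theta'}[E_psi(x)]
    optimizer_theta_prime.zero_grad()
    loss.backward()
    optimizer_theta_prime.step()
    return loss.item()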
C.1 DERIVATION
Our derivation largely follows Appendix A.2 of Che et al. (2020). Note that every time we update θ, we are actually taking the gradient w.r.t θ′, which can be viewed as a copy of θ and is initialized as θ. In particular, we should note that the θ in hψ,θ(x) is fixed. Therefore, we have
∇θ′ DKL(pθ′(x)||hψ,θ(x)) = ∇θ′ ∫ pθ′(x) [log pθ′(x) − log hψ,θ(x)] dx
 = ∫ [∇θ′ pθ′(x)] [log pθ′(x) − log hψ,θ(x)] dx + ∫ pθ′(x) [∇θ′ log pθ′(x) − ∇θ′ log hψ,θ(x)] dx  (22)
 = ∫ [∇θ′ pθ′(x)] [log pθ′(x) − log hψ,θ(x)] dx,  (23)

where the second term in Eq. 22 is 0 because log hψ,θ(x) does not depend on θ′ and the expectation of the score function is 0:

∫ pθ′(x) ∇θ′ log pθ′(x) dx = Ex∼pθ′(x)[∇θ′ log pθ′(x)] = 0.
Recall that θ′ has the same value as θ before the update, so

log pθ′(x) − log hψ,θ(x) = log [ pθ′(x) / (pθ(x) e^{−Eψ(x)}) ] + logZψ,θ = Eψ(x) + logZψ,θ.  (24)
Plugging Eq. 24 into Eq. 23, we have

∇θ′ DKL(pθ′(x)||hψ,θ(x)) = ∫ ∇θ′ pθ′(x) [Eψ(x) + logZψ,θ] dx = ∇θ′ Ex∼pθ′(x)[Eψ(x)],  (25)

since logZψ,θ does not depend on θ′ and ∫ ∇θ′ pθ′(x) logZψ,θ dx = logZψ,θ ∇θ′ ∫ pθ′(x) dx = 0.
C.2 RESULTS
We train VAEBM with an additional loss term that updates the parameter θ to minimize DKL(pθ(x)||hψ,θ(x)) as explained above. Our experiment uses the same initial VAE as in Sec. 5.2, and details of the implementation are introduced in Appendix F. We obtain FID 14.0 and IS 8.05, which is similar to the results of plain VAEBM (FID 12.96 and IS 8.15). Therefore, we conclude that training the model by minimizingDKL(pθ(x)||hψ,θ(x)) does not improve the performance, and updating the decoder is not necessary. This is likely because the initial VAE is pulled as closely as possible to the data distribution already, which is also the target for the joint VAEBM hψ,θ(x).
D COMPARING LIKELIHOODS ON TRAIN AND TEST SET
In Figure 6, we plot a histogram of unnormalized log-likelihoods of 10k CIFAR-10 train set and test set images. We see that our model assigns similar likelihoods to both train and test set images. This indicates that VAEBM generalizes well to unseen data and covers modes in the training data well.
E IMPLEMENTATION DETAILS
In this section, we introduce the details of training and sampling from VAEBM.
NVAE: VAEBM uses NVAE as the pθ(x) component in the model. We train the NVAE with its official implementation3. We largely follow the default settings, with one major difference that we use a Gaussian decoder instead of a discrete logistic mixture decoder as in Vahdat & Kautz (2020). The reason for this is that we can run Langevin dynamics only with continuous variables. The number of latent variable groups for CIFAR-10, CelebA 64, LSUN Church 64 and CelebA HQ 256 are 30, 15, 15 and 20, respectively.
Network for energy function: We largely adopt the energy network structure for CIFAR-10 in Du & Mordatch (2019), and we increase the depth of the network for larger images. There are 2 major differences between our energy networks and the ones used in Du & Mordatch (2019): 1. we replace the LeakyReLU activations with Swish activations, as we found it improves training stability, and 2. we do not use spectral normalization (Miyato et al., 2018); instead, we use weight normalization with data-dependent initialization (Salimans & Kingma, 2016). The network structure for each dataset is presented in Table 7.
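The exact architectures are listed in Table 7; the following is only a minimal sketch of an energy network in this style (Swish activations, weight-normalized convolutions, scalar output). Vanilla weight normalization is used here for brevity, without the data-dependent initialization mentioned above.

import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

class Swish(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = weight_norm(nn.Conv2d(ch, ch, 3, padding=1))
        self.conv2 = weight_norm(nn.Conv2d(ch, ch, 3, padding=1))
        self.act = Swish()

    def forward(self, x):
        return x + self.conv2(self.act(self.conv1(self.act(x))))

class EnergyNet(nn.Module):
    # A minimal sketch of E_psi: conv stem, a few residual blocks, one scalar per image.
    def __init__(self, ch=128, n_blocks=4):
        super().__init__()
        self.stem = weight_norm(nn.Conv2d(3, ch, 3, padding=1))
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.act = Swish()
        self.fc = nn.Linear(ch, 1)

    def forward(self, x):
        h = self.blocks(self.stem(x))
        h = self.act(h).mean(dim=[2, 3])   # global average pooling
        return self.fc(h).squeeze(-1)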
Training of energy function: We train the energy function by minimizing the negative log likelihood and an additional spectral regularization loss which penalizes the spectral norm of each convolutional layer in Eψ. The spectral regularization loss is also used in training NVAE, as we found it helpful to regularize the sharpness of the energy network and better stabilize training. We use a coefficient 0.2 for the spectral regularization loss.
3 https://github.com/NVlabs/NVAE
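A sketch of such a spectral penalty is shown below: it estimates the largest singular value of each reshaped convolution weight by power iteration and sums the results. The number of power iterations and the handling of weight-normalized parameters are implementation details assumed here, not specified by the paper.

import torch
import torch.nn as nn

def spectral_reg_loss(energy_net, coeff=0.2, n_power_iter=2):
    # Sum of estimated largest singular values of conv weights, scaled by coeff.
    loss = 0.0
    for m in energy_net.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight.reshape(m.weight.shape[0], -1)   # (out, in*k*k)
            u = torch.randn(w.shape[0], device=w.device)
            for _ in range(n_power_iter):                 # power iteration for sigma_max
                v = torch.mv(w.t(), u)
                v = v / (v.norm() + 1e-12)
                u = torch.mv(w, v)
                u = u / (u.norm() + 1e-12)
            sigma = torch.dot(u, torch.mv(w, v))
            loss = loss + sigma
    return coeff * loss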
We summarize some key hyper-parameters we used to train VAEBM in Table 8.
On all datasets, we train VAEBM using the Adam optimizer (Kingma & Ba, 2015) and weight decay 3e−5. We use constant learning rates, shown in Table 8. Following Du & Mordatch (2019), we clip training gradients that are more than 3 standard deviations from the 2nd-order Adam parameters.
While persistent sampling using a sample replay buffer has little effect on CIFAR-10, we found it to be useful on large images such as CelebA HQ 256. When we do not use persistent sampling, we always initialize the LD chains with ( x, z), sampled from a standard Gaussian. When we use persistent sampling in training, we keep a sample replay buffer that only stores samples of z, while x is always initialized from a standard Gaussian. The size of the replay buffer is 10,000 for CIFAR10 and LSUN Church 64, and 8,000 for CelebA HQ 256. At every training iteration, we initialize the MCMC chains on z by drawing z from the replay buffer with probability p and from standard Gaussian with probability 1− p. For CIFAR-10 and LSUN Church 64, we linearly increase p from 0 to 0.6 in 5,000 training iterations, and for CelebA HQ 256, we linearly increase p from 0 to 0.6 in 3,000 training iterations. The settings of Langevin dynamics are presented in Table 8.
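The chain initialization described above can be sketched as follows, assuming the replay buffer stores previously sampled latent noise variables εz; all names are placeholders.

import torch

def init_chains(buffer_z, batch_size, z_shape, x_shape, p_reuse):
    # eps_x always starts from N(0, I); eps_z is taken from the buffer with probability p_reuse.
    eps_x = torch.randn(batch_size, *x_shape)
    eps_z = torch.randn(batch_size, *z_shape)
    if len(buffer_z) > 0:
        reuse = torch.rand(batch_size) < p_reuse
        n_reuse = int(reuse.sum())
        if n_reuse > 0:
            idx = torch.randint(len(buffer_z), (n_reuse,))
            eps_z[reuse] = torch.stack([buffer_z[i] for i in idx.tolist()])
    return eps_x, eps_z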
We do not explicitly set the number of training iterations. Instead, we follow Du & Mordatch (2019) to train the energy network until we cannot generate realistic samples anymore. This happens when the model overfits the training data and hence energies of negative samples are much larger than energies of training data. Typically, training takes around 25,000 iterations (or 16 epochs) on CIFAR-10, 20,000 iterations (or 3 epochs) on CelebA 64, 20,000 iterations (or 5 epochs) on LSUN Church 64, and 9,000 iterations (or 5 epochs) on CelebA HQ 256.
Test time sampling: After training the model, we generate samples for evaluation by running Langevin dynamics with (εx, εz) initialized from a standard Gaussian, regardless of whether persistent sampling is used in training or not. We run slightly longer LD chains than in training to obtain the best sample quality. In particular, our reported values are obtained from running 16 steps of LD for CIFAR-10, 20 steps of LD for CelebA 64 and LSUN Church 64, and 24 steps for CelebA HQ 256. The step sizes are the same as the training step sizes.
In CelebA HQ 256 dataset, we optionally use low temperature initialization for better visual quality. To do this, we first draw samples from the VAE with low temperature and readjusted the BN statistics as introduced by Vahdat & Kautz (2020), and then initialize the MCMC chain by ( x, z) obtained by encoding the low-temperature samples using VAE’s encoder without readjusted BN statistics.
Evaluation metrics: We use the official implementations of FID4 and IS5. We compute IS using 50k CIFAR 10 samples, and we compute FID between 50k generated samples and training images, except for CelebA HQ 256 where we use 30k training images (the CelebA HQ dataset contains only 30k samples).
F SETTINGS FOR ABLATION STUDY
In this section, we present the details of the ablation experiments in Sec. 5.2. Throughout the ablation experiments, we use a smaller NVAE with 20 groups of latent variables trained on CIFAR-10. We use the same network architectures for the energy network as in Table 7, with potentially different normalization techniques discussed below. We spent significant effort on improving each method we compare against, and we report the settings that led to the best results.
4 https://github.com/bioinf-jku/TTUR
5 https://github.com/openai/improved-gan/tree/master/inception_score
WGAN initialized with NVAE decoder: We initialize the generator with the pre-trained NVAE decoder, and the discriminator is initialized by a CIFAR-10 energy network with random weights. We use spectral normalization and batch normalization in the discriminator as we found them necessary for convergence. We update the discriminator using the Adam optimizer with constant learning rate 5e−5, and update the generator using the Adam optimizer with initial learning rate 5e−6 and cosine decay schedule. We train the generator and discriminator for 40k iterations, and we reach convergence of sample quality towards the end of training.
EBM on x, w/ or w/o initializing MCMC with NVAE samples: We train two EBMs on data space similar to Du & Mordatch (2019), where for one of them, we use the pre-trained NVAE to initialize the MCMC chains that draw samples during training. The setting for training these two EBMs are the same except for the initialization of MCMC. We use spectral normalization in the energy network and energy regularization in the training objective as done in Du & Mordatch (2019) because we found these modifications to improve performance. We train the energy function using the Adam optimizer with constant learning rate 1e−4. We train for 100k iterations, and we reach convergence of sample quality towards the end of training. During training, we draw samples from the model following the MCMC settings in Du & Mordatch (2019). In particular, we use persistent sampling and sample from the sample replay buffer with probability 0.95. We run 60 steps of Langevin dynamics to generate negative samples and we clip gradients to have individual value magnitudes of less than 0.01. We use a step size of 10 for each step of Langevin dynamics. For test time sampling, we generate samples by running 150 steps of LD with the same settings as during training.
VAEBM with DKL(pθ(x)||hψ,θ(x)) loss: We use the same network structure for Eψ as in VAEBM. We find persistent sampling significantly hurts the performance in this case, possibly due to the fact that the decoder is updated and hence the initial samples from the decoder change throughout training. Therefore, we do not use persistent training. We train the energy function using the Adam optimizer with constant learning rate 5e−5. We draw negative samples by running 10 steps of LD with step size 8e−5. We update the decoder with the gradient in Eq. 21 using the Adam optimizer with initial learning rate 5e−6 and cosine decay schedule. For test time sampling, we run 15 steps of LD with step size 5e−6.
VAEBM: The training of VAEBM in this section largely follows the settings described in Appendix E. We use the same energy network as for CIFAR-10, and we train using the Adam optimizer with constant learning rate 5e−5. Again, we found that the performance of VAEBM with or without persistent sampling is similar. We adopt persistent sampling in this section because it is faster. The setting for the buffer is the same as in Appendix E. We run 5 steps of LD with step size 8e−5 during training, and 15 steps of LD with the same step size in testing.
G QUALITATIVE RESULTS OF ABLATION STUDY
In Figure 7, we show qualitative samples from models corresponding to each item in Table 4, as well as samples generated by VAEBM with additional DKL(pθ(x)||hψ,θ(x)) loss.
H ADDITIONAL QUALITATIVE RESULTS
We present additional qualitative results in this section.
Additional samples and visualizations of MCMC on CIFAR-10 are in Figures 8 and 9, respectively.
Additional samples on CelebA 64 are in Figure 10.
Additional samples on LSUN Church 64 are in Figure 11. We visualize the effect of running MCMC by displaying sample pairs before and after MCMC in Figure 12.
Additional samples on CelebA HQ 256 generated by initializing VAE samples with temperature 0.7 are shown in Figure 13. Samples generated by initializing VAE samples with full temperature 1.0 are shown in Figure 14. We visualize the effect of running MCMC by displaying sample pairs
before and after MCMC in Figure 15. Note that the samples used to visualize MCMC are generated by initializing MCMC chains with VAE samples with full temperature 1.0.
I NEAREST NEIGHBORS
We show nearest neighbors in the training set with generated samples on CIFAR-10 (in Figure 16 and 17) and CelebA HQ 256 (in Figure 18 and 19). We observe that the nearest neighbors are significantly different from the samples, suggesting that our models generalize well.
J SETTINGS OF SAMPLING SPEED EXPERIMENT
We use the official implementation and checkpoints of NCSN at https://github.com/ermongroup/ncsn. We run the experiments on a computer with a Titan RTX GPU. We use PyTorch 1.5.0 and CUDA 10.2.
1. What is the focus of the paper regarding EBMs and their combination with VAEs?
2. What are the strengths of the proposed approach in terms of generative performance?
3. What are the weaknesses of the paper, particularly regarding the evaluation of the trained model?
4. Do you have any questions or suggestions regarding the use of Langevin sampling steps or compositing with other models?
Review
Strengths: The paper provides a thorough overview of recent work towards training EBMs.
The approach generates high quality image samples by combining EBMs and VAE based models. The paper is well written and is easy to follow. I find it quite interesting that a combination of both models leads to significantly improved overall generative performance. I also enjoyed the proposed change in the paper -- it seems to elegantly solve several problems in EBM training.
Weaknesses: My most major concern is that since we are utilizing a maximum likelihood objective to train models, it would be good to evaluate the overall likelihood of the trained model, even if only in the 2D domain. The histogram of likelihoods of data points is a bit disappointing -- it follows a similar trend to other EBM models, but it would be nicer if it followed a Gaussian distribution. What happens when more Langevin sampling steps are applied to the model (greater than the few used in training)? I'm also curious what sampling only the trained energy model looks like (without using the trained VAE parameterization) at evaluation time. I would also be curious to see how the trained EBM, with the VAE generator, can compose together with other models. See for example [1].
[1] Yilun Du, Shuang Li, Igor Mordatch. Compositional Visual Generation and Inference with Energy Based Models. NeurIPS 2020
Post-Rebuttal Update
I thank the authors for responding to my concerns. I enjoyed reading the paper and maintain my rating. |
ICLR | Title
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models
Abstract
Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly and are equipped with a latent space that enables fast traversal of the data manifold. However, VAEs tend to assign high probability density to regions in data space outside the actual data distribution and often fail at generating sharp images. In this paper, we propose VAEBM, a symbiotic composition of a VAE and an EBM that offers the best of both worlds. VAEBM captures the overall mode structure of the data distribution using a state-of-the-art VAE and it relies on its EBM component to explicitly exclude non-data-like regions from the model and refine the image samples. Moreover, the VAE component in VAEBM allows us to speed up MCMC updates by reparameterizing them in the VAE’s latent space. Our experimental results show that VAEBM outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin. It can generate high-quality images as large as 256×256 pixels with short MCMC chains. We also demonstrate that VAEBM provides complete mode coverage and performs well in out-of-distribution detection.
1 INTRODUCTION
Deep generative learning is a central problem in machine learning. It has found diverse applications, ranging from image (Brock et al., 2018; Karras et al., 2019; Razavi et al., 2019), music (Dhariwal et al., 2020) and speech (Ping et al., 2020; Oord et al., 2016a) generation, distribution alignment across domains (Zhu et al., 2017; Liu et al., 2017; Tzeng et al., 2017) and semi-supervised learning (Kingma et al., 2014; Izmailov et al., 2020) to 3D point cloud generation (Yang et al., 2019), light-transport simulation (Müller et al., 2019), molecular modeling (Sanchez-Lengeling & AspuruGuzik, 2018; Noé et al., 2019) and equivariant sampling in theoretical physics (Kanwar et al., 2020).
Among competing frameworks, likelihood-based models include variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016), autoregressive models (Oord et al., 2016b), and energy-based models (EBMs) (Lecun et al., 2006; Salakhutdinov et al., 2007). These models are trained by maximizing the data likelihood under the model, and unlike generative adversarial networks (GANs) (Goodfellow et al., 2014), their training is usually stable and they cover modes in data more faithfully by construction.
Among likelihood-based models, EBMs model the unnormalized data density by assigning low energy to high-probability regions in the data space (Xie et al., 2016; Du & Mordatch, 2019). EBMs are appealing because they require almost no restrictions on network architectures (unlike normalizing flows) and are therefore potentially very expressive. They also exhibit better robustness and out-of-distribution generalization (Du & Mordatch, 2019) because, during training, areas with high probability under the model but low probability under the data distribution are penalized explicitly. However, training and sampling EBMs usually requires MCMC, which can suffer from slow mode mixing and is computationally expensive when neural networks represent the energy function.
∗Work done during an internship at NVIDIA
On the other hand, VAEs are computationally more efficient for sampling than EBMs, as they do not require running expensive MCMC steps. VAEs also do not suffer from expressivity limitations that normalizing flows face (Dupont et al., 2019; Kong & Chaudhuri, 2020), and in fact, they have recently shown state-of-the-art generative results among non-autoregressive likelihood-based models (Vahdat & Kautz, 2020). Moreover, VAEs naturally come with a latent embedding of data that allows fast traverse of the data manifold by moving in the latent space and mapping the movements to the data space. However, VAEs tend to assign high probability to regions with low density under the data distribution. This often results in blurry or corrupted samples generated by VAEs. This also explains why VAEs often fail at out-of-distribution detection (Nalisnick et al., 2019).
In this paper, we propose a novel generative model as a symbiotic composition of a VAE and an EBM (VAEBM) that combines the best of both. VAEBM defines the generative distribution as the product of a VAE generator and an EBM component defined in pixel space. Intuitively, the VAE captures the majority of the mode structure in the data distribution. However, it may still generate samples from low-probability regions in the data space. Thus, the energy function focuses on refining the details and reducing the likelihood of non-data-like regions, which leads to significantly improved samples.
Moreover, we show that training VAEBM by maximizing the data likelihood easily decomposes into training the VAE and the EBM component separately. The VAE is trained using the reparameterization trick, while the EBM component requires sampling from the joint energy-based model during training. We show that we can sidestep the difficulties of sampling from VAEBM, by reparametrizing the MCMC updates using VAE’s latent variables. This allows MCMC chains to quickly traverse the model distribution and it speeds up mixing. As a result, we only need to run short chains to obtain approximate samples from the model, accelerating both training and sampling at test time.
Experimental results show that our model outperforms previous EBMs and state-of-the-art VAEs on image generation benchmarks including CIFAR-10, CelebA 64, LSUN Church 64, and CelebA HQ 256 by a large margin, reducing the gap with GANs. We also show that our model covers the modes in the data distribution faithfully, while having fewer spurious modes for out-of-distribution data. To the best of our knowledge, VAEBM is the first successful EBM applied to large images.
In summary, this paper makes the following contributions: i) We propose a new generative model using the product of a VAE generator and an EBM defined in the data space. ii) We show how training this model can be decomposed into training the VAE first, and then training the EBM component. iii) We show how MCMC sampling from VAEBM can be pushed to the VAE’s latent space, accelerating sampling. iv) We demonstrate state-of-the-art image synthesis quality among likelihood-based models, confirm complete mode coverage, and show strong out-of-distribution detection performance.
2 BACKGROUND
Energy-based Models: An EBM assumes pψ(x) to be a Gibbs distribution of the form pψ(x) = exp (−Eψ(x)) /Zψ , where Eψ(x) is the energy function with parameters ψ and Zψ =∫ x exp (−Eψ(x)) dx is the normalization constant. There is no restriction on the particular form of Eψ(x). Given a set of samples drawn from the data distribution pd(x), the goal of maximum likelihood learning is to maximize the log-likelihood L(ψ) = Ex∼pd(x) [log pψ(x)], which has the derivative (Woodford, 2006):
∂ψL(ψ) = Ex∼pd(x) [−∂ψEψ (x)] + Ex∼pψ(x) [∂ψEψ (x)] (1)
For the first expectation, the positive phase, samples are drawn from the data distribution pd(x), and for the second expectation, the negative phase, samples are drawn from the model pψ(x) itself. However, sampling from pψ(x) in the negative phase is itself intractable and approximate samples are usually drawn using MCMC. A commonly used MCMC algorithm is Langevin dynamics (LD) (Neal, 1993). Given an initial sample x0, Langevin dynamics iteratively updates it as:
x_{t+1} = x_t − (η/2) ∇x Eψ(x_t) + √η ωt,  ωt ∼ N(0, I),  (2)
where η is the step-size.1 In practice, Eq. 2 is run for finite iterations, which yields a Markov chain with an invariant distribution approximately close to the original target distribution.
1In principle one would require an accept/reject step to make it a rigorous MCMC algorithm, but for sufficiently small stepsizes this is not necessary in practice (Neal, 1993).
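A minimal sketch of one update of Eq. 2, assuming a PyTorch-style energy network that returns one scalar per input, is:

import torch

def langevin_step(x, energy_net, step_size):
    # One Langevin dynamics update: x <- x - (eta/2) * grad E(x) + sqrt(eta) * noise.
    x = x.detach().requires_grad_(True)
    energy = energy_net(x).sum()
    grad = torch.autograd.grad(energy, x)[0]
    with torch.no_grad():
        x = x - 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(x)
    return x.detach()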
Variational Autoencoders: VAEs define a generative model of the form pθ(x, z) = pθ(z)pθ(x|z), where z is the latent variable with prior pθ(z), and pθ(x|z) is a conditional distribution that models the likelihood of data x given z. The goal of training is to maximize the marginal log-likelihood log pθ(x) given a set of training examples. However since the marginalization is intractable, instead, the variational lower bound on log pθ(x) is maximized with qφ(z|x) as the approximate posterior:
log pθ(x) ≥ Ez∼qφ(z|x) [log pθ(x|z)]−DKL [qφ(z|x)‖pθ(z)] := Lvae(x, θ, φ). (3)
The state-of-the-art VAE, NVAE (Vahdat & Kautz, 2020), increases the expressivity of both prior and approximate posterior using hierarchical latent variables (Kingma et al., 2016), where z is decomposed into a set of disjoint groups, z = {z1, z2, . . . , zL}, and the prior pθ(z) = ∏_l pθ(zl|z<l) and the approximate posterior qφ(z|x) = ∏_l qφ(zl|z<l, x) are defined using autoregressive distributions over the groups. We refer readers to Vahdat & Kautz (2020) for more details.
3 ENERGY-BASED VARIATIONAL AUTOENCODERS
One of the main problems of VAEs is that they tend to assign high probability to regions in data space that have low probability under the data distribution. To tackle this issue, we propose VAEBM, a generative model constructed by the product of a VAE generator and an EBM component defined in the data space. This formulation allows our model to capture the main mode structure of the data distribution using the VAE. But when training the joint VAEBM, in the negative training phase we sample from the model itself and can discover non-data-like samples, whose likelihood is then reduced by the energy function explicitly. The energy function defined in the pixel space also shares similarities with discriminator in GANs, which can generate crisp and detailed images.
Formally, we define the generative model in VAEBM as hψ,θ(x, z) = (1/Zψ,θ) pθ(x, z) e^{−Eψ(x)}, where pθ(x, z) = pθ(z)pθ(x|z) is a VAE generator and Eψ(x) is a neural network-based energy function, operating only in the x space, and Zψ,θ = ∫ pθ(x) e^{−Eψ(x)} dx is the normalization constant. VAEBM is visualized in Fig. 1. Marginalizing out the latent variable z gives

hψ,θ(x) = (1/Zψ,θ) ∫ pθ(x, z) e^{−Eψ(x)} dz = (1/Zψ,θ) pθ(x) e^{−Eψ(x)}.  (4)
Given a training dataset, the parameters of VAEBM, ψ, θ, are trained by maximizing the marginal log-likelihood on the training data:
log hψ,θ(x) = log pθ(x) − Eψ(x) − logZψ,θ  (5)
 ≥ [Ez∼qφ(z|x)[log pθ(x|z)] − DKL(qφ(z|x)||p(z))] + [−Eψ(x) − logZψ,θ]  (6)
 = Lvae(x, θ, φ) + LEBM(x, ψ, θ),
where we replace log pθ(x) with the variational lower bound from Eq. 3. Eq. 6 forms the objective function for training VAEBM. The first term corresponds to the VAE objective and the second term corresponds to training the EBM component. Next, we discuss how we can optimize this objective.
3.1 TRAINING
The LEBM(x, ψ, θ) term in Eq. 6 is similar to the EBM training objective except that the log partition function depends on both ψ and θ. We show in Appendix A that logZψ,θ has the gradients ∂ψ logZψ,θ = Ex∼hψ,θ(x,z)[−∂ψEψ(x)] and ∂θ logZψ,θ = Ex∼hψ,θ(x,z)[∂θ log pθ(x)]. The first gradient can be estimated easily by evaluating the gradient of the energy function at samples drawn from the VAEBM model hψ,θ(x, z) using MCMC. However, the second term involves computing the intractable ∂/∂θ log pθ(x). In Appendix A, we show that estimating ∂/∂θ log pθ(x) requires sampling from the VAE’s posterior distribution, given model samples x ∼ hψ,θ(x, z). To avoid the computational complexity of estimating this term, for example with a second round of MCMC, we propose a two-stage algorithm for training VAEBM. In the first stage, we train the VAE model in our VAEBM by maximizing the Lvae(x, θ, φ) term in Eq. 6. This term is identical to the VAE’s objective, thus the parameters θ and φ are trained using the reparameterization trick as in Sec. 2. In the second stage, we keep the VAE model fixed and only train the EBM component. Since θ is now fixed, we only require optimizing LEBM(x, ψ, θ) w.r.t. ψ, the parameters of the energy function. The gradient of L(ψ) = Ex∼pd [LEBM(x, ψ, θ)] w.r.t. ψ is:

∂ψL(ψ) = Ex∼pd(x)[−∂ψEψ(x)] + Ex∼hψ,θ(x,z)[∂ψEψ(x)],  (7)

which decomposes into a positive and a negative phase, as discussed in Sec. 2.
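One stochastic step of this update can be sketched as follows; `sample_model` stands for the reparametrized MCMC sampler described next, and minimizing the loss below follows the gradient in Eq. 7 (ascent on the log-likelihood).

import torch

def energy_update_step(x_data, sample_model, energy_net, optimizer_psi):
    # Positive phase on training data, negative phase on (approximate) model samples.
    x_model = sample_model(x_data.shape[0]).detach()
    pos_energy = energy_net(x_data).mean()     # E_{p_d}[E_psi(x)]
    neg_energy = energy_net(x_model).mean()    # E_{h_{psi,theta}}[E_psi(x)]
    loss = pos_energy - neg_energy
    optimizer_psi.zero_grad()
    loss.backward()
    optimizer_psi.step()
    return loss.item()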
Reparametrized sampling in the negative phase: For gradient estimation in the negative phase, we can draw samples from the model using MCMC. Naively, we can perform ancestral sampling, first sampling from the prior pθ(z), then running MCMC for pθ(x|z)e^{−Eψ(x)} in x-space. This is problematic, since pθ(x|z) is often sharp and MCMC cannot mix when the conditioning z is fixed. In this work, we instead run the MCMC iterations in the joint space of z and x. Furthermore, we accelerate the sampling procedure using reparametrization for both x and the latent variables z. Recall that when sampling from the VAE, we first sample z ∼ p(z) and then x ∼ pθ(x|z). This sampling scheme can be reparametrized by sampling from a fixed noise distribution (e.g., (εz, εx) ∼ pε = N(0, I)) and deterministic transformations Tθ such that
z = T_θ^z(εz),  x = T_θ^x(z(εz), εx) = T_θ^x(T_θ^z(εz), εx).  (8)
Here, T_θ^z denotes the transformation defined by the prior that transforms noise εz into prior samples z, and T_θ^x represents the decoder that transforms noise εx into samples x, given prior samples z. We can apply the same reparameterization when sampling from hψ,θ(x, z). This corresponds to sampling (εx, εz) from the “base distribution”:

hψ,θ(εx, εz) ∝ e^{−Eψ(T_θ^x(T_θ^z(εz), εx))} pε(εx, εz),  (9)

and then transforming them to x and z via Eq. 8 (see Appendix B for the derivation). Note that εz and εx have the same scale, as pε(εx, εz) is a standard Normal distribution, while the scales of x and z can be very different. Thus, running MCMC sampling with this reparameterization in the (εx, εz)-space has the benefit that we do not need to tune the sampling scheme (e.g., step size in LD) for each variable. This is particularly helpful when z itself has multiple groups, as in our case.
The advantages of two-stage training: Besides avoiding the difficulties of estimating the full gradient of logZψ,θ, two-stage training has additional advantages. As we discussed above, updating ψ is computationally expensive, as each update requires an iterative MCMC procedure to draw samples from the model. The first stage of our training minimizes the distance between the VAE model and the data distribution, and in the second stage, the EBM further reduce the mismatch between the model and the data distribution. As the pre-trained VAE pθ(x) provides a good approximation to pd(x) already, we expect that a relatively small number of expensive updates for training ψ is needed. Moreover, the pre-trained VAE provides a latent space with an effectively lower dimensionality and a smoother distribution than the data distribution, which facilitates more efficient MCMC.
Alternative extensions: During the training of the energy function, we fix the VAE’s parameters. In Appendix C, we discuss a possible extension to our training objective that also updates the VAE.
4 RELATED WORK
Early variants of EBMs include models whose energy is defined over both data and auxiliary latent variables (Salakhutdinov & Hinton, 2009; Hinton, 2012), and models using only data variables (Hinton, 2002; Mnih & Hinton, 2005). Their energy functions are simple and they do not scale to high
dimensional data. Recently, it was shown that EBMs with deep neural networks as energy function can successfully model complex data such as natural images (Du & Mordatch, 2019; Nijkamp et al., 2019b;a). They are trained with maximum likelihood and only model the data variable. Joint EBMs (Grathwohl et al., 2020a; Liu & Abbeel, 2020) model the joint distribution of data and labels. In contrast, our VAEBM models the joint distribution of data and general latent variables.
Besides fundamental maximum likelihood training, other techniques to train EBMs exist, such as minimizing F-divergence (Yu et al., 2020a) or Stein discrepancy (Grathwohl et al., 2020b), contrastive estimation (Gutmann & Hyvärinen, 2010; Gao et al., 2020) and denoising score matching (Li et al., 2019). Recently, noise contrastive score networks and diffusion models have demonstrated high quality image synthesis (Song & Ermon, 2019; 2020; Ho et al., 2020). These models are also based on denoising score matching (DSM) (Vincent, 2011), but do not parameterize any explicit energy function and instead directly model the vector-valued score function. We view score-based models as alternatives to EBMs trained with maximum likelihood. Although they do not require iterative MCMC during training, they need very long sampling chains to anneal the noise when sampling from the model (& 1000 steps). Therefore, sample generation is extremely slow.
VAEBM is an EBM with a VAE component, and it shares similarities with work that builds connections between EBMs and other generative models. Zhao et al. (2017); Che et al. (2020); Song et al. (2020); Arbel et al. (2020) formulate EBMs with GANs, and use the discriminator to assign an energy. Xiao et al. (2020); Nijkamp et al. (2020) use normalizing flows that transport complex data to latent variables to facilitate MCMC sampling (Hoffman et al., 2019), and thus, their methods can be viewed as EBMs with flow component. However, due to their topology-preserving nature, normalizing flows cannot easily transport complex multimodal data, and their sample quality on images is limited. A few previous works combine VAEs and EBMs in different ways from ours. Pang et al. (2020) and Vahdat et al. (2018b;a; 2020) use EBMs for the prior distribution, and (Han et al., 2020; 2019) jointly learn a VAE and an EBM with independent sets of parameters by an adversarial game.
Finally, as we propose two-stage training, our work is related to post training of VAEs. Previous work in this direction learns the latent structure of pre-trained VAEs (Dai & Wipf, 2019; Xiao et al., 2019; Ghosh et al., 2020), and sampling from learned latent distributions improves sample quality. These methods cannot easily be extended to VAEs with hierarchical latent variables, as it is difficult to fit the joint distribution of multiple groups of variables. Our purpose for two-stage training is fundamentally different: we post-train an energy function to refine the distribution in data space.
5 EXPERIMENTS
In this section, we evaluate our proposed VAEBM through comprehensive experiments. Specifically, we benchmark sample quality in Sec. 5.1, provide detailed ablation studies on training techniques in Sec. 5.2, and study mode coverage of our model and test for spurious modes in Sec. 5.3. We choose NVAE (Vahdat & Kautz, 2020) as our VAE, which we pre-train, and use a simple ResNet as energy function Eψ , similar to Du & Mordatch (2019). We draw approximate samples both for training and testing by running short Langevin dynamics chains on the distribution in Eq. 9. Note that in NVAE, the prior distribution is a group-wise auto-regressive Gaussian, and the conditional pixel-wise distributions in x are also Gaussian. Therefore, the reparameterization corresponds to shift and scale transformations. For implementation details, please refer to Appendix E.
5.1 IMAGE GENERATION
In Table 1, we quantitatively compare the sample quality of VAEBM with different generative models on (unconditional) CIFAR-10. We adopt Inception Score (IS) (Salimans et al., 2016) and FID (Heusel et al., 2017) as quantitative metrics. Note that FID reflects the sample quality more faithfully, as potential problems have been reported for IS on CIFAR-10 (Barratt & Sharma, 2018).
We observe that our VAEBM outperforms previous EBMs and other explicit likelihood-based models by a large margin. Note that introducing persistent chains during training only leads to slight improvement, while Du & Mordatch (2019) rely on persistent chains with a sample replay buffer. This is likely due to the efficiency of sampling in latent space. Our model also produces significantly better samples than NVAE, the VAE component of our VAEBM, implying a significant impact of our proposed energy-based refinement. We also compare our model with state-of-the-art GANs and
recently proposed score-based models, and we obtain comparable or better results. Thus, we largely close the gap to GANs and score-models, while maintaining the desirable properties of models trained with maximum likelihood, such as fast sampling and better mode coverage.
Qualitative samples generated by our model are shown in Fig. 2a and intermediate samples along MCMC chains in Fig. 2b. We find that VAEBM generates good samples by running only a few MCMC steps. Initializing MCMC chains from the pre-trained VAE also helps quick equilibration.
We also train VAEBM on larger images, including CelebA 64, CelebA HQ 256 (Liu et al., 2015) and LSUN Church 64 (Yu et al., 2015). We report the FID scores for CelebA 64 and CelebA HQ 256 in Tables 2 and 3. On CelebA 64, our model obtains results comparable with the best GANs. Although our model obtains worse results than some advanced GANs on CelebA HQ 256, we significantly
reduce the gap between likelihood based models and GANs on this dataset. On LSUN Church 64, we obtain FID 13.51, which significantly improves the NVAE baseline FID 41.3. We show qualitative samples in Fig. 3. Appendix H contains additional samples and MCMC visualizations.
Our model can produce impressive samples by running very short MCMC chains, however, we find that when we run longer MCMC chains than training chains, most chains stay around the local mode without traversing between modes. We believe that the non-mixing is due to the long mixing time of Langevin Dynamics Neal et al. (2011), as Nijkamp et al. (2019b;a) also observe that models trained with short-run MCMC have non-mixing long-run chains. We conjecture that mixing can be improved by training and sampling with more advanced MCMC techniques that are known to mix faster, such as HMC Neal et al. (2011), and this will be left for future work.
Table 4: Comparison for IS and FID on CIFAR10 between several related training methods.
Model | IS↑ | FID↓
NVAE (Vahdat & Kautz) | 5.19 | 55.97
EBM on x (Du & Mordatch) | 5.85 | 48.89
EBM on x, MCMC init w/ NVAE | 7.28 | 29.32
WGAN w/ NVAE decoder | 7.41 | 20.39
VAEBM (ours) | 8.15 | 12.96
Table 5: Mode coverage on StackedMNIST.
Model | Modes↑ | KL↓
VEEGAN (Srivastava et al.) | 761.8 | 2.173
PacGAN (Lin et al.) | 992.0 | 0.277
PresGAN (Dieng et al.) | 999.6 | 0.115
InclusiveGAN (Yu et al.) | 997 | 0.200
StyleGAN2 (Karras et al.) | 940 | 0.424
VAEBM (ours) | 1000 | 0.087
5.2 ABLATION STUDIES
In Table 4, we compare VAEBM to several closely related baselines. All the experiments here are performed on CIFAR-10, and for simplicity, we use smaller models than those used in Table 1. Appendix F summarizes the experimental settings and Appendix G provides qualitative samples.
Data space vs. augmented space: One key difference between VAEBM and previous work such as Du & Mordatch (2019) is that our model is defined on the augmented space (x, z), while their EBM only involves x. Since we pre-train the VAE, one natural question is whether our strong results are due to good initial samples x from the VAE, which are used to launch the MCMC chains. To address this, we train an EBM purely on x as done in Du & Mordatch (2019). We also train another EBM only on x, but we initialize the MCMC chains with samples from the pre-trained NVAE instead of noise. As shown in line 3 of Table 4, this initialization helps the EBM which is defined only on x. However, VAEBM in the augmented space outperforms the EBMs on x only by a large margin.
Adversarial training vs. sampling: The gradient for ψ in Eq. 7 is similar to the gradient updates of WGAN’s discriminator (Arjovsky et al., 2017). The key difference is that we draw (approximate) samples from hψ(x) by MCMC, while WGAN draws negative samples from a generator (Che et al., 2020). WGAN updates the generator by playing an adversarial game, while we only update the energy function Eψ . We compare these two methods by training ψ and θ with the WGAN objective and initializing θ with the NVAE decoder. As shown in line 4 of Table 4, we significantly outperform the WGAN version of our model, implying the advantage of our method over adversarial training.
5.3 TEST FOR SPURIOUS OR MISSING MODES
We evaluate mode coverage on StackedMNIST. This dataset contains images generated by randomly choosing 3 MNIST images and stacking them along the RGB channels. Hence, the data distribution has 1000 modes. Following Lin et al. (2018), we report the number of covered modes and the KL divergence from the categorical distribution over 1000 categories from generated samples to true data (Table 5). VAEBM covers all modes and achieves the lowest KL divergence even compared to GANs that are specifically designed for this task. Hence, our model covers the modes more equally. We also plot the histogram of likelihoods for CIFAR-10 train/test images (Fig. 6, Appendix D) and present nearest neighbors of generated samples (Appendix I). We conclude that we do not overfit.
We evaluate spurious modes in our model by assessing its performance on out-of-distribution (OOD) detection. Specifically, we use VAEBM trained on CIFAR-10, and estimate unnormalized log hψ,θ(x) on in-distribution samples (from CIFAR-10 test set) and OOD samples from several datasets. Following Nalisnick et al. (2019), we use area under the ROC curve (AUROC) as quantitative metric, where high AUROC indicates that the model correctly assigns low density to OOD samples. In Table 6, we see that VAEBM has significantly higher AUROC than NVAE, justifying our argument that the energy function reduces the likelihood of non-data-like regions. VAEBM also performs better than IGEBM and JEM, while worse than HDGE. However, we note that JEM and HDGE are classifier-based models, known to be better for OOD detection (Liang et al., 2018).
5.4 EXACT LIKELIHOOD ESTIMATE ON 2D TOY DATA
VAEBM is an explicit likelihood model with a parameterized density function. However, like other energy-based models, the estimation of the exact likelihood is difficult due to the intractable partition
function logZ. One possible way to estimate the partition function is to use Annealed Importance Sampling (AIS) (Neal, 2001). However, using AIS to estimate logZ in high-dimensional spaces is difficult. In fact, Du & Mordatch (2019) report that the estimation does not converge in 2 days on CIFAR-10. Furthermore, AIS gives a stochastic lower bound on logZ, and therefore the likelihood computed with this estimated logZ would be an upper bound for the true likelihood. This makes the estimated likelihood hard to compare with the VAE’s likelihood estimate, which is usually a lower bound on the true likelihood (Burda et al., 2015).
As a result, to illustrate that our model corrects the distribution learned by the VAE and improves the test likelihood, we conduct additional experiments on a 2-D toy dataset. We use the 25-Gaussians dataset, which is generated by a mixture of 25 two-dimensional isotropic Gaussian distributions arranged in a grid. This dataset is also studied in Che et al. (2020). The encoder and decoder of the VAE have 4 fully connected layers with 256 hidden units, and the dimension of the latent variables is 20. Our energy function has 4 fully connected layers with 256 hidden units.
In the 2-D domain, the partition function logZ can be accurately estimated by a numerical integration scheme. For the VAE, we use the IWAE bound (Burda et al., 2015) with 10,000 posterior samples to estimate its likelihood. We use 100,000 test samples from the true distribution to evaluate the likelihood. Our VAEBM obtains the average log likelihood of -1.50 nats on test samples, which significantly improves the VAE, whose average test likelihood is -2.97 nats. As a reference, we also analytically compute the log likelihood of test samples under the true distribution, and the result is -1.10 nats.
We show samples from the true distribution, VAE and VAEBM in Figure 4. We observe that the VAEBM successfully corrects the distribution learned by the VAE and has better sample quality.
5.5 SAMPLING EFFICIENCY
Despite their impressive sample quality, denoising score matching models (Song & Ermon, 2019; Ho et al., 2020) are slow at sampling, often requiring ≳ 1000 MCMC steps. Since VAEBM uses short MCMC chains, it takes only 8.79 seconds to generate 50 CIFAR-10 samples, whereas NCSN (Song & Ermon, 2019) takes 107.9 seconds, which is about 12× slower (see Appendix J for details).
6 CONCLUSIONS
We propose VAEBM, an energy-based generative model in which the data distribution is defined jointly by a VAE and an energy network, the EBM component of the model. In this joint model, the EBM and the VAE form a symbiotic relationship: the EBM component refines the initial VAE-defined distribution, while the VAE’s latent embedding space is used to accelerate sampling from the joint model and therefore enables efficient training of the energy function. We show that our model can be trained effectively in two stages with a maximum likelihood objective and that we can efficiently sample from it by running short Langevin dynamics chains. Experimental results demonstrate strong generative performance on several image datasets. Future work includes further scaling up the model to larger images, applying it to other domains, and using more advanced sampling algorithms.
B REPARAMETRIZATION FOR EBM
Suppose we draw the re-parametrization variables (εx, εz) ∼ pε(εx, εz). For convenience, we denote

Tθ(εx, εz) = (T_θ^x(T_θ^z(εz), εx), T_θ^z(εz)) = (x, z).  (11)
Since Tθ is a deterministic and invertible transformation that maps ( x, z) to (x, z), by the change of variables formula, we can write
pθ(x, z) = pε(T_θ^{-1}(x, z)) |det(J_{T_θ^{-1}}(x, z))|,  (12)

where J_{T_θ^{-1}} is the Jacobian of T_θ^{-1}. Consider a Gaussian distribution as a simple example: if z ∼ N(µz, σz) and x|z ∼ N(µx(z), σx(z)), then

z = T_θ^z(εz) = µz + σz · εz,  x = T_θ^x(εx, z) = µx(z) + σx(z) · εx,

and

J_{T_θ^{-1}}(x, z) = [σx(z)^{-1}, σz^{-1}].
2Maximizing ELBO with respect to φ corresponds to minimizing DKL(qφ(z|x)||pθ(z|x)) while θ is fixed.
Recall that the generative model of our EBM is

hψ,θ(x, z) = e^{−Eψ(x)} pθ(x, z) / Zψ,θ.  (13)
We can apply the change of variables to hψ,θ(x, z) in a similar manner:

hψ,θ(εx, εz) = hψ,θ(Tθ(εx, εz)) |det(J_{Tθ}(εx, εz))|,  (14)

where J_{Tθ} is the Jacobian of Tθ.
Since we have the relation

J_{f^{-1}} ◦ f = J_f^{-1}  (15)

for an invertible function f, we have that
hψ,θ(εx, εz) = hψ,θ(Tθ(εx, εz)) |det(J_{Tθ}(εx, εz))|  (16)
 = (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pθ(Tθ(εx, εz)) |det(J_{Tθ}(εx, εz))|  (17)
 = (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pε(T_θ^{-1}(x, z)) |det(J_{T_θ^{-1}}(x, z))| |det(J_{Tθ}(εx, εz))|  (18)
 = (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pε(T_θ^{-1}(x, z))  (19)
 = (1/Zψ,θ) e^{−Eψ(Tθ(εx, εz))} pε(εx, εz),  (20)
which is the distribution in Eq. 9.
After we have obtained samples (εx, εz) from the distribution in Eq. 20, we obtain (x, z) by applying the transformation Tθ in Eq. 11.
B.1 COMPARISON OF SAMPLING IN (εx, εz)-SPACE AND IN (x, z)-SPACE
Above we show that sampling from hψ,θ(x, z) is equivalent to sampling from hψ,θ(εx, εz) and applying the appropriate variable transformation. Here, we further analyze the connections between sampling from these two distributions with Langevin dynamics. Since each component of x and z can be re-parametrized with scaling and translation of standard Gaussian noise, without loss of generality, we assume a variable c (c can be a single latent variable in z or a single pixel in x) and write

c = µ + σε.
Suppose we sample in the ε-space with energy function f on c and step size η. The update for ε is

ε_{t+1} = ε_t − (η/2) ∇ε f + √η ωt,  ωt ∼ N(0, I).
Now we plug ε_{t+1} into the expression of c while noting that ∇ε f = σ ∇c f. We obtain

c_{t+1} = µ + σ ε_{t+1} = µ + σ (ε_t − (η/2) ∇ε f + √η ωt) = µ + σ ε_t − (σ²η/2) ∇c f + √(ησ²) ωt
 = c_t − (σ²η/2) ∇c f + √(ησ²) ωt.
Therefore, we see that running Langevin dynamics in (εx, εz)-space is equivalent to running Langevin dynamics in (x, z)-space with step size for each component of z and x adjusted by its variance. However, considering the high dimensionality of x and z, the step size adjustment is difficult to implement.
The analysis above only considers a variable individually. More importantly, our latent variable z in the prior follows block-wise auto-regressive Gaussian distributions, so the variance of each component in zi depends on the value of z<i. We foresee that because of this dependency, using a fixed step size per component of z will not be effective, even when it is set differently for each component. In contrast, all the components in (εx, εz)-space have a unit variance. Hence, a universal step size for all the variables in this space can be used.
To further provide empirical evidence that adjusting the step size for each variable is necessary, we try sampling directly in (x, z)-space without adjusting the step size (i.e., use a universal step size for all variables). Qualitative results are presented in Figure 5. We examine several choices for the step size and we cannot obtain high-quality samples.
In conclusion, the re-parameterization provides an easy implementation to adjust step size for each variable, and the adjustment is shown to be crucial to obtain good samples.
C EXTENSION TO TRAINING OBJECTIVE
In the first stage of training VAEBM, the VAE model is trained by maximizing the training data log-likelihood which corresponds to minimizing an upper bound on DKL(pd(x)||pθ(x)) w.r.t. θ. In the second stage, when we are training the EBM component, we use the VAE model to sample from the joint VAEBM by running the MCMC updates in the joint space of z and x. Ideally, we may want to bring pθ(x) closer to hψ,θ(x) in the second stage, because when pθ(x) = hψ,θ(x), we will not need the expensive updates for ψ. We can bring pθ(x) closer to hψ,θ(x) by minimizing DKL(pθ(x)||hψ,θ(x)) with respect to θ which was recently discussed in the context of an EBMinterpretation of GANs by Che et al. (2020). To do so, we assume the target distribution hψ,θ(x) is fixed and create a copy of θ, named θ′, and we update θ′ by the gradient:
∇θ′DKL(pθ′(x)||hψ,θ(x)) = ∇θ′Ex∼pθ′ (x) [Eψ(x)] (21)
In other words, one update step for θ′ that minimizes DKL(pθ′(x)||hψ,θ(x)) w.r.t. θ′ can be easily done by drawing samples from pθ′(x) and minimizing the energy function w.r.t. θ′. Note that this approach is similar to the generator update in training Wasserstein GANs (Arjovsky et al., 2017). The above KL objective will encourage pθ(x) to model the dominant modes in hψ,θ(x). However, it may cause pθ(x) to drop modes.
C.1 DERIVATION
Our derivation largely follows Appendix A.2 of Che et al. (2020). Note that every time we update θ, we are actually taking the gradient w.r.t θ′, which can be viewed as a copy of θ and is initialized as θ. In particular, we should note that the θ in hψ,θ(x) is fixed. Therefore, we have
∇θ′ DKL(pθ′(x)||hψ,θ(x)) = ∇θ′ ∫ pθ′(x) [log pθ′(x) − log hψ,θ(x)] dx
 = ∫ [∇θ′ pθ′(x)] [log pθ′(x) − log hψ,θ(x)] dx + ∫ pθ′(x) [∇θ′ log pθ′(x) − ∇θ′ log hψ,θ(x)] dx  (22)
 = ∫ [∇θ′ pθ′(x)] [log pθ′(x) − log hψ,θ(x)] dx,  (23)

where the second term in Eq. 22 is 0 because log hψ,θ(x) does not depend on θ′ and the expectation of the score function is 0:

∫ pθ′(x) ∇θ′ log pθ′(x) dx = Ex∼pθ′(x)[∇θ′ log pθ′(x)] = 0.
Recall that θ′ has the same value as θ before the update, so
log pθ′(x) − log hψ,θ(x) = log [ pθ′(x) / (pθ(x) e^{−Eψ(x)}) ] + logZψ,θ = Eψ(x) + logZψ,θ.   (24)
Plugging Eq. 24 into Eq. 23, we have
∇θ′DKL(pθ′(x)||hψ,θ(x)) = ∫ ∇θ′pθ′(x) [Eψ(x) + logZψ,θ] dx = ∇θ′Ex∼pθ′(x) [Eψ(x)],   (25)
since ∫ ∇θ′pθ′(x) logZψ,θ dx = logZψ,θ ∇θ′ ∫ pθ′(x) dx = 0.
C.2 RESULTS
We train VAEBM with an additional loss term that updates the parameter θ to minimize DKL(pθ(x)||hψ,θ(x)), as explained above. Our experiment uses the same initial VAE as in Sec. 5.2, and details of the implementation are introduced in Appendix F. We obtain FID 14.0 and IS 8.05, which is similar to the results of plain VAEBM (FID 12.96 and IS 8.15). Therefore, we conclude that training the model by minimizing DKL(pθ(x)||hψ,θ(x)) does not improve the performance, and updating the decoder is not necessary. This is likely because the initial VAE is already pulled as closely as possible to the data distribution, which is also the target for the joint VAEBM hψ,θ(x).
D COMPARING LIKELIHOODS ON TRAIN AND TEST SET
In Figure 6, we plot a histogram of unnormalized log-likelihoods of 10k CIFAR-10 train set and test set images. We see that our model assigns similar likelihoods to both train and test set images. This indicates that VAEBM generalizes well to unseen data and covers modes in the training data well.
E IMPLEMENTATION DETAILS
In this section, we introduce the details of training and sampling from VAEBM.
NVAE: VAEBM uses NVAE as the pθ(x) component in the model. We train the NVAE with its official implementation3. We largely follow the default settings, with one major difference that we use a Gaussian decoder instead of a discrete logistic mixture decoder as in Vahdat & Kautz (2020). The reason for this is that we can run Langevin dynamics only with continuous variables. The number of latent variable groups for CIFAR-10, CelebA 64, LSUN Church 64 and CelebA HQ 256 are 30, 15, 15 and 20, respectively.
Network for energy function: We largely adopt the energy network structure for CIFAR-10 in Du & Mordatch (2019), and we increase the depth of the network for larger images. There are 2 major differences between our energy networks and the ones used in Du & Mordatch (2019): 1. we replace the LeakyReLU activations with Swish activations, as we found it improves training stability, and 2. we do not use spectral normalization (Miyato et al., 2018); instead, we use weight normalization with data-dependent initialization (Salimans & Kingma, 2016). The network structure for each dataset is presented in Table 7.
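For illustration only (not the exact architecture of Table 7), a residual block with these two modifications could look as follows; channel widths and the data-dependent initialization are omitted:

```python
import torch
import torch.nn as nn

def swish(x):
    return x * torch.sigmoid(x)

class EnergyResBlock(nn.Module):
    # One residual block of an energy network: Swish activations and
    # weight-normalized 3x3 convolutions.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.utils.weight_norm(nn.Conv2d(channels, channels, 3, padding=1))
        self.conv2 = nn.utils.weight_norm(nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        h = self.conv1(swish(x))
        h = self.conv2(swish(h))
        return x + h
```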
Training of energy function: We train the energy function by minimizing the negative log likelihood and an additional spectral regularization loss which penalizes the spectral norm of each convolutional layer in Eψ . The spectral regularization loss is also used in training NVAE, as we found
3https://github.com/NVlabs/NVAE
it helpful to regularize the sharpness of the energy network and better stabilize training. We use a coefficient 0.2 for the spectral regularization loss.
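The exact regularizer follows the NVAE implementation; the following is only a rough sketch under the assumption that it penalizes the squared largest singular value of each convolutional weight, estimated by power iteration:

```python
import torch
import torch.nn as nn

def spectral_reg(energy_net, coeff=0.2, n_iter=1):
    # Approximate sum of squared spectral norms over all conv layers.
    loss = 0.0
    for m in energy_net.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight.view(m.weight.size(0), -1)
            v = torch.randn(w.size(1), device=w.device)
            for _ in range(n_iter):
                u = torch.mv(w, v); u = u / (u.norm() + 1e-12)
                v = torch.mv(w.t(), u); v = v / (v.norm() + 1e-12)
            sigma = torch.dot(u, torch.mv(w, v))   # largest singular value (approximate)
            loss = loss + sigma ** 2
    return coeff * loss
```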
We summarize some key hyper-parameters we used to train VAEBM in Table 8.
On all datasets, we train VAEBM using the Adam optimizer (Kingma & Ba, 2015) and weight decay 3e−5. We use constant learning rates, shown in Table 8. Following Du & Mordatch (2019), we clip training gradients that are more than 3 standard deviations from the 2nd-order Adam parameters.
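One plausible reading of this clipping rule, written as a sketch (not the authors' exact code), clamps each gradient entry to ±3 standard deviations computed from Adam's second-moment estimate:

```python
import torch

def clip_grads_with_adam_stats(parameters, optimizer, num_std=3.0):
    for p in parameters:
        if p.grad is None:
            continue
        state = optimizer.state.get(p, {})
        if 'exp_avg_sq' not in state:   # no second-moment statistics yet (first iteration)
            continue
        std = state['exp_avg_sq'].sqrt() + 1e-12
        bound = num_std * std
        p.grad.data = torch.max(torch.min(p.grad.data, bound), -bound)
```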
While persistent sampling using a sample replay buffer has little effect on CIFAR-10, we found it to be useful on large images such as CelebA HQ 256. When we do not use persistent sampling, we always initialize the LD chains with (ε_x, ε_z) sampled from a standard Gaussian. When we use persistent sampling in training, we keep a sample replay buffer that only stores samples of ε_z, while ε_x is always initialized from a standard Gaussian. The size of the replay buffer is 10,000 for CIFAR-10 and LSUN Church 64, and 8,000 for CelebA HQ 256. At every training iteration, we initialize the MCMC chains on ε_z by drawing ε_z from the replay buffer with probability p and from a standard Gaussian with probability 1 − p. For CIFAR-10 and LSUN Church 64, we linearly increase p from 0 to 0.6 in 5,000 training iterations, and for CelebA HQ 256, we linearly increase p from 0 to 0.6 in 3,000 training iterations. The settings of Langevin dynamics are presented in Table 8.
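A sketch of the chain initialization described above (illustrative only; the buffer stores ε_z tensors, and the reuse decision is made per batch here for brevity):

```python
import random
import torch

def init_chains(buffer, batch_size, z_shape, x_shape, p_reuse):
    eps_x = torch.randn(batch_size, *x_shape)          # eps_x always from N(0, I)
    eps_z = torch.randn(batch_size, *z_shape)
    if len(buffer) >= batch_size and random.random() < p_reuse:
        idx = random.sample(range(len(buffer)), batch_size)
        eps_z = torch.stack([buffer[i] for i in idx])  # reuse stored eps_z samples
    return eps_x, eps_z
```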
We do not explicitly set the number of training iterations. Instead, we follow Du & Mordatch (2019) to train the energy network until we cannot generate realistic samples anymore. This happens when the model overfits the training data and hence energies of negative samples are much larger than energies of training data. Typically, training takes around 25,000 iterations (or 16 epochs) on CIFAR-10, 20,000 iterations (or 3 epochs) on CelebA 64, 20,000 iterations (or 5 epochs) on LSUN Church 64, and 9,000 iterations (or 5 epochs) on CelebA HQ 256.
Test time sampling: After training the model, we generate samples for evaluation by running Langevin dynamics with (ε_x, ε_z) initialized from a standard Gaussian, regardless of whether persistent sampling is used in training or not. We run slightly longer LD chains than during training to obtain the best sample quality. In particular, our reported values are obtained by running 16 steps of LD for CIFAR-10, 20 steps of LD for CelebA 64 and LSUN Church 64, and 24 steps for CelebA HQ 256. The step sizes are the same as the training step sizes.
On the CelebA HQ 256 dataset, we optionally use low-temperature initialization for better visual quality. To do this, we first draw samples from the VAE with low temperature and readjusted BN statistics, as introduced by Vahdat & Kautz (2020), and then initialize the MCMC chain with (ε_x, ε_z) obtained by encoding the low-temperature samples using the VAE's encoder without readjusted BN statistics.
Evaluation metrics: We use the official implementations of FID4 and IS5. We compute IS using 50k CIFAR 10 samples, and we compute FID between 50k generated samples and training images, except for CelebA HQ 256 where we use 30k training images (the CelebA HQ dataset contains only 30k samples).
F SETTINGS FOR ABLATION STUDY
In this section, we present the details of ablation experiments in Sec. 5.2. Throughout ablation experiments, we use a smaller NVAE with 20 groups of latent variables trained on CIFAR-10. We use the same network architectures for the energy network as in Table 7, with potentially different
4https://github.com/bioinf-jku/TTUR 5https://github.com/openai/improved-gan/tree/master/inception_score
normalization techniques discussed below. We spent significant efforts on improving each method we compare against, and we report the settings that led to the best results.
WGAN initialized with NVAE decoder: We initialize the generator with the pre-trained NVAE decoder, and the discriminator is initialized by a CIFAR-10 energy network with random weights. We use spectral normalization and batch normalization in the discriminator as we found them necessary for convergence. We update the discriminator using the Adam optimizer with constant learning rate 5e−5, and update the generator using the Adam optimizer with initial learning rate 5e−6 and cosine decay schedule. We train the generator and discriminator for 40k iterations, and we reach convergence of sample quality towards the end of training.
EBM on x, w/ or w/o initializing MCMC with NVAE samples: We train two EBMs on data space similar to Du & Mordatch (2019), where for one of them, we use the pre-trained NVAE to initialize the MCMC chains that draw samples during training. The setting for training these two EBMs are the same except for the initialization of MCMC. We use spectral normalization in the energy network and energy regularization in the training objective as done in Du & Mordatch (2019) because we found these modifications to improve performance. We train the energy function using the Adam optimizer with constant learning rate 1e−4. We train for 100k iterations, and we reach convergence of sample quality towards the end of training. During training, we draw samples from the model following the MCMC settings in Du & Mordatch (2019). In particular, we use persistent sampling and sample from the sample replay buffer with probability 0.95. We run 60 steps of Langevin dynamics to generate negative samples and we clip gradients to have individual value magnitudes of less than 0.01. We use a step size of 10 for each step of Langevin dynamics. For test time sampling, we generate samples by running 150 steps of LD with the same settings as during training.
VAEBM with DKL(pθ(x)||hψ,θ(x)) loss: We use the same network structure for Eψ as in VAEBM. We find persistent sampling significantly hurts the performance in this case, possibly due to the fact that the decoder is updated and hence the initial samples from the decoder change throughout training. Therefore, we do not use persistent sampling. We train the energy function using the Adam optimizer with constant learning rate 5e−5. We draw negative samples by running 10 steps of LD with step size 8e−5. We update the decoder with the gradient in Eq. 21 using the Adam optimizer with initial learning rate 5e−6 and a cosine decay schedule. For test time sampling, we run 15 steps of LD with step size 5e−6.
VAEBM: The training of VAEBM in this section largely follows the settings described in Appendix E. We use the same energy network as for CIFAR-10, and we train using the Adam optimizer with constant learning rate 5e−5. Again, we found that the performance of VAEBM with or without persistent sampling is similar. We adopt persistent sampling in this section because it is faster. The setting for the buffer is the same as in Appendix E. We run 5 steps of LD with step size 8e−5 during training, and 15 steps of LD with the same step size in testing.
G QUALITATIVE RESULTS OF ABLATION STUDY
In Figure 7, we show qualitative samples from models corresponding to each item in Table 4, as well as samples generated by VAEBM with additional DKL(pθ(x)||hψ,θ(x)) loss.
H ADDITIONAL QUALITATIVE RESULTS
We present additional qualitative results in this section.
Additional samples and visualizations of MCMC on CIFAR-10 are in Figures 8 and 9, respectively.
Additional samples on CelebA 64 are in Figure 10.
Additional samples on LSUN Church 64 are in Figure 11. We visualize the effect of running MCMC by displaying sample pairs before and after MCMC in Figure 12.
Additional samples on CelebA HQ 256 generated by initializing VAE samples with temperature 0.7 are shown in Figure 13. Samples generated by initializing VAE samples with full temperature 1.0 are shown in Figure 14. We visualize the effect of running MCMC by displaying sample pairs
before and after MCMC in Figure 15. Note that the samples used to visualize MCMC are generated by initializing MCMC chains with VAE samples with full temperature 1.0.
I NEAREST NEIGHBORS
We show nearest neighbors in the training set with generated samples on CIFAR-10 (in Figure 16 and 17) and CelebA HQ 256 (in Figure 18 and 19). We observe that the nearest neighbors are significantly different from the samples, suggesting that our models generalize well.
J SETTINGS OF SAMPLING SPEED EXPERIMENT
We use the official implementation and checkpoints of NCSN at https://github.com/ermongroup/ncsn. We run the experiments on a computer with a Titan RTX GPU. We use PyTorch 1.5.0 and CUDA 10.2. | 1. What is the novel approach proposed by the paper in the field of generative modeling?
2. What are the strengths of the proposed method, particularly in its simplicity and ease of understanding?
3. What are the weaknesses of the paper, especially in terms of its experimental comparisons and architectural designs?
4. How could the authors improve their experiments to provide more convincing evidence for the effectiveness of their proposed method? | Review | Review
Pros: This method proposes to use VAE+EBM for generative modeling. Unlike other VAE+GAN/EBM-like models, it adds an EBM after the VAE. The overall method is easy to understand and follow. To accelerate training, the authors also apply a replay buffer that stores previous samples for easier sampling.
Cons: In the experiments, the authors compare other models with VAEBM. It is reasonable to compare against the scores reported in other works; however, since the architecture is a fairly important factor (e.g., Swish instead of ReLU, residual blocks instead of plain CNNs, weight norm instead of spectral norm), it is also possible that the improvement is partially contributed by this architecture design. So I suggest that the authors use the same architecture design (chosen from one or two of the other models) across all tasks, and test whether the proposed method actually gains that much improvement.
ICLR | Title
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models
Abstract
Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly and are equipped with a latent space that enables fast traversal of the data manifold. However, VAEs tend to assign high probability density to regions in data space outside the actual data distribution and often fail at generating sharp images. In this paper, we propose VAEBM, a symbiotic composition of a VAE and an EBM that offers the best of both worlds. VAEBM captures the overall mode structure of the data distribution using a state-of-the-art VAE and it relies on its EBM component to explicitly exclude non-data-like regions from the model and refine the image samples. Moreover, the VAE component in VAEBM allows us to speed up MCMC updates by reparameterizing them in the VAE’s latent space. Our experimental results show that VAEBM outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin. It can generate high-quality images as large as 256×256 pixels with short MCMC chains. We also demonstrate that VAEBM provides complete mode coverage and performs well in out-of-distribution detection.
1 INTRODUCTION
Deep generative learning is a central problem in machine learning. It has found diverse applications, ranging from image (Brock et al., 2018; Karras et al., 2019; Razavi et al., 2019), music (Dhariwal et al., 2020) and speech (Ping et al., 2020; Oord et al., 2016a) generation, distribution alignment across domains (Zhu et al., 2017; Liu et al., 2017; Tzeng et al., 2017) and semi-supervised learning (Kingma et al., 2014; Izmailov et al., 2020) to 3D point cloud generation (Yang et al., 2019), light-transport simulation (Müller et al., 2019), molecular modeling (Sanchez-Lengeling & AspuruGuzik, 2018; Noé et al., 2019) and equivariant sampling in theoretical physics (Kanwar et al., 2020).
Among competing frameworks, likelihood-based models include variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016), autoregressive models (Oord et al., 2016b), and energy-based models (EBMs) (Lecun et al., 2006; Salakhutdinov et al., 2007). These models are trained by maximizing the data likelihood under the model, and unlike generative adversarial networks (GANs) (Goodfellow et al., 2014), their training is usually stable and they cover modes in data more faithfully by construction.
Among likelihood-based models, EBMs model the unnormalized data density by assigning low energy to high-probability regions in the data space (Xie et al., 2016; Du & Mordatch, 2019). EBMs are appealing because they require almost no restrictions on network architectures (unlike normalizing flows) and are therefore potentially very expressive. They also exhibit better robustness and out-of-distribution generalization (Du & Mordatch, 2019) because, during training, areas with high probability under the model but low probability under the data distribution are penalized explicitly. However, training and sampling EBMs usually requires MCMC, which can suffer from slow mode mixing and is computationally expensive when neural networks represent the energy function.
∗Work done during an internship at NVIDIA
On the other hand, VAEs are computationally more efficient for sampling than EBMs, as they do not require running expensive MCMC steps. VAEs also do not suffer from expressivity limitations that normalizing flows face (Dupont et al., 2019; Kong & Chaudhuri, 2020), and in fact, they have recently shown state-of-the-art generative results among non-autoregressive likelihood-based models (Vahdat & Kautz, 2020). Moreover, VAEs naturally come with a latent embedding of data that allows fast traverse of the data manifold by moving in the latent space and mapping the movements to the data space. However, VAEs tend to assign high probability to regions with low density under the data distribution. This often results in blurry or corrupted samples generated by VAEs. This also explains why VAEs often fail at out-of-distribution detection (Nalisnick et al., 2019).
In this paper, we propose a novel generative model as a symbiotic composition of a VAE and an EBM (VAEBM) that combines the best of both. VAEBM defines the generative distribution as the product of a VAE generator and an EBM component defined in pixel space. Intuitively, the VAE captures the majority of the mode structure in the data distribution. However, it may still generate samples from low-probability regions in the data space. Thus, the energy function focuses on refining the details and reducing the likelihood of non-data-like regions, which leads to significantly improved samples.
Moreover, we show that training VAEBM by maximizing the data likelihood easily decomposes into training the VAE and the EBM component separately. The VAE is trained using the reparameterization trick, while the EBM component requires sampling from the joint energy-based model during training. We show that we can sidestep the difficulties of sampling from VAEBM, by reparametrizing the MCMC updates using VAE’s latent variables. This allows MCMC chains to quickly traverse the model distribution and it speeds up mixing. As a result, we only need to run short chains to obtain approximate samples from the model, accelerating both training and sampling at test time.
Experimental results show that our model outperforms previous EBMs and state-of-the-art VAEs on image generation benchmarks including CIFAR-10, CelebA 64, LSUN Church 64, and CelebA HQ 256 by a large margin, reducing the gap with GANs. We also show that our model covers the modes in the data distribution faithfully, while having fewer spurious modes for out-of-distribution data. To the best of our knowledge, VAEBM is the first successful EBM applied to large images.
In summary, this paper makes the following contributions: i) We propose a new generative model using the product of a VAE generator and an EBM defined in the data space. ii) We show how training this model can be decomposed into training the VAE first, and then training the EBM component. iii) We show how MCMC sampling from VAEBM can be pushed to the VAE’s latent space, accelerating sampling. iv) We demonstrate state-of-the-art image synthesis quality among likelihood-based models, confirm complete mode coverage, and show strong out-of-distribution detection performance.
2 BACKGROUND
Energy-based Models: An EBM assumes pψ(x) to be a Gibbs distribution of the form pψ(x) = exp (−Eψ(x)) /Zψ , where Eψ(x) is the energy function with parameters ψ and Zψ =∫ x exp (−Eψ(x)) dx is the normalization constant. There is no restriction on the particular form of Eψ(x). Given a set of samples drawn from the data distribution pd(x), the goal of maximum likelihood learning is to maximize the log-likelihood L(ψ) = Ex∼pd(x) [log pψ(x)], which has the derivative (Woodford, 2006):
∂ψL(ψ) = Ex∼pd(x) [−∂ψEψ (x)] + Ex∼pψ(x) [∂ψEψ (x)] (1)
For the first expectation, the positive phase, samples are drawn from the data distribution pd(x), and for the second expectation, the negative phase, samples are drawn from the model pψ(x) itself. However, sampling from pψ(x) in the negative phase is itself intractable and approximate samples are usually drawn using MCMC. A commonly used MCMC algorithm is Langevin dynamics (LD) (Neal, 1993). Given an initial sample x0, Langevin dynamics iteratively updates it as:
x_{t+1} = x_t − (η/2) ∇_x Eψ(x_t) + √η ω_t,  ω_t ∼ N(0, I),   (2)
where η is the step-size.1 In practice, Eq. 2 is run for finite iterations, which yields a Markov chain with an invariant distribution approximately close to the original target distribution.
1In principle one would require an accept/reject step to make it a rigorous MCMC algorithm, but for sufficiently small stepsizes this is not necessary in practice (Neal, 1993).
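For concreteness, a minimal sketch of this finite-step procedure (our own illustration in PyTorch-style code, not tied to any particular released implementation):

```python
import torch

def langevin_dynamics(energy_fn, x_init, n_steps, eta):
    # Finite-step Langevin dynamics (Eq. 2) targeting p(x) proportional to exp(-E(x)).
    x = x_init.clone().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(energy_fn(x).sum(), x)[0]
        noise = torch.randn_like(x)
        x = (x - 0.5 * eta * grad + eta ** 0.5 * noise).detach().requires_grad_(True)
    return x.detach()
```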
Variational Autoencoders: VAEs define a generative model of the form pθ(x, z) = pθ(z)pθ(x|z), where z is the latent variable with prior pθ(z), and pθ(x|z) is a conditional distribution that models the likelihood of data x given z. The goal of training is to maximize the marginal log-likelihood log pθ(x) given a set of training examples. However since the marginalization is intractable, instead, the variational lower bound on log pθ(x) is maximized with qφ(z|x) as the approximate posterior:
log pθ(x) ≥ Ez∼qφ(z|x) [log pθ(x|z)]−DKL [qφ(z|x)‖pθ(z)] := Lvae(x, θ, φ). (3)
The state-of-the-art VAE, NVAE (Vahdat & Kautz, 2020), increases the expressivity of both prior and approximate posterior using hierarchical latent variables (Kingma et al., 2016), where z is decomposed into a set of disjoint groups, z = {z_1, z_2, . . . , z_L}, and the prior pθ(z) = ∏_l pθ(z_l|z_{<l}) and the approximate posterior qφ(z|x) = ∏_l qφ(z_l|z_{<l}, x) are defined using autoregressive distributions over the groups. We refer readers to Vahdat & Kautz (2020) for more details.
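For reference, a minimal sketch of the bound in Eq. 3 for a single latent group (the hierarchical case simply sums such KL terms over groups); here `encoder`, `decoder`, and `prior` are assumed to return or be torch.distributions objects, and the tensor shapes are illustrative:

```python
import torch

def elbo(x, encoder, decoder, prior):
    q = encoder(x)                                          # q_phi(z|x)
    z = q.rsample()                                         # reparameterization trick
    log_px_z = decoder(z).log_prob(x).sum(dim=[1, 2, 3])    # log p_theta(x|z)
    kl = torch.distributions.kl_divergence(q, prior).sum(dim=1)
    return (log_px_z - kl).mean()                           # L_vae(x, theta, phi)
```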
3 ENERGY-BASED VARIATIONAL AUTOENCODERS
One of the main problems of VAEs is that they tend to assign high probability to regions in data space that have low probability under the data distribution. To tackle this issue, we propose VAEBM, a generative model constructed by the product of a VAE generator and an EBM component defined in the data space. This formulation allows our model to capture the main mode structure of the data distribution using the VAE. But when training the joint VAEBM, in the negative training phase we sample from the model itself and can discover non-data-like samples, whose likelihood is then reduced by the energy function explicitly. The energy function defined in the pixel space also shares similarities with discriminator in GANs, which can generate crisp and detailed images.
Formally, we define the generative model in VAEBM as hψ,θ(x, z) = (1/Zψ,θ) pθ(x, z) e^{−Eψ(x)}, where pθ(x, z) = pθ(z)pθ(x|z) is a VAE generator and Eψ(x) is a neural network-based energy function, operating only in the x space, and Zψ,θ = ∫ pθ(x) e^{−Eψ(x)} dx is the normalization constant. VAEBM is visualized in Fig. 1. Marginalizing out the latent variable z gives
hψ,θ(x) = (1/Zψ,θ) ∫ pθ(x, z) e^{−Eψ(x)} dz = (1/Zψ,θ) pθ(x) e^{−Eψ(x)}.   (4)
Given a training dataset, the parameters of VAEBM, ψ, θ, are trained by maximizing the marginal log-likelihood on the training data:
log hψ,θ(x) = log pθ(x) − Eψ(x) − logZψ,θ   (5)
≥ {Ez∼qφ(z|x)[log pθ(x|z)] − DKL(qφ(z|x)||p(z))} + {−Eψ(x) − logZψ,θ} = Lvae(x, θ, φ) + LEBM(x, ψ, θ),   (6)
where we replace log pθ(x) with the variational lower bound from Eq. 3. Eq. 6 forms the objective function for training VAEBM. The first term corresponds to the VAE objective and the second term corresponds to training the EBM component. Next, we discuss how we can optimize this objective.
3.1 TRAINING
The LEBM(x, ψ, θ) term in Eq. 6 is similar to the EBM training objective except that the log partition function depends on both ψ and θ. We show in Appendix A that logZψ,θ has the gradients ∂ψ logZψ,θ = Ex∼hψ,θ(x,z) [−∂ψEψ(x)] and ∂θ logZψ,θ = Ex∼hψ,θ(x,z) [∂θ log pθ(x)]. The first gradient can be estimated easily by evaluating the gradient of the energy function at samples drawn from the VAEBM model hψ,θ(x, z) using MCMC. However, the second term involves computing the intractable ∂/∂θ log pθ(x). In Appendix A, we show that estimating ∂/∂θ log pθ(x) requires sampling from the VAE's posterior distribution, given model samples x ∼ hψ,θ(x, z). To avoid the computational complexity of estimating this term, for example with a second round of MCMC, we propose a two-stage algorithm for training VAEBM. In the first stage, we train the VAE model in our VAEBM by maximizing the Lvae(x, θ, φ) term in Eq. 6. This term is identical to the VAE's objective; thus, the parameters θ and φ are trained using the reparameterization trick as in Sec. 2. In the second stage, we keep the VAE model fixed and only train the EBM component. Since θ is now fixed, we only require optimizing LEBM(x, ψ, θ) w.r.t. ψ, the parameters of the energy function. The gradient of L(ψ) = Ex∼pd [LEBM(x, ψ, θ)] w.r.t. ψ is: ∂ψL(ψ) = Ex∼pd(x) [−∂ψEψ(x)] + Ex∼hψ,θ(x,z) [∂ψEψ(x)],   (7) which decomposes into a positive and a negative phase, as discussed in Sec. 2.
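In code, one update of the energy function therefore looks like the usual contrastive EBM step (sketch only; `x_model` denotes approximate samples from hψ,θ(x, z) obtained with the short-run MCMC described next):

```python
def energy_update(energy_fn, optimizer, x_data, x_model):
    # Maximizing Eq. 7 by gradient ascent is equivalent to minimizing this loss:
    # push down the energy of data, push up the energy of model samples.
    loss = energy_fn(x_data).mean() - energy_fn(x_model).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```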
Reparametrized sampling in the negative phase: For gradient estimation in the negative phase, we can draw samples from the model using MCMC. Naively, we can perform ancestral sampling, first sampling from the prior pθ(z), then running MCMC for pθ(x|z)e^{−Eψ(x)} in x-space. This is problematic, since pθ(x|z) is often sharp and MCMC cannot mix when the conditioning z is fixed. In this work, we instead run the MCMC iterations in the joint space of z and x. Furthermore, we accelerate the sampling procedure using reparametrization for both x and the latent variables z. Recall that when sampling from the VAE, we first sample z ∼ p(z) and then x ∼ pθ(x|z). This sampling scheme can be reparametrized by sampling from a fixed noise distribution (e.g., (ε_z, ε_x) ∼ p_ε = N(0, I)) and deterministic transformations Tθ such that
z = T^z_θ(ε_z), x = T^x_θ(z(ε_z), ε_x) = T^x_θ(T^z_θ(ε_z), ε_x).   (8)
Here, T^z_θ denotes the transformation defined by the prior that transforms noise ε_z into prior samples z, and T^x_θ represents the decoder that transforms noise ε_x into samples x, given prior samples z. We can apply the same reparameterization when sampling from hψ,θ(x, z). This corresponds to sampling (ε_x, ε_z) from the "base distribution":
hψ,θ(ε_x, ε_z) ∝ e^{−Eψ(T^x_θ(T^z_θ(ε_z), ε_x))} p_ε(ε_x, ε_z),   (9)
and then transforming them to x and z via Eq. 8 (see Appendix B for the derivation). Note that ε_z and ε_x have the same scale, as p_ε(ε_x, ε_z) is a standard Normal distribution, while the scales of x and z can be very different. Thus, running MCMC sampling with this reparameterization in the (ε_x, ε_z)-space has the benefit that we do not need to tune the sampling scheme (e.g., step size in LD) for each variable. This is particularly helpful when z itself has multiple groups, as in our case.
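A compact sketch of this reparametrized sampler (illustrative; `vae.decode_from_noise` stands for the deterministic map Tθ in Eq. 8, and ε_x, ε_z are flattened into a single tensor for brevity):

```python
import torch

def sample_vaebm(vae, energy_fn, batch_size, noise_dim, n_steps, eta):
    # Langevin dynamics on eps = (eps_x, eps_z) for Eq. 9: the negative log-density is
    # E_psi(T_theta(eps)) + 0.5 * ||eps||^2, up to an additive constant.
    eps = torch.randn(batch_size, noise_dim, requires_grad=True)
    for _ in range(n_steps):
        x = vae.decode_from_noise(eps)
        neg_logp = energy_fn(x).sum() + 0.5 * (eps ** 2).sum()
        grad = torch.autograd.grad(neg_logp, eps)[0]
        eps = (eps - 0.5 * eta * grad + eta ** 0.5 * torch.randn_like(eps)).detach().requires_grad_(True)
    return vae.decode_from_noise(eps).detach()
```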
The advantages of two-stage training: Besides avoiding the difficulties of estimating the full gradient of logZψ,θ, two-stage training has additional advantages. As we discussed above, updating ψ is computationally expensive, as each update requires an iterative MCMC procedure to draw samples from the model. The first stage of our training minimizes the distance between the VAE model and the data distribution, and in the second stage, the EBM further reduces the mismatch between the model and the data distribution. As the pre-trained VAE pθ(x) provides a good approximation to pd(x) already, we expect that a relatively small number of expensive updates for training ψ is needed. Moreover, the pre-trained VAE provides a latent space with an effectively lower dimensionality and a smoother distribution than the data distribution, which facilitates more efficient MCMC.
Alternative extensions: During the training of the energy function, we fix the VAE’s parameters. In Appendix C, we discuss a possible extension to our training objective that also updates the VAE.
4 RELATED WORK
Early variants of EBMs include models whose energy is defined over both data and auxiliary latent variables (Salakhutdinov & Hinton, 2009; Hinton, 2012), and models using only data variables (Hinton, 2002; Mnih & Hinton, 2005). Their energy functions are simple and they do not scale to high
dimensional data. Recently, it was shown that EBMs with deep neural networks as energy function can successfully model complex data such as natural images (Du & Mordatch, 2019; Nijkamp et al., 2019b;a). They are trained with maximum likelihood and only model the data variable. Joint EBMs (Grathwohl et al., 2020a; Liu & Abbeel, 2020) model the joint distribution of data and labels. In contrast, our VAEBM models the joint distribution of data and general latent variables.
Besides fundamental maximum likelihood training, other techniques to train EBMs exist, such as minimizing F-divergence (Yu et al., 2020a) or Stein discrepancy (Grathwohl et al., 2020b), contrastive estimation (Gutmann & Hyvärinen, 2010; Gao et al., 2020) and denoising score matching (Li et al., 2019). Recently, noise contrastive score networks and diffusion models have demonstrated high quality image synthesis (Song & Ermon, 2019; 2020; Ho et al., 2020). These models are also based on denoising score matching (DSM) (Vincent, 2011), but do not parameterize any explicit energy function and instead directly model the vector-valued score function. We view score-based models as alternatives to EBMs trained with maximum likelihood. Although they do not require iterative MCMC during training, they need very long sampling chains to anneal the noise when sampling from the model (& 1000 steps). Therefore, sample generation is extremely slow.
VAEBM is an EBM with a VAE component, and it shares similarities with work that builds connections between EBMs and other generative models. Zhao et al. (2017); Che et al. (2020); Song et al. (2020); Arbel et al. (2020) formulate EBMs with GANs, and use the discriminator to assign an energy. Xiao et al. (2020); Nijkamp et al. (2020) use normalizing flows that transport complex data to latent variables to facilitate MCMC sampling (Hoffman et al., 2019), and thus, their methods can be viewed as EBMs with flow component. However, due to their topology-preserving nature, normalizing flows cannot easily transport complex multimodal data, and their sample quality on images is limited. A few previous works combine VAEs and EBMs in different ways from ours. Pang et al. (2020) and Vahdat et al. (2018b;a; 2020) use EBMs for the prior distribution, and (Han et al., 2020; 2019) jointly learn a VAE and an EBM with independent sets of parameters by an adversarial game.
Finally, as we propose two-stage training, our work is related to post training of VAEs. Previous work in this direction learns the latent structure of pre-trained VAEs (Dai & Wipf, 2019; Xiao et al., 2019; Ghosh et al., 2020), and sampling from learned latent distributions improves sample quality. These methods cannot easily be extended to VAEs with hierarchical latent variables, as it is difficult to fit the joint distribution of multiple groups of variables. Our purpose for two-stage training is fundamentally different: we post-train an energy function to refine the distribution in data space.
5 EXPERIMENTS
In this section, we evaluate our proposed VAEBM through comprehensive experiments. Specifically, we benchmark sample quality in Sec. 5.1, provide detailed ablation studies on training techniques in Sec. 5.2, and study mode coverage of our model and test for spurious modes in Sec. 5.3. We choose NVAE (Vahdat & Kautz, 2020) as our VAE, which we pre-train, and use a simple ResNet as energy function Eψ , similar to Du & Mordatch (2019). We draw approximate samples both for training and testing by running short Langevin dynamics chains on the distribution in Eq. 9. Note that in NVAE, the prior distribution is a group-wise auto-regressive Gaussian, and the conditional pixel-wise distributions in x are also Gaussian. Therefore, the reparameterization corresponds to shift and scale transformations. For implementation details, please refer to Appendix E.
5.1 IMAGE GENERATION
In Table 1, we quantitatively compare the sample quality of VAEBM with different generative models on (unconditional) CIFAR-10. We adopt Inception Score (IS) (Salimans et al., 2016) and FID (Heusel et al., 2017) as quantitative metrics. Note that FID reflects the sample quality more faithfully, as potential problems have been reported for IS on CIFAR-10 (Barratt & Sharma, 2018).
We observe that our VAEBM outperforms previous EBMs and other explicit likelihood-based models by a large margin. Note that introducing persistent chains during training only leads to slight improvement, while Du & Mordatch (2019) rely on persistent chains with a sample replay buffer. This is likely due to the efficiency of sampling in latent space. Our model also produces significantly better samples than NVAE, the VAE component of our VAEBM, implying a significant impact of our proposed energy-based refinement. We also compare our model with state-of-the-art GANs and
recently proposed score-based models, and we obtain comparable or better results. Thus, we largely close the gap to GANs and score-models, while maintaining the desirable properties of models trained with maximum likelihood, such as fast sampling and better mode coverage.
Qualitative samples generated by our model are shown in Fig. 2a and intermediate samples along MCMC chains in Fig. 2b. We find that VAEBM generates good samples by running only a few MCMC steps. Initializing MCMC chains from the pre-trained VAE also helps quick equilibration.
We also train VAEBM on larger images, including CelebA 64, CelebA HQ 256 (Liu et al., 2015) and LSUN Church 64 (Yu et al., 2015). We report the FID scores for CelebA 64 and CelebA HQ 256 in Tables 2 and 3. On CelebA 64, our model obtains results comparable with the best GANs. Although our model obtains worse results than some advanced GANs on CelebA HQ 256, we significantly
reduce the gap between likelihood based models and GANs on this dataset. On LSUN Church 64, we obtain FID 13.51, which significantly improves the NVAE baseline FID 41.3. We show qualitative samples in Fig. 3. Appendix H contains additional samples and MCMC visualizations.
Our model can produce impressive samples by running very short MCMC chains, however, we find that when we run longer MCMC chains than training chains, most chains stay around the local mode without traversing between modes. We believe that the non-mixing is due to the long mixing time of Langevin Dynamics Neal et al. (2011), as Nijkamp et al. (2019b;a) also observe that models trained with short-run MCMC have non-mixing long-run chains. We conjecture that mixing can be improved by training and sampling with more advanced MCMC techniques that are known to mix faster, such as HMC Neal et al. (2011), and this will be left for future work.
Table 4: Comparison for IS and FID on CIFAR10 between several related training methods.
Model | IS↑ | FID↓
NVAE (Vahdat & Kautz) | 5.19 | 55.97
EBM on x (Du & Mordatch) | 5.85 | 48.89
EBM on x, MCMC init w/ NVAE | 7.28 | 29.32
WGAN w/ NVAE decoder | 7.41 | 20.39
VAEBM (ours) | 8.15 | 12.96
Table 5: Mode coverage on StackedMNIST.
Model | Modes↑ | KL↓
VEEGAN (Srivastava et al.) | 761.8 | 2.173
PacGAN (Lin et al.) | 992.0 | 0.277
PresGAN (Dieng et al.) | 999.6 | 0.115
InclusiveGAN (Yu et al.) | 997 | 0.200
StyleGAN2 (Karras et al.) | 940 | 0.424
VAEBM (ours) | 1000 | 0.087
5.2 ABLATION STUDIES
In Table 4, we compare VAEBM to several closely related baselines. All the experiments here are performed on CIFAR-10, and for simplicity, we use smaller models than those used in Table 1. Appendix F summarizes the experimental settings and Appendix G provides qualitative samples.
Data space vs. augmented space: One key difference between VAEBM and previous work such as Du & Mordatch (2019) is that our model is defined on the augmented space (x, z), while their EBM only involves x. Since we pre-train the VAE, one natural question is whether our strong results are due to good initial samples x from the VAE, which are used to launch the MCMC chains. To address this, we train an EBM purely on x as done in Du & Mordatch (2019). We also train another EBM only on x, but we initialize the MCMC chains with samples from the pre-trained NVAE instead of noise. As shown in line 3 of Table 4, this initialization helps the EBM which is defined only on x. However, VAEBM in the augmented space outperforms the EBMs on x only by a large margin.
Adversarial training vs. sampling: The gradient for ψ in Eq. 7 is similar to the gradient updates of WGAN’s discriminator (Arjovsky et al., 2017). The key difference is that we draw (approximate) samples from hψ(x) by MCMC, while WGAN draws negative samples from a generator (Che et al., 2020). WGAN updates the generator by playing an adversarial game, while we only update the energy function Eψ . We compare these two methods by training ψ and θ with the WGAN objective and initializing θ with the NVAE decoder. As shown in line 4 of Table 4, we significantly outperform the WGAN version of our model, implying the advantage of our method over adversarial training.
5.3 TEST FOR SPURIOUS OR MISSING MODES
We evaluate mode coverage on StackedMNIST. This dataset contains images generated by randomly choosing 3 MNIST images and stacking them along the RGB channels. Hence, the data distribution has 1000 modes. Following Lin et al. (2018), we report the number of covered modes and the KL divergence from the categorical distribution over 1000 categories from generated samples to true data (Table 5). VAEBM covers all modes and achieves the lowest KL divergence even compared to GANs that are specifically designed for this task. Hence, our model covers the modes more equally. We also plot the histogram of likelihoods for CIFAR-10 train/test images (Fig. 6, Appendix D) and present nearest neighbors of generated samples (Appendix I). We conclude that we do not overfit.
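For clarity, the metric can be computed as in the following sketch, where `pred_modes` holds the mode index (0-999) assigned to each generated sample by a pre-trained MNIST classifier applied to the three stacked channels:

```python
import numpy as np

def stacked_mnist_metrics(pred_modes, n_modes=1000):
    counts = np.bincount(pred_modes, minlength=n_modes)
    covered = int((counts > 0).sum())                       # number of covered modes
    q = counts / counts.sum()                               # distribution of generated samples
    p = 1.0 / n_modes                                       # true mode distribution is uniform
    kl = float(np.sum(q[q > 0] * np.log(q[q > 0] / p)))     # KL(generated || true)
    return covered, kl
```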
We evaluate spurious modes in our model by assessing its performance on out-of-distribution (OOD) detection. Specifically, we use VAEBM trained on CIFAR-10, and estimate unnormalized log hψ,θ(x) on in-distribution samples (from CIFAR-10 test set) and OOD samples from several datasets. Following Nalisnick et al. (2019), we use area under the ROC curve (AUROC) as quantitative metric, where high AUROC indicates that the model correctly assigns low density to OOD samples. In Table 6, we see that VAEBM has significantly higher AUROC than NVAE, justifying our argument that the energy function reduces the likelihood of non-data-like regions. VAEBM also performs better than IGEBM and JEM, while worse than HDGE. However, we note that JEM and HDGE are classifier-based models, known to be better for OOD detection (Liang et al., 2018).
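The AUROC computation itself is straightforward; a sketch (assuming scikit-learn is available) using the unnormalized log hψ,θ(x) = log pθ(x) − Eψ(x) as the in-distribution score:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(log_h_in, log_h_out):
    scores = np.concatenate([log_h_in, log_h_out])
    labels = np.concatenate([np.ones(len(log_h_in)), np.zeros(len(log_h_out))])
    return roc_auc_score(labels, scores)   # higher score should indicate in-distribution
```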
5.4 EXACT LIKELIHOOD ESTIMATE ON 2D TOY DATA
VAEBM is an explicit likelihood model with a parameterized density function. However, like other energy-based models, the estimation of the exact likelihood is difficult due to the intractable partition
function logZ. One possible way to estimate the partition function is to use Annealed Importance Sampling (AIS) (Neal, 2001). However, using AIS to estimate logZ in high-dimensional spaces is difficult. In fact, Du & Mordatch (2019) report that the estimation does not converge in 2 days on CIFAR-10. Furthermore, AIS gives a stochastic lower bound on logZ, and therefore the likelihood computed with this estimated logZ would be an upper bound for the true likelihood. This makes the estimated likelihood hard to compare with the VAE’s likelihood estimate, which is usually a lower bound on the true likelihood (Burda et al., 2015).
As a result, to illustrate that our model corrects the distribution learned by the VAE and improves the test likelihood, we conduct additional experiments on a 2-D toy dataset. We use the 25-Gaussians dataset, which is generated by a mixture of 25 two-dimensional isotropic Gaussian distributions arranged in a grid. This dataset is also studied in Che et al. (2020). The encoder and decoder of the VAE have 4 fully connected layers with 256 hidden units, and the dimension of the latent variables is 20. Our energy function has 4 fully connected layers with 256 hidden units.
In the 2-D domain, the partition function logZ can be accurately estimated by a numerical integration scheme. For the VAE, we use the IWAE bound (Burda et al., 2015) with 10,000 posterior samples to estimate its likelihood. We use 100,000 test samples from the true distribution to evaluate the likelihood. Our VAEBM obtains the average log likelihood of -1.50 nats on test samples, which significantly improves the VAE, whose average test likelihood is -2.97 nats. As a reference, we also analytically compute the log likelihood of test samples under the true distribution, and the result is -1.10 nats.
We show samples from the true distribution, VAE and VAEBM in Figure 4. We observe that the VAEBM successfully corrects the distribution learned by the VAE and has better sample quality.
5.5 SAMPLING EFFICIENCY
Despite their impressive sample quality, denoising score matching models (Song & Ermon, 2019; Ho et al., 2020) are slow at sampling, often requiring ≳ 1000 MCMC steps. Since VAEBM uses short MCMC chains, it takes only 8.79 seconds to generate 50 CIFAR-10 samples, whereas NCSN (Song & Ermon, 2019) takes 107.9 seconds, which is about 12× slower (see Appendix J for details).
6 CONCLUSIONS
We propose VAEBM, an energy-based generative model in which the data distribution is defined jointly by a VAE and an energy network, the EBM component of the model. In this joint model, the EBM and the VAE form a symbiotic relationship: the EBM component refines the initial VAEdefined distribution, while the VAE’s latent embedding space is used to accelerate sampling from the joint model and therefore enables efficient training of the energy function. We show that our model can be trained effectively in two stages with a maximum likelihood objective and we can efficiently sample it by running short Langevin dynamics chains. Experimental results demonstrate strong generative performance on several image datasets. Future work includes further scaling up the model to larger images, applying it to other domains, and using more advanced sampling algorithms.
B REPARAMETRIZATION FOR EBM
Suppose we draw the re-parametrization variables (ε_x, ε_z) ∼ p_ε(ε_x, ε_z). For convenience, we denote
Tθ(ε_x, ε_z) = (T^x_θ(T^z_θ(ε_z), ε_x), T^z_θ(ε_z)) = (x, z).   (11)
Since Tθ is a deterministic and invertible transformation that maps (ε_x, ε_z) to (x, z), by the change of variables formula, we can write
pθ(x, z) = p_ε(Tθ^{−1}(x, z)) |det(J_{Tθ^{−1}}(x, z))|,   (12)
where J_{Tθ^{−1}} is the Jacobian of Tθ^{−1}. Consider a Gaussian distribution as a simple example: if z ∼ N(µ_z, σ_z) and x|z ∼ N(µ_x(z), σ_x(z)), then
z = T^z_θ(ε_z) = µ_z + σ_z · ε_z,  x = T^x_θ(ε_x, z) = µ_x(z) + σ_x(z) · ε_x,
and
J_{Tθ^{−1}}(x, z) = [σ_x(z)^{−1}, σ_z^{−1}].
2Maximizing ELBO with respect to φ corresponds to minimizing DKL(qφ(z|x)||pθ(z|x)) while θ is fixed.
Recall that the generative model of our EBM is
hψ,θ(x, z) = e^{−Eψ(x)} pθ(x, z) / Zψ,θ.   (13)
We can apply the change of variables to hψ,θ(x, z) in a similar manner:
hψ,θ(ε_x, ε_z) = hψ,θ(Tθ(ε_x, ε_z)) |det(J_{Tθ}(ε_x, ε_z))|,   (14)
where J_{Tθ} is the Jacobian of Tθ.
Since we have the relation
J_{f^{−1}} ∘ f = J_f^{−1}   (15)
for any invertible function f, we have that
hψ,θ(ε_x, ε_z) = hψ,θ(Tθ(ε_x, ε_z)) |det(J_{Tθ}(ε_x, ε_z))|   (16)
= (1/Zψ,θ) e^{−Eψ(Tθ(ε_x, ε_z))} pθ(Tθ(ε_x, ε_z)) |det(J_{Tθ}(ε_x, ε_z))|   (17)
= (1/Zψ,θ) e^{−Eψ(Tθ(ε_x, ε_z))} p_ε(Tθ^{−1}(x, z)) |det(J_{Tθ^{−1}}(x, z))| |det(J_{Tθ}(ε_x, ε_z))|   (18)
= (1/Zψ,θ) e^{−Eψ(Tθ(ε_x, ε_z))} p_ε(Tθ^{−1}(x, z))   (19)
= (1/Zψ,θ) e^{−Eψ(Tθ(ε_x, ε_z))} p_ε(ε_x, ε_z),   (20)
which is the distribution in Eq. 9.
After we obtain samples (ε_x, ε_z) from the distribution in Eq. 20, we obtain (x, z) by applying the transformation Tθ in Eq. 11.
B.1 COMPARISON OF SAMPLING IN (ε_x, ε_z)-SPACE AND IN (x, z)-SPACE
Above we show that sampling from hψ,θ(x, z) is equivalent to sampling from hψ,θ(ε_x, ε_z) and applying the appropriate variable transformation. Here, we further analyze the connections between sampling from these two distributions with Langevin dynamics. Since each component of x and z can be re-parametrized with scaling and translation of standard Gaussian noise, without loss of generality, we assume a variable c (c can be a single latent variable in z or a single pixel in x) and write
c = µ + σε.
Suppose we sample in ε-space with energy function f on c and step size η. The update for ε is
ε_{t+1} = ε_t − (η/2) ∇_ε f + √η ω_t,  ω_t ∼ N(0, I).
Now we plug ε_{t+1} into the expression of c while noting that ∇_ε f = σ∇_c f. We obtain
c_{t+1} = µ + σε_{t+1} = µ + σ(ε_t − (η/2) ∇_ε f + √η ω_t) = µ + σε_t − (σ²η/2) ∇_c f + √(ησ²) ω_t
= c_t − (σ²η/2) ∇_c f + √(ησ²) ω_t.
Therefore, running Langevin dynamics in (ε_x, ε_z)-space is equivalent to running Langevin dynamics in (x, z)-space with the step size for each component of z and x adjusted by its variance. However, considering the high dimensionality of x and z, this per-component step size adjustment is difficult to implement.
The analysis above only considers a variable individually. More importantly, our latent variable z in the prior follows block-wise auto-regressive Gaussian distributions, so the variance of each
component in z_i depends on the value of z_{<i}. We foresee that because of this dependency, using a fixed step size per component of z will not be effective, even when it is set differently for each component. In contrast, all the components in (ε_x, ε_z)-space have unit variance. Hence, a universal step size for all the variables in this space can be used.
To further provide empirical evidence that adjusting the step size for each variable is necessary, we try sampling directly in (x, z)-space without adjusting the step size (i.e., use a universal step size for all variables). Qualitative results are presented in Figure 5. We examine several choices for the step size and we cannot obtain high-quality samples.
In conclusion, the re-parameterization provides an easy implementation to adjust step size for each variable, and the adjustment is shown to be crucial to obtain good samples.
C EXTENSION TO TRAINING OBJECTIVE
In the first stage of training VAEBM, the VAE model is trained by maximizing the training data log-likelihood, which corresponds to minimizing an upper bound on DKL(pd(x)||pθ(x)) w.r.t. θ. In the second stage, when we are training the EBM component, we use the VAE model to sample from the joint VAEBM by running the MCMC updates in the joint space of z and x. Ideally, we may want to bring pθ(x) closer to hψ,θ(x) in the second stage, because when pθ(x) = hψ,θ(x), we will not need the expensive updates for ψ. We can bring pθ(x) closer to hψ,θ(x) by minimizing DKL(pθ(x)||hψ,θ(x)) with respect to θ, which was recently discussed in the context of an EBM interpretation of GANs by Che et al. (2020). To do so, we assume the target distribution hψ,θ(x) is fixed, create a copy of θ, named θ′, and update θ′ by the gradient:
∇θ′DKL(pθ′(x)||hψ,θ(x)) = ∇θ′Ex∼pθ′ (x) [Eψ(x)] (21)
In other words, one update step for θ′ that minimizes DKL(pθ′(x)||hψ,θ(x)) w.r.t. θ′ can be easily done by drawing samples from pθ′(x) and minimizing the energy function w.r.t. θ′. Note that this approach is similar to the generator update in training Wasserstein GANs (Arjovsky et al., 2017). The above KL objective will encourage pθ(x) to model dominant modes in hψ,θ(x). However, it may cause pθ(x) to drop modes.
C.1 DERIVATION
Our derivation largely follows Appendix A.2 of Che et al. (2020). Note that every time we update θ, we are actually taking the gradient w.r.t θ′, which can be viewed as a copy of θ and is initialized as θ. In particular, we should note that the θ in hψ,θ(x) is fixed. Therefore, we have
∇θ′DKL(pθ′(x)||hψ,θ(x)) = ∇θ′ ∫ pθ′(x) [log pθ′(x) − log hψ,θ(x)] dx
= ∫ [∇θ′pθ′(x)] [log pθ′(x) − log hψ,θ(x)] dx + ∫ pθ′(x) [∇θ′ log pθ′(x) − ∇θ′ log hψ,θ(x)] dx  (= 0)   (22)
= ∫ [∇θ′pθ′(x)] [log pθ′(x) − log hψ,θ(x)] dx,   (23)
where the second term in Eq. 22 is 0 because log hψ,θ(x) does not depend on θ′ and the expectation of the score function is 0:
∫ pθ′(x) ∇θ′ log pθ′(x) dx = Ex∼pθ′(x) [∇θ′ log pθ′(x)] = 0.
Recall that θ′ has the same value as θ before the update, so
log pθ′(x) − log hψ,θ(x) = log [ pθ′(x) / (pθ(x) e^{−Eψ(x)}) ] + logZψ,θ = Eψ(x) + logZψ,θ.   (24)
Plugging Eq. 24 into Eq. 23, we have
∇θ′DKL(pθ′(x)||hψ,θ(x)) = ∫ ∇θ′pθ′(x) [Eψ(x) + logZψ,θ] dx = ∇θ′Ex∼pθ′(x) [Eψ(x)],   (25)
since ∫ ∇θ′pθ′(x) logZψ,θ dx = logZψ,θ ∇θ′ ∫ pθ′(x) dx = 0.
C.2 RESULTS
We train VAEBM with an additional loss term that updates the parameter θ to minimize DKL(pθ(x)||hψ,θ(x)), as explained above. Our experiment uses the same initial VAE as in Sec. 5.2, and details of the implementation are introduced in Appendix F. We obtain FID 14.0 and IS 8.05, which is similar to the results of plain VAEBM (FID 12.96 and IS 8.15). Therefore, we conclude that training the model by minimizing DKL(pθ(x)||hψ,θ(x)) does not improve the performance, and updating the decoder is not necessary. This is likely because the initial VAE is already pulled as closely as possible to the data distribution, which is also the target for the joint VAEBM hψ,θ(x).
D COMPARING LIKELIHOODS ON TRAIN AND TEST SET
In Figure 6, we plot a histogram of unnormalized log-likelihoods of 10k CIFAR-10 train set and test set images. We see that our model assigns similar likelihoods to both train and test set images. This indicates that VAEBM generalizes well to unseen data and covers modes in the training data well.
E IMPLEMENTATION DETAILS
In this section, we introduce the details of training and sampling from VAEBM.
NVAE: VAEBM uses NVAE as the pθ(x) component in the model. We train the NVAE with its official implementation3. We largely follow the default settings, with one major difference that we use a Gaussian decoder instead of a discrete logistic mixture decoder as in Vahdat & Kautz (2020). The reason for this is that we can run Langevin dynamics only with continuous variables. The number of latent variable groups for CIFAR-10, CelebA 64, LSUN Church 64 and CelebA HQ 256 are 30, 15, 15 and 20, respectively.
Network for energy function: We largely adopt the energy network structure for CIFAR-10 in Du & Mordatch (2019), and we increase the depth of the network for larger images. There are 2 major differences between our energy networks and the ones used in Du & Mordatch (2019): 1. we replace the LeakyReLU activations with Swish activations, as we found it improves training stability, and 2. we do not use spectral normalization (Miyato et al., 2018); instead, we use weight normalization with data-dependent initialization (Salimans & Kingma, 2016). The network structure for each dataset is presented in Table 7.
Training of energy function: We train the energy function by minimizing the negative log likelihood and an additional spectral regularization loss which penalizes the spectral norm of each convolutional layer in Eψ . The spectral regularization loss is also used in training NVAE, as we found
3https://github.com/NVlabs/NVAE
it helpful to regularize the sharpness of the energy network and better stabilize training. We use a coefficient 0.2 for the spectral regularization loss.
We summarize some key hyper-parameters we used to train VAEBM in Table 8.
On all datasets, we train VAEBM using the Adam optimizer (Kingma & Ba, 2015) and weight decay 3e−5. We use constant learning rates, shown in Table 8. Following Du & Mordatch (2019), we clip training gradients that are more than 3 standard deviations from the 2nd-order Adam parameters.
While persistent sampling using a sample replay buffer has little effect on CIFAR-10, we found it to be useful on large images such as CelebA HQ 256. When we do not use persistent sampling, we always initialize the LD chains with (ε_x, ε_z) sampled from a standard Gaussian. When we use persistent sampling in training, we keep a sample replay buffer that only stores samples of ε_z, while ε_x is always initialized from a standard Gaussian. The size of the replay buffer is 10,000 for CIFAR-10 and LSUN Church 64, and 8,000 for CelebA HQ 256. At every training iteration, we initialize the MCMC chains on ε_z by drawing ε_z from the replay buffer with probability p and from a standard Gaussian with probability 1 − p. For CIFAR-10 and LSUN Church 64, we linearly increase p from 0 to 0.6 in 5,000 training iterations, and for CelebA HQ 256, we linearly increase p from 0 to 0.6 in 3,000 training iterations. The settings of Langevin dynamics are presented in Table 8.
We do not explicitly set the number of training iterations. Instead, we follow Du & Mordatch (2019) to train the energy network until we cannot generate realistic samples anymore. This happens when the model overfits the training data and hence energies of negative samples are much larger than energies of training data. Typically, training takes around 25,000 iterations (or 16 epochs) on CIFAR-10, 20,000 iterations (or 3 epochs) on CelebA 64, 20,000 iterations (or 5 epochs) on LSUN Church 64, and 9,000 iterations (or 5 epochs) on CelebA HQ 256.
Test time sampling: After training the model, we generate samples for evaluation by running Langevin dynamics with (ε_x, ε_z) initialized from a standard Gaussian, regardless of whether persistent sampling is used in training or not. We run slightly longer LD chains than during training to obtain the best sample quality. In particular, our reported values are obtained by running 16 steps of LD for CIFAR-10, 20 steps of LD for CelebA 64 and LSUN Church 64, and 24 steps for CelebA HQ 256. The step sizes are the same as the training step sizes.
On the CelebA HQ 256 dataset, we optionally use low-temperature initialization for better visual quality. To do this, we first draw samples from the VAE with low temperature and readjusted BN statistics, as introduced by Vahdat & Kautz (2020), and then initialize the MCMC chain with (ε_x, ε_z) obtained by encoding the low-temperature samples using the VAE's encoder without readjusted BN statistics.
Evaluation metrics: We use the official implementations of FID4 and IS5. We compute IS using 50k CIFAR 10 samples, and we compute FID between 50k generated samples and training images, except for CelebA HQ 256 where we use 30k training images (the CelebA HQ dataset contains only 30k samples).
F SETTINGS FOR ABLATION STUDY
In this section, we present the details of ablation experiments in Sec. 5.2. Throughout ablation experiments, we use a smaller NVAE with 20 groups of latent variables trained on CIFAR-10. We use the same network architectures for the energy network as in Table 7, with potentially different
4https://github.com/bioinf-jku/TTUR 5https://github.com/openai/improved-gan/tree/master/inception_score
normalization techniques discussed below. We spent significant efforts on improving each method we compare against, and we report the settings that led to the best results.
WGAN initialized with NVAE decoder: We initialize the generator with the pre-trained NVAE decoder, and the discriminator is initialized by a CIFAR-10 energy network with random weights. We use spectral normalization and batch normalization in the discriminator as we found them necessary for convergence. We update the discriminator using the Adam optimizer with constant learning rate 5e−5, and update the generator using the Adam optimizer with initial learning rate 5e−6 and cosine decay schedule. We train the generator and discriminator for 40k iterations, and we reach convergence of sample quality towards the end of training.
EBM on x, w/ or w/o initializing MCMC with NVAE samples: We train two EBMs on data space similar to Du & Mordatch (2019), where for one of them, we use the pre-trained NVAE to initialize the MCMC chains that draw samples during training. The setting for training these two EBMs are the same except for the initialization of MCMC. We use spectral normalization in the energy network and energy regularization in the training objective as done in Du & Mordatch (2019) because we found these modifications to improve performance. We train the energy function using the Adam optimizer with constant learning rate 1e−4. We train for 100k iterations, and we reach convergence of sample quality towards the end of training. During training, we draw samples from the model following the MCMC settings in Du & Mordatch (2019). In particular, we use persistent sampling and sample from the sample replay buffer with probability 0.95. We run 60 steps of Langevin dynamics to generate negative samples and we clip gradients to have individual value magnitudes of less than 0.01. We use a step size of 10 for each step of Langevin dynamics. For test time sampling, we generate samples by running 150 steps of LD with the same settings as during training.
VAEBM with DKL(pθ(x)||hψ,θ(x)) loss: We use the same network structure for Eψ as in VAEBM. We find that persistent sampling significantly hurts the performance in this case, possibly due to the fact that the decoder is updated and hence the initial samples from the decoder change throughout training. Therefore, we do not use persistent training. We train the energy function using the Adam optimizer with constant learning rate 5e−5. We draw negative samples by running 10 steps of LD with step size 8e−5. We update the decoder with the gradient in Eq. 21 using the Adam optimizer with initial learning rate 5e−6 and cosine decay schedule. For test time sampling, we run 15 steps of LD with step size 5e−6.

VAEBM: The training of VAEBM in this section largely follows the settings described in Appendix E. We use the same energy network as for CIFAR-10, and we train using the Adam optimizer with constant learning rate 5e−5. Again, we found that the performance of VAEBM with or without persistent sampling is similar. We adopt persistent sampling in this section because it is faster. The setting for the buffer is the same as in Appendix E. We run 5 steps of LD with step size 8e−5 during training, and 15 steps of LD with the same step size in testing.
G QUALITATIVE RESULTS OF ABLATION STUDY
In Figure 7, we show qualitative samples from models corresponding to each item in Table 4, as well as samples generated by VAEBM with additional DKL(pθ(x)||hψ,θ(x)) loss.
H ADDITIONAL QUALITATIVE RESULTS
We present additional qualitative results in this section.
Additional samples and visualizations of MCMC on CIFAR-10 are in Figures 8 and 9, respectively.
Additional samples on CelebA 64 are in Figure 10.
Additional samples on LSUN Church 64 are in Figure 11. We visualize the effect of running MCMC by displaying sample pairs before and after MCMC in Figure 12.
Additional samples on CelebA HQ 256 generated by initializing VAE samples with temperature 0.7 are shown in Figure 13. Samples generated by initializing VAE samples with full temperature 1.0 are shown in Figure 14. We visualize the effect of running MCMC by displaying sample pairs
before and after MCMC in Figure 15. Note that the samples used to visualize MCMC are generated by initializing MCMC chains with VAE samples with full temperature 1.0.
I NEAREST NEIGHBORS
We show nearest neighbors in the training set with generated samples on CIFAR-10 (in Figure 16 and 17) and CelebA HQ 256 (in Figure 18 and 19). We observe that the nearest neighbors are significantly different from the samples, suggesting that our models generalize well.
J SETTINGS OF SAMPLING SPEED EXPERIMENT
We use the official implementation and checkpoints of NCSN at https://github.com/ermongroup/ncsn. We run the experiments on a computer with a Titan RTX GPU. We use PyTorch 1.5.0 and CUDA 10.2. | 1. What is the focus of the paper regarding generative models?
2. What are the strengths of the proposed approach, particularly in combining two models?
3. Do you have any concerns about the training process and its potential optimality?
4. Can the proposed model compute likelihood, and how does it compare to other models in this regard?
5. Are there any additional experiments or comparisons that could further support the contributions of the paper? | Review | Review
The authors propose a generative model that is a combination (product) of a VAE and an EBM, where the goal of the EBM is to reduce the probability of out-of-manifold samples, which are typically generated by VAEs. The authors propose efficient training and sampling procedures, in which the VAE is trained first and during the EBM negative-phase, samples are drawn from the joint (x, z) VAE space using reparameterization. The method is shown to achieve high quality samples on several modern image datasets, good FID scores and mode coverage. Ablation studies show the contribution of the different elements.
This is, in my opinion, a very good work, which combines a novel and well-motivated idea with clear writing and extensive experimental evidence.
Some comments and questions:
Does the separate two-stage training enable the model to reach the optimal point that can be reached in joint training, or is it an approximation? If it's an approximation, I think it should be discussed or perhaps bounded.
Does the combined model allow computing the likelihood? Can it be evaluated and compared to other models in terms of bits/dimension (e.g. as in VAE or NVAE)?
It might be interesting (not something that I think is mandatory) to measure the NVAE log-likelihood of samples generated by the combined model compared to samples generated just by the NVAE.
To summarize: pros:
novelty
significance
experimental evidence
quality of writing
cons:
combining two separately trained models - perhaps sub-optimal
Update: I thank the authors for their answers and the revised version and keep my positive rating.
ICLR | Title
From Nodes to Networks: Evolving Recurrent Neural Networks
Abstract
Gated recurrent networks such as those composed of Long Short-Term Memory (LSTM) nodes have recently been used to improve state of the art in many sequential processing tasks such as speech recognition and machine translation. However, the basic structure of the LSTM node is essentially the same as when it was first conceived 25 years ago. Recently, evolutionary and reinforcement learning mechanisms have been employed to create new variations of this structure. This paper proposes a new method, evolution of a tree-based encoding of the gated memory nodes, and shows that it makes it possible to explore new variations more effectively than other methods. The method discovers nodes with multiple recurrent paths and multiple memory cells, which lead to significant improvement in the standard language modeling benchmark task. Remarkably, this node did not perform well in another task, music modeling, but it was possible to evolve a different node that did, demonstrating that the approach discovers customized structure for each task. The paper also shows how the search process can be speeded up by training an LSTM network to estimate performance of candidate structures, and by encouraging exploration of novel solutions. Thus, evolutionary design of complex neural network structures promises to improve performance of deep learning architectures beyond human ability to do so.
1 INTRODUCTION
In many areas of engineering design, the systems have become so complex that humans can no longer optimize them, and instead, automated methods are needed. This has been true in VLSI design for a long time, but it has also become compelling in software engineering: The idea in ”programming by optimization” is that humans should design only the framework and the details should be left for automated methods such as optimization (Hoos, 2012). Recently similar limitations have started to emerge in deep learning. The neural network architectures have grown so complex that humans can no longer optimize them; hyperparameters and even entire architectures are now optimized automatically through gradient descent (et al., 2016), Bayesian parameter optimization (Malkomes et al., 2015), reinforcement learning (Zoph & Le, 2016; Baker et al., 2016), and evolutionary computation (Miikkulainen et al., 2018; Real, 2017; Fernando, 2017). Improvements from such automated methods are significant: the structure of the network matters.
This paper shows that the same approach can be used to improve architectures that have been used essentially unchanged for decades. The case in point is the Long Short-Term Memory (LSTM) network (Hochreiter & Schmidhuber, 1997). It was originally proposed in 1992; with the vastly increased computational power, it has recently been shown to be a powerful approach for sequential tasks such as speech recognition, language understanding, language generation, and machine translation, in some cases improving performance by 40% over traditional methods (Bahdanau et al., 2015). The basic LSTM structure has changed very little in this process, and thorough comparisons of variants concluded that there's little to be gained by modifying it further (Klaus et al., 2014; Jozefowicz et al., 2015).
However, very recent studies on metalearning methods such as neural architecture search and evolutionary optimization have shown that LSTM performance can be improved by complexifying it further (Zoph & Le, 2016; Miikkulainen et al., 2018). This paper develops a new method along these lines, recognizing that a large search space where significantly more complex node structures
can be constructed could be beneficial. The method is based on a tree encoding of the node structure so that it can be efficiently searched using genetic programming. Indeed, the approach discovers significantly more complex structures than before, and they indeed perform significantly better: Performance in the standard language modeling benchmark, where the goal is to predict the next word in a large language corpus, is improved by 6 perplexity points over the standard LSTM (Zaremba et al., 2014), and 0.9 perplexity points over reinforcement-learning based neural architecture search (Zoph & Le, 2016).
These improvements are obtained by constructing a homogeneous layered network architecture from a single gated recurrent node design. A second innovation in this paper shows that further improvement can be obtained by constructing such networks from multiple different designs. As a first step, allocation of different kinds of LSTM nodes into slots in the network is shown to improve performance by another 0.5 perplexity points. This result suggests that further improvements are possible with more extensive network-level search.
A third contribution of this paper is to show that evolution of neural network architectures in general can be speeded up significantly by using an LSTM network to predict the performance of candidate neural networks. After training the candidate for a few epochs, such a Meta-LSTM network predicts what performance a fully trained network would have. That prediction can then be used as fitness for the candidate, speeding up evolution fourfold in these experiments. A fourth contribution is to encourage exploration by using an archive of already-explored areas. The effect is similar to that of novelty search, but does not require a separate novelty objective, simplifying the search.
Interestingly, when the recurrent node evolved for language modeling was applied to another task, music modeling, it did not perform well. However, it was possible to evolve another solution for that task that did. As a fifth contribution, the results in this paper demonstrate that it is not simply the added complexity in the nodes that matter, but that it is the right kind, i.e. complexity customized for each task.
Thus, evolutionary optimization of complex deep learning architectures is a promising approach that can yield significant improvements beyond human ability to do so.
2 BACKGROUND AND RELATED WORK
In recent years, LSTM-based recurrent networks have been used to achieve strong results in the supervised sequence learning problems such as in speech recognition [10] and machine translation (Bahdanau et al., 2015). Further techniques have been developed to improve performance of these models through ensembling (Zaremba et al., 2014), shared embeddings (Zilly et al., 2016) and dropouts (Gal, 2015).
In contrast, previous studies have shown that modifying the LSTM design itself did not provide any significant performance gains (Bayer et al., 2009; Cho et al., 2014; Jozefowicz et al., 2015). However, a recent paper from Zoph & Le (2016) showed that policy gradients can be used to train a LSTM network to find better LSTM designs. The network is rewarded based on the performance of
the designs it generates. While this approach can be used to create new designs that perform well, its exploration ability is limited (as described in more detail in Section 3.3). The setup detailed in Zoph & Le (2016) is used for comparison in this paper. In a subsequent paper Pham et al. (2018), the same policy gradient approach is used to discover new recurrent highway networks to achieve even better results.
Neuroevolution methods like NEAT (Stanley & Miikkulainen, 2002) are an alternative to policy gradient approaches, and have also been shown to be successful in the architecture search problem (Miikkulainen et al., 2018; Real, 2017). For instance, Cartesian genetic programming was recently used to achieve state-of-the-art results in CIFAR-10 (Suganuma et al., 2017). Along similar lines, a tree-based variant of genetic programming is used in this paper to evolve recurrent nodes. These trees can grow in structure and can be pruned as well, thus providing a flexible representation.
Novelty search is a particularly useful technique to increase exploration in evolutionary optimization (Lehman, 2012). Novelty is often cast as a secondary objective to be optimized. It allows searching in areas that do not yield immediate benefit in terms of fitness, but make it possible to discover stepping stones that can be combined to form better solutions later. This paper proposes an alternative approach: keeping an archive of areas already visited and exploited, achieving similar goals without additional objectives to optimize.
Most architecture search methods reduce compute time by evaluating individuals only after partial training (Suganuma et al., 2017; Real, 2017). This paper proposes a meta LSTM framework to predict final network performance based on partial training results.
These techniques are described in detail in the next section.
3 METHODS
Evolving recurrent neural networks is an interesting problem because it requires searching the architecture of both the node and the network. As shown by recent research (Zoph & Le, 2016; Zilly et al., 2016), the recurrent node in itself can be considered a deep network. In this paper, Genetic Programming (GP) is used to evolve such node architectures. In the first experiment, the overall network architecture is fixed, i.e. constructed by repeating a single evolved node to form a layer (Figure 1(b)). In the second, it is evolved by combining several different types of nodes into a layer (Figure 1(c)). In the future, more complex coevolution approaches are also possible.
Evaluating the evolved node and network is costly. Training the network for 40 epochs takes two hours on an NVIDIA 1080 GPU. A sequence-to-sequence model called Meta-LSTM is developed to speed up evaluation. The following sections describe these methods in detail.
3.1 GENETIC PROGRAMMING FOR RECURRENT NODES
As shown in Figure 1(a), a recurrent node can be represented as a tree structure, and GP can therefore be used to evolve it. However, standard GP may not be sufficiently powerful to do it. In particular, it does not maintain sufficient diversity in the population. Similar to the GP-NEAT approach by Trujillo et al. (2015), it can be augmented with ideas from NEAT speciation.
A recurrent node usually has two types of outputs. The first, denoted by symbol h in Figure 1(a), is the main recurrent output. The second, often denoted by c, is the native memory cell output. The h value is weighted and fed to three locations: (1) to the higher layer of the network at the same time step, (2) to other nodes in the network at the next time step, and (3) to the node itself at the next time step. Before propagation, the h values are combined with weighted activations from the previous layer, such as input word embeddings in language modeling, to generate eight node inputs (termed base eight by Zoph & Le (2016)). In comparison, the standard LSTM node has four inputs (see Figure 5(a)). The native memory cell output is fed back, without weighting, only to the node itself at the next time step. The connections within a recurrent cell are not trainable by backpropagation and they all carry a fixed weight of 1.0.
Thus, even without an explicit recurrent loop, the recurrent node can be represented as a tree. There are two types of elements in the tree: (1) linear activations with arity two (add, multiply), and (2) non-linear activations with arity one (tanh, sigmoid, relu, sin, cos).
There are three kinds of mutation operations in the experiments: (1) Mutation to randomly replace an element with an element of the same type, (2) Mutation to randomly insert a new branch at a random position in the tree. The subtree at the chosen position is used as a child node of the newly created subtree. (3) Mutation to shrink the tree by choosing a branch randomly and replacing it with one of the branch's arguments (also randomly chosen).
One limitation of a standard tree is that it can have only a single output: the root. This problem can be overcome by using a modified representation of a tree that consists of Modi outputs (Zhang & Zhang, 2004). In this approach, with some probability p (termed the modi rate), non-root nodes can be connected to any of the possible outputs. A higher modi rate would lead to many sub-tree nodes connected to different outputs. A node is assigned modi (i.e. connected to memory cell outputs c or d) only if its sub-tree has a path from native memory cell inputs.
This representation allows searching for a wide range of recurrent node structures with GP.
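A minimal sketch of this tree encoding and the replace/insert/shrink mutations is shown below; the class and function names are our own illustrative choices and omit details such as Modi output assignment and the wiring of the eight node inputs.

# Sketch of the tree encoding of a recurrent node and its mutation operators.
# Element pools mirror the text: arity-2 linear ops and arity-1 nonlinearities.
import random

LINEAR = ["add", "mul"]                                 # arity 2
NONLINEAR = ["tanh", "sigmoid", "relu", "sin", "cos"]   # arity 1

class TreeNode:
    def __init__(self, op, children=None):
        self.op = op
        self.children = children or []

def random_subtree():
    # A tiny random branch used by the insert mutation.
    op = random.choice(LINEAR)
    leaves = [TreeNode(random.choice(NONLINEAR)) for _ in range(2)]
    return TreeNode(op, leaves)

def mutate_replace(node):
    # (1) Replace an element with another element of the same type.
    pool = LINEAR if node.op in LINEAR else NONLINEAR
    node.op = random.choice([p for p in pool if p != node.op])

def mutate_insert(node, child_idx=0):
    # (2) Insert a new branch; the existing subtree becomes its child.
    old = node.children[child_idx]
    new_branch = random_subtree()
    new_branch.children[0] = old
    node.children[child_idx] = new_branch

def mutate_shrink(node, child_idx=0):
    # (3) Replace a branch with one of its (randomly chosen) arguments.
    branch = node.children[child_idx]
    if branch.children:
        node.children[child_idx] = random.choice(branch.children)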
3.2 SPECIATION AND CROSSOVER
One-point crossover is the most common type of crossover in GP. However, since it does not take into account the tree structure, it can often be destructive. An alternative approach, called homologous crossover (Francone et al., 1999), is designed to avoid this problem by crossing over the common regions in the tree. Similar tree structures in the population can be grouped into species, as is often done in NEAT (Trujillo et al., 2015). Speciation achieves two objectives: (1) it makes homologous crossover effective, since individuals within species are similar, and (2) it helps keep the population diverse, since selection is carried out separately in each species. A tree distance metric proposed by Trujillo et al. (2015) is used to determine how similar the trees are (see A.1 for details).
In most GP implementations, there is a concept of the left and the right branch. A key extension in this paper is that the tree distance is computed by comparing trees after all possible tree rotations, i.e. swaps of the left and the right branch. Without such a comprehensive tree analysis, two trees that are mirror images of each other might end up in different species. This approach reduces the search space by not searching for redundant trees. It also ensures that crossover can be truly homologous (Figure 2(a)).
The structural mutations in GP, i.e. insert and shrink, can lead to recycling of the same structure across multiple generations. In order to avoid such repetitions, an archive called Hall of Shame
is maintained during evolution (Figure 2(b)). This archive consists of individuals representative of stagnated species, i.e. regions in the architecture space that have already been discovered by evolution but are no longer actively searched. During reproduction, new offspring are repeatedly mutated until they result in an individual that does not belong to the Hall of Shame. Mutations that lead to the Hall of Shame are not discarded, but instead used as stepping stones to generate better individuals. Such memory-based evolution is similar to novelty search. However, unlike novelty search (Lehman, 2012), there is no additional fitness objective, simply an archive.
3.3 SEARCH SPACE: NODE
GP evolution of recurrent nodes starts with a simple fully connected tree. During the course of evolution, the tree size increases due to insert mutations and decreases due to shrink mutations. The maximum possible height of the tree is fixed at 15. However, there is no restriction on the maximum width of the tree.
The search space for the nodes is more varied and several orders of magnitude larger than in previous approaches. More specifically, the main differences from the state-of-the-art Neural Architecture Search (NAS) (Zoph & Le, 2016) are: (1) NAS searches for trees of fixed height 10 layers deep; GP searches for trees with height varying between six (the size of the fully connected simple tree) and 15 (a constraint added to GP). (2) Unlike in NAS, different leaf elements can occur at varying depths in GP. (3) NAS adds several constraints to the tree structure. For example, a linear element in the tree is always followed by a non-linear element. GP prevents only consecutive non-linearities (they would cause loss of information since the connections within a cell are not weighted). (4) In NAS, inputs to the tree are used only once; in GP, the inputs can be used multiple times within a node.
Most gated recurrent node architectures consist of a single native memory cell (denoted by output c in Figure 1(a)). This memory cell is the main reason why LSTMs perform better than simple RNNs. One key innovation introduced in this paper is to allow multiple native memory cells within a node. The memory cell output is fed back as input in the next time step without any modification, i.e. this recurrent loop is essentially a skip connection. Adding another memory cell in the node therefore does not affect the number of trainable parameters: it only adds to the representational power of the node.
3.4 SEARCH SPACE: NETWORK
Standard recurrent networks consist of layers formed by repetition of a single type of node. However, the search for better recurrent nodes through evolution often results in solutions with similar task performance but very different structure. Forming a recurrent layer by combining such diverse node solutions is potentially a powerful idea, related to the idea of ensembling, where different models are combined together to solve a task better.
In this paper, such heterogeneous recurrent networks are constructed by combining diverse evolved nodes into a layer (Figure 1(c)). A candidate population is created that consists of top-performing evolved nodes that are structurally very different from other nodes. The structure difference is calculated using the tree distance formula detailed previously. Each heterogeneous layer is constructed by selecting nodes randomly from the candidate population. Each node is repeated 20 times in a layer; thus, if the layer size is e.g. 100, it can consist of five different node types, each of cardinality 20.
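As a concrete illustration, such a layer can be assembled as in the sketch below; the function name, layer width, and block size are placeholders that match the example in the text rather than an actual implementation.

# Sketch of assembling a heterogeneous recurrent layer from a candidate pool of
# diverse evolved node types; each selected type fills a block of 20 units.
import random

def build_heterogeneous_layer(candidate_nodes, layer_size=100, block=20):
    assert layer_size % block == 0
    n_types = layer_size // block
    chosen = [random.choice(candidate_nodes) for _ in range(n_types)]
    # The returned list maps each unit slot in the layer to a node type.
    return [node for node in chosen for _ in range(block)]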
The random search is an initial test of this idea. As described in Section 5, in the future the idea is to search for such heterogeneous recurrent networks using a genetic algorithm as well.
3.5 META-LSTM FOR FITNESS PREDICTION
In both node and network architecture search, it takes about two hours to fully train a network until 40 epochs. With sufficient computing power it is possible to do it: for instance Zoph & Le (2016) used 800 GPUs for training multiple such solutions in parallel. However, if training time could be shortened, no matter what resources are available, those resources could be used better.
A common strategy for such situations is early stopping (Suganuma et al., 2017), i.e. selecting networks based on partial training. For example, in the case of recurrent networks, the training time would be cut down to one fourth if the best network could be picked based on the 10th epoch validation loss instead of the 40th. Figure 3 demonstrates that this is not a good strategy, however. Networks that train faster in the initial epochs often end up with a higher final loss.
To overcome costly evaluation and to speed up evolution, a Meta-LSTM framework for fitness prediction was developed. Meta-LSTM is a sequence-to-sequence model (Sutskever et al., 2014) that consists of an encoder RNN and a decoder RNN (see Figure 4(a)). Validation perplexity of the first 10 epochs is provided as sequential input to the encoder, and the decoder is trained to predict the validation loss at epoch 40. Training data for these models is generated by fully training sample networks (i.e. until 40 epochs). The loss is the mean absolute error percentage at epoch 40. This error measure is used instead of mean squared error because it is unaffected by the magnitude of perplexity (poor networks can have very large perplexity values that overwhelm MSE). The hyperparameter values of the Meta-LSTM were selected based on its performance on the validation dataset. The best configuration, which achieved an error rate of 3%, is an ensemble of two seq2seq models: one with a decoder length of 30 and the other with a decoder length of 1.
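A minimal PyTorch sketch of such a fitness predictor is given below. It simplifies the encoder-decoder design to an encoder with a regression head, and the sizes are illustrative rather than the tuned configuration reported above.

# Sketch of a Meta-LSTM-style fitness predictor: encode the first 10 validation
# perplexities and predict the epoch-40 perplexity. Sizes are illustrative.
import torch
import torch.nn as nn

class MetaLSTM(nn.Module):
    def __init__(self, hidden=40):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                               batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, ppl_curve):            # ppl_curve: (batch, 10)
        _, (h, _) = self.encoder(ppl_curve.unsqueeze(-1))
        return self.head(h[-1]).squeeze(-1)  # predicted epoch-40 perplexity

def mape_loss(pred, target):
    # Mean absolute error percentage, robust to very large perplexity values.
    return (torch.abs(pred - target) / target).mean()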
Recent approaches to network performance prediction include Bayesian modeling (Klein et al., 2017) and regression curve fitting (Baker et al., 2017). The learning curves for which the above methods are deployed are much simpler than the learning curves of structures discovered by evolution (see Appendix). Note that the Meta-LSTM is trained separately and only deployed for use during evolution. Thus, networks can be partially trained with a 4× speedup, and assessed with near-equal accuracy as with full training.
4 EXPERIMENTS
Neural architectures were constructed for the language modeling task, using Meta-LSTM as the predictor of training performance. In the first experiment, homogeneous networks were constructed from single evolved nodes, and in the second, heterogeneous networks that consisted of multiple evolved nodes.
4.1 NATURAL LANGUAGE MODELING TASK
Experiments focused on the task of predicting the next word in the Penn Tree Bank corpus (PTB), a well-known benchmark for language modeling (Marcus et al., 1993). LSTM architectures in general tend to do well in this task, and improving them is difficult (Zaremba et al., 2014; Jozefowicz et al., 2015; Gal, 2015). The dataset consists of 929k training words, 73k validation words, and 82k test
words, with a vocabulary of 10k words. During training, successive minibatches of size 20 are used to traverse the training set sequentially.
4.2 MUSIC MODELING TASK
Music consists of a sequence of notes that often exhibit temporal dependence. Predicting future notes based on the previous notes can therefore be treated as a sequence prediction problem. Similar to natural language, musical structure can be captured using a music language model (MLM). Just like natural language models form an important component of speech recognition systems, a polyphonic music language model is an integral part of automatic music transcription (AMT). AMT is defined as the problem of extracting a symbolic representation from music signals, usually in the form of a time-pitch representation called piano-roll, or in a MIDI-like representation.
MLM predicts the probability distribution of the notes in the next time step. Multiple notes can be turned on at a given time step for playing chords. The input is a piano-roll representation, in the form of an 88 × T matrix M, where T is the number of timesteps, and 88 corresponds to the number of keys on a piano, between MIDI notes A0 and C8. M is binary, such that M[p, t] = 1 if and only if the pitch p is active at timestep t. In particular, held notes and repeated notes are not differentiated. The output is of the same form, except it only has T − 1 timesteps (the first timestep cannot be predicted since there is no previous information).
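The input/target construction implied by this setup can be sketched as follows, assuming a binary piano-roll tensor has already been extracted from MIDI; the function name is an illustrative placeholder.

# Sketch of preparing inputs and targets from a binary piano-roll M of shape
# (88, T): the model sees timesteps 0..T-2 and predicts timesteps 1..T-1.
import torch

def make_mlm_batch(piano_roll):              # piano_roll: (88, T) binary tensor
    x = piano_roll[:, :-1].t().float()       # (T-1, 88) inputs
    y = piano_roll[:, 1:].t().float()        # (T-1, 88) targets
    return x.unsqueeze(0), y.unsqueeze(0)    # add a batch dimension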
The dataset piano-midi.de is used as the benchmark data. This dataset holds 307 pieces of classical piano music from various composers. It was made by manually editing the velocities and the tempo curve of quantized MIDI files in order to give them a natural interpretation and feeling (Ycart & Benetos, 2017). MIDI files encode explicit timing, pitch, velocity and instrumental information of the musical score.
4.3 NETWORK TRAINING DETAILS
During evolution, each network has two layers of 540 units each, and is unrolled for 35 steps. The hidden states are initialized to zero; the final hidden states of the current minibatch are used as the initial hidden states of the subsequent minibatch. The dropout rate is 0.4 for feedforward connections and 0.15 for recurrent connections (Gal, 2015). The network weights have L2 penalty of 0.0001. The evolved networks are trained for 10 epochs with a learning rate of 1; after six epochs the learning rate is decreased by a factor of 0.9 after each epoch. The norm of the gradients (normalized by
minibatch size) is clipped at 10. Training a network for 10 epochs takes about 30 minutes on an NVIDIA 1080 GPU. The following experiments were conducted on 40 such GPUs.
The Meta-LSTM consists of two layers, 40 nodes each. To generate training data for it, 1000 samples from a preliminary node evolution experiment were obtained, representing a sampling of designs that evolution discovers. Each of these sample networks was trained for 40 epochs with the language modeling training set; the perplexity on the language modeling validation set was measured in the first 10 epochs, and at 40 epochs. The Meta-LSTM network was then trained to predict the perplexity at 40 epochs, given a sequence of perplexity during the first 10 epochs as input. A validation set of 500 further networks was used to decide when to stop training the Meta-LSTM, and its accuracy was measured with another 500 networks.
In line with Meta-LSTM training, during evolution each candidate is trained for 10 epochs, and tested on the validation set at each epoch. The sequence of such validation perplexity values is fed into the trained Meta-LSTM model to obtain its predicted perplexity at epoch 40; this prediction is then used as the fitness for that candidate. The individual with the best fitness after 30 generations is scaled to a larger network consisting of 740 nodes in each layer. This setting matches the 32 Million parameter configuration used by Zoph & Le (2016). A grid search over dropout rates is carried out to fine-tune the model. Its performance after 180 epochs of training is reported as the final result (Table 1).
4.4 EXPERIMENT 1: EVOLUTION OF RECURRENT NODES
A population of size 100 was evolved for 30 generations with a crossover rate of 0.6, insert and shrink mutation probabilities of 0.6 and 0.3, respectively, and a modi rate (i.e. the probability that a newly added node is connected to a memory cell output) of 0.3. A compatibility threshold of 0.3 was used for speciation; a species is marked as stagnated and added to the Hall of Shame if the best fitness among its candidates does not improve in four generations. Each node is allowed to have three outputs: one main recurrent output (h) and two native memory cell outputs (c and d).
The best evolved node is shown in Figure 5. The evolved node reuses inputs as well as utilizes the extra memory cell pathways. As shown in Table 1, the evolved node (called GP Node evolution in the table) achieves a test performance of 68.2 for the 20 Million parameter configuration on Penn Tree Bank. This is 2.8 perplexity points better than the test performance of the node discovered by NAS (Zoph (2016) in the table) in the same configuration. The evolved node also outperforms NAS in the 32 Million configuration (68.1 vs. 66.5). Recent work has shown that sharing input and output embedding weight matrices of neural network language models improves performance (Press & Wolf, 2016). The experimental results obtained after including this method are marked as shared embeddings in Table 1.
It is also important to understand the impact of using the Meta-LSTM in evolution. For this purpose, an additional evolution experiment was conducted, where each individual was assigned a fitness equal to its 10th epoch validation perplexity. As evolution progressed, in each generation, the best individual was trained fully until epoch 40. Similarly, the best individual from an evolution experiment with the Meta-LSTM enabled was fully trained. The epoch 40 validation perplexity in these two cases has been plotted in Figure 4(b). This figure demonstrates that individuals that are selected based upon Meta-LSTM prediction perform better than the ones selected using only partial training.
4.5 EXPERIMENT 2: HETEROGENEOUS RECURRENT NETWORKS
The top 10% of the population from 10 runs of Experiment 1 was collected into a pool of 100 nodes. Out of these, 20 that were the most diverse, i.e. had the largest tree distance from the others, were selected for constructing heterogeneous layers (as shown in Figure 1(c)). Nodes were chosen from this pool randomly to form 2000 such networks. Meta-LSTM was again used to speed up evaluation.
After hyperparameter tuning, the best network (for the 25 Million parameter configuration) achieved a perplexity of 62.2, i.e. 0.8 better than the homogeneous network constructed from the best evolved node. This network is also 0.7 perplexity points better than the best NAS network of double its size (54 Million parameters). Interestingly, the best heterogeneous network was also found to be more robust to hyperparameter changes than the homogeneous network. This result suggests that diversity not only
improves performance, but also adds flexibility to the internal representations. The heterogeneous network approach therefore forms a promising foundation for future work, as discussed next.
4.6 EXPERIMENT 3: MUSIC MODELING
The piano-midi.de dataset is divided into train (60%), test (20%) and validation (20%) sets. The music model consists of a single recurrent layer of width 128. The input and output layers are 88 wide each. The network is trained for 50 epochs with Adam at a learning rate of 0.01. The network is trained by minimizing cross entropy between the output of the network and the ground truth. For evaluation, F1 score is computed on the test data. F1 score is the harmonic mean of precision and recall (higher is better). Since the network is smaller, regularization is not required.
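For evaluation, the frame-level F1 score can be computed as in the sketch below, assuming the network outputs per-pitch probabilities that are thresholded at 0.5; the threshold and function name are illustrative assumptions.

# Sketch of frame-level F1 for piano-roll prediction; probs and targets are
# tensors of the same shape, with binary targets. The 0.5 threshold is assumed.
import torch

def f1_score(probs, targets, threshold=0.5, eps=1e-8):
    preds = (probs > threshold).float()
    tp = (preds * targets).sum()
    precision = tp / (preds.sum() + eps)
    recall = tp / (targets.sum() + eps)
    return (2 * precision * recall / (precision + recall + eps)).item()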
Note that this setup is similar to that of Ycart & Benetos (2017). The goal of this experiment is not to achieve state-of-the-art results but to perform an apples-to-apples comparison between LSTM nodes and evolved nodes (discovered for language) in a new domain, i.e. music.
In this transfer experiment, three networks were constructed: the first with LSTM nodes, the second with NAS nodes, and the third with evolved nodes. All three networks were trained under the same settings as described in the previous section. The F1 score of each of the three models is shown in Table 2. LSTM nodes outperform both NAS and evolved nodes. This result is interesting because both NAS and evolved nodes significantly outperformed LSTM nodes in the language-modeling task. This result suggests that NAS and evolved nodes are custom solutions for a specific domain, and do not necessarily transfer to other domains.
However, the framework developed for evolving recurrent nodes for natural language can be transferred to the music domain as well. The setup is the same, i.e. at each generation a population of recurrent nodes represented as trees is evaluated for performance in the music domain. The validation performance of the network constructed from the respective tree node is used as the node fitness. The performance measure of the network in the music domain is the F1 score; therefore, it is used as the network fitness value.
The evolution parameters are the same as those used for language modeling. Meta-LSTM is not used for this evolution experiment because the run-time of each network is relatively small (< 600 seconds). The results from evolving a custom node for music are shown in Table 2. The custom node (GP Evolution (Music)) achieves an improvement of five points in F1 score over LSTM (Figure 6). Thus, evolution was able to discover custom structure for the music modeling domain as well, and it was different from the structure in the language domain.
5 DISCUSSION AND FUTURE WORK
The experiments in this paper demonstrate how evolutionary optimization can discover improvements to designs that have been essentially unchanged for 25 years. Because it is a population-based method, it can harness more extensive exploration than other meta-learning techniques such as reinforcement learning, Bayesian parameter optimization, and gradient descent. It is therefore in a position to discover novel, innovative solutions that are difficult to develop by hand or through gradual improvement. Remarkably, the node that performed well in language modeling performed poorly in music modeling, but evolution was able to discover a different node that performed well in music. Apparently, the approach discovers regularities in each task and develops node structures that take advantage of them, thus customizing the nodes separately for each domain. Analyzing what those regularities are and how the structures encode them is an interesting direction of future work.
The GP-NEAT evolutionary search method in this paper is run in the same search space used by NAS (Zoph & Le, 2016), resulting in significant improvements. In a recent paper (Pham et al., 2018), the NAS search space was extended to include recurrent highway connections as well, improving the results further. An interesting direction of future work is thus to extend the GP-NEAT search space in a similar manner; similar improvements should result.
The current experiments focused on optimizing the structure of the gated recurrent nodes, cloning them into a fixed layered architecture to form the actual network. The simple approach of forming heterogeneous layers by choosing from a set of different nodes was shown to improve the networks further. A compelling next step is thus to evolve the network architecture as well, and further, coevolve it together with the LSTM nodes (Miikkulainen et al., 2018).
6 CONCLUSION
Evolutionary optimization of LSTM nodes can be used to discover new variants that perform significantly better than the original 25-year old design. The tree-based encoding and genetic programming approach makes it possible to explore larger design spaces efficiently, resulting in structures that are more complex and more powerful than those discovered by hand or through reinforcement-learning based neural architecture search. Further, these structures are customized to each specific domain. The approach can be further enhanced by optimizing the network level as well, in addition to the node structure, by training an LSTM network to estimate the final performance of candidates instead of having to train them fully, and by encouraging novelty through an archive. Evolutionary neural architecture search is therefore a promising approach to extending the abilities of deep learning networks to ever more challenging tasks.
A APPENDIX
A.1 TREE DISTANCE
\delta(T_i, T_j) = \beta \, \frac{N_{i,j} - 2 n_{S_{i,j}}}{N_{i,j} - 2} + (1 - \beta) \, \frac{D_{i,j} - 2 d_{S_{i,j}}}{D_{i,j} - 2}, \quad (1)
where:
n_{T_x} = number of nodes in GP tree T_x,
d_{T_x} = depth of GP tree T_x,
S_{i,j} = shared tree between T_i and T_j,
N_{i,j} = n_{T_i} + n_{T_j},
D_{i,j} = d_{T_i} + d_{T_j},
β ∈ [0, 1], δ ∈ [0, 1].
On the right-hand side of Equation 1, the first term measures the difference with respect to size, while the second term measures the difference in depth. Thus, setting β = 0.5 gives an equal importance to size and depth. Two trees will have a distance of zero if their structure is the same (irrespective of the actual element types).
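A direct transcription of Equation 1 is given below. For self-containment, trees are represented as nested tuples, e.g. ("add", ("tanh",), ("sigmoid",)); this representation and the simplified shared-tree computation are illustrative assumptions, and the full method additionally compares trees under all rotations as described in Section 3.2.

# Sketch of the tree distance in Equation 1 on nested-tuple trees. The shared
# tree here keeps a node wherever both trees have a child at the same position.
def num_nodes(t):
    return 1 + sum(num_nodes(c) for c in t[1:])

def depth(t):
    return 1 + max((depth(c) for c in t[1:]), default=0)

def shared_tree(t_i, t_j):
    kids = [shared_tree(a, b) for a, b in zip(t_i[1:], t_j[1:])]
    return (t_i[0],) + tuple(kids)

def tree_distance(t_i, t_j, beta=0.5):
    # beta = 0.5 weighs the size and depth terms equally, as in the text.
    s = shared_tree(t_i, t_j)
    N = num_nodes(t_i) + num_nodes(t_j)
    D = depth(t_i) + depth(t_j)
    size_term = (N - 2 * num_nodes(s)) / (N - 2)
    depth_term = (D - 2 * depth(s)) / (D - 2)
    return beta * size_term + (1 - beta) * depth_term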
A.2 EVOLVED SOLUTIONS | 1. What is the main contribution of the paper in the field of neural architecture search?
2. What are the strengths of the proposed approach, particularly in terms of using a genetic algorithm and LSTM-based sequence-to-sequence framework?
3. What are the weaknesses of the paper regarding its experimental comparisons and absence of ablation studies?
4. How does the reviewer assess the clarity and quality of the paper's content? | Review | Review
A genetic algorithm is used to do an evolutionary architecture search to find better tree-like architectures with multiple memory cells and recurrent paths. To speed up the search, an LSTM-based seq2seq framework is also developed that can predict the final performance of the child model based on partial training results.
The algorithms and intuitions based on novelty search are interesting and there are improvements over the baseline NAS model with the same architecture search space.
However, the experiments are not compared against the latest architectures and best results. For example, on PTB there are new architectures such as those created by ENAS that result in much lower perplexity than the best reported in Table 1, for the same parameter size. While you have mentioned ENAS in the related work, the lack of a comparison makes it hard to evaluate the true benefit of this work compared with the existing literature.
There is no clear ablation study for the Meta-LSTM idea. Figure 4 provides some insights but it'd be good if some experiments were done to show clear wins over baseline methods that do not employ performance prediction.
There are many typos and missing references in the paper that need to be fixed.
ICLR | Title
From Nodes to Networks: Evolving Recurrent Neural Networks
Abstract
Gated recurrent networks such as those composed of Long Short-Term Memory (LSTM) nodes have recently been used to improve state of the art in many sequential processing tasks such as speech recognition and machine translation. However, the basic structure of the LSTM node is essentially the same as when it was first conceived 25 years ago. Recently, evolutionary and reinforcement learning mechanisms have been employed to create new variations of this structure. This paper proposes a new method, evolution of a tree-based encoding of the gated memory nodes, and shows that it makes it possible to explore new variations more effectively than other methods. The method discovers nodes with multiple recurrent paths and multiple memory cells, which lead to significant improvement in the standard language modeling benchmark task. Remarkably, this node did not perform well in another task, music modeling, but it was possible to evolve a different node that did, demonstrating that the approach discovers customized structure for each task. The paper also shows how the search process can be speeded up by training an LSTM network to estimate performance of candidate structures, and by encouraging exploration of novel solutions. Thus, evolutionary design of complex neural network structures promises to improve performance of deep learning architectures beyond human ability to do so.
1 INTRODUCTION
In many areas of engineering design, the systems have become so complex that humans can no longer optimize them, and instead, automated methods are needed. This has been true in VLSI design for a long time, but it has also become compelling in software engineering: The idea in ”programming by optimization” is that humans should design only the framework and the details should be left for automated methods such as optimization (Hoos, 2012). Recently similar limitations have started to emerge in deep learning. The neural network architectures have grown so complex that humans can no longer optimize them; hyperparameters and even entire architectures are now optimized automatically through gradient descent (et al., 2016), Bayesian parameter optimization (Malkomes et al., 2015), reinforcement learning (Zoph & Le, 2016; Baker et al., 2016), and evolutionary computation (Miikkulainen et al., 2018; Real, 2017; Fernando, 2017). Improvements from such automated methods are significant: the structure of the network matters.
This paper shows that the same approach can be used to improve architectures that have been used essentially unchanged for decades. The case in point is the Long Short-Term Memory (LSTM) network Hochreiter & Schmidhuber (1997). It was originally proposed in 1992; with the vastly increased computational power, it has recently been shown a powerful approach for sequential tasks such as speech recognition, language understanding, language generation, and machine translation, in some cases improving performance 40% over traditional methods (Bahdanau et al., 2015). The basic LSTM structure has changed very little in this process, and thorough comparisons of variants concluded that there’s little to be gained by modifying it further (Klaus et al., 2014; Jozefowicz et al., 2015).
However, very recent studies on metalearning methods such as neural architecture search and evolutionary optimization have shown that LSTM performance can be improved by complexifying it further (Zoph & Le, 2016; Miikkulainen et al., 2018). This paper develops a new method along these lines, recognizing that a large search space where significantly more complex node structures
can be constructed could be beneficial. The method is based on a tree encoding of the node structure so that it can be efficiently searched using genetic programming. Indeed, the approach discovers significantly more complex structures than before, and they indeed perform significantly better: Performance in the standard language modeling benchmark, where the goal is to predict the next word in a large language corpus, is improved by 6 perplexity points over the standard LSTM (Zaremba et al., 2014), and 0.9 perplexity points over reinforcement-learning based neural architecture search (Zoph & Le, 2016).
These improvements are obtained by constructing a homogeneous layered network architecture from a single gated recurrent node design. A second innovation in this paper shows that further improvement can be obtained by constructing such networks from multiple different designs. As a first step, allocation of different kinds of LSTM nodes into slots in the network is shown to improve performance by another 0.5 perplexity points. This result suggests that further improvements are possible with more extensive network-level search.
A third contribution of this paper is to show that evolution of neural network architectures in general can be speeded up significantly by using an LSTM network to predict the performance of candidate neural networks. After training the candidate for a few epochs, such a Meta-LSTM network predicts what performance a fully trained network would have. That prediction can then be used as fitness for the candidate, speeding up evolution fourfold in these experiments. A fourth contribution is to encourage exploration by using an archive of already-explored areas. The effect is similar to that of novelty search, but does not require a separate novelty objective, simplifying the search.
Interestingly, when the recurrent node evolved for language modeling was applied to another task, music modeling, it did not perform well. However, it was possible to evolve another solution for that task that did. As a fifth contribution, the results in this paper demonstrate that it is not simply the added complexity in the nodes that matter, but that it is the right kind, i.e. complexity customized for each task.
Thus, evolutionary optimization of complex deep learning architectures is a promising approach that can yield significant improvements beyond human ability to do so.
2 BACKGROUND AND RELATED WORK
In recent years, LSTM-based recurrent networks have been used to achieve strong results in the supervised sequence learning problems such as in speech recognition [10] and machine translation (Bahdanau et al., 2015). Further techniques have been developed to improve performance of these models through ensembling (Zaremba et al., 2014), shared embeddings (Zilly et al., 2016) and dropouts (Gal, 2015).
In contrast, previous studies have shown that modifying the LSTM design itself did not provide any significant performance gains (Bayer et al., 2009; Cho et al., 2014; Jozefowicz et al., 2015). However, a recent paper from Zoph & Le (2016) showed that policy gradients can be used to train a LSTM network to find better LSTM designs. The network is rewarded based on the performance of
the designs it generates. While this approach can be used to create new designs that perform well, its exploration ability is limited (as described in more detail in Section 3.3). The setup detailed in Zoph & Le (2016) is used for comparison in this paper. In a subsequent paper Pham et al. (2018), the same policy gradient approach is used to discover new recurrent highway networks to achieve even better results.
Neuroevolution methods like NEAT (Stanley & Miikkulainen, 2002) are an alternative to policy gradient approaches, and have also been shown to be sucessful in the architecture search problem (Miikkulainen et al., 2018; Real, 2017). For instance, Cartesian genetic programming was recently used to achieve state of the art results in CIFAR-10 (Suganuma et al., 2017). Along similar lines, a tree based variant of genetic programming is used in this paper to evolve recurrent nodes. These trees can grow in structure and can be pruned as well, thus providing a flexible representation.
Novelty search is a particularly useful technique to increase exploration in evolutionary optimization (Lehman, 2012). Novelty is often cast as a secondary objective to be optimized. It allows searching in areas that do not yield immediate benefit in terms of fitness, but make it possible to discover stepping stones that can be combined to form better solutions later. This paper proposes an alternative approach: keeping an archive of areas already visited and exploited, achieving similar goals without additional objectives to optimize.
Most architecture search methods reduce compute time by evaluating individuals only after partial training (Suganuma et al., 2017; Real, 2017). This paper proposes a meta LSTM framework to predict final network performance based on partial training results.
These techniques are described in detail in the next section.
3 METHODS
Evolving recurrent neural networks is an interesting problem because it requires searching the architecture of both the node and the network. As shown by recent research (Zoph & Le, 2016) (Zilly et al., 2016), the recurrent node in itself can be considered a deep network. In this paper, Genetic Programming (GP) is used to evolve such node architectures. In the first experiment, the overall network architecture is fixed i.e. constructed by repeating a single evolved node to form a layer (Figure1(b)). In the second, it is evolved by combining several different types of nodes into a layer (Figure1(c)). In the future more complex coevolution approaches are also possible.
Evaluating the evolved node and network is costly. Training the network for 40 epochs takes two hours on a 1080 NVIDIA GPU. A sequence to sequence model called meta-LSTM is developed to speed up evaluation. Following sections describe these methods in detail.
3.1 GENETIC PROGRAMMING FOR RECURRENT NODES
As shown in Figure1(a), a recurrent node can be represented as a tree structure, and GP can therefore be used to evolve it. However, standard GP may not be sufficiently powerful to do it. In particular, it does not maintain sufficient diversity in the population. Similar to the GP-NEAT approach by Trujillo et al. Tujillo et al. (2015), it can be augmented with ideas from NEAT speciation.
A recurrent node usually has two types of outputs. The first, denoted by symbol h in Figure1 (a), is the main recurrent output. The second, often denoted by c, is the native memory cell output. The h value is weighted and fed to three locations: (1) to the higher layer of the network at the same time step, (2) to other nodes in the network at the next time step, and (3) to the node itself at the next time step. Before propagation, h are combined with weighted activations from the previous layer, such as input word embeddings in language modeling, to generate eight node inputs (termed as base eight by Zoph & Le (2016)). In comparison, the standard LSTM node has four inputs (see Figure5(a)). The native memory cell output is fed back, without weighting, only to the node itself at the next time step. The connections within a recurrent cell are not trainable by backpropagation and they all carry a fixed weight of 1.0.
Thus, even without an explicit recurrent loop, the recurrent node can be represented as a tree. There are two type of elements in the tree: (1) linear activations with arity two (add, multiply), and (2) non-linear activations with arity one (tanh, sigmoid, relu, sin, cos).
There are three kind of mutation operations in the experiments: (1) Mutation to randomly replace an element with an element of the same type, (2) Mutation to randomly inserts a new branch at a random position in the tree. The subtree at the chosen position is used as child node of the newly created subtree. (3) Mutation to shrink the tree by choosing a branch randomly and replacing it with one of the branch’s arguments (also randomly chosen).
One limitation of standard tree is that it can have only a single output: the root. This problem can be overcome by using a modified representation of a tree that consists of Modi outputs (Zhang & Zhang, 2004). In this approach, with some probability p (termed modirate), non-root nodes can be connected to any of the possible outputs. A higher modi rate would lead to many sub-tree nodes connected to different outputs. A node is assigned modi (i.e. connected to memory cell outputs c or d) only if its sub-tree has a path from native memory cell inputs.
This representation allows searching for a wide range of recurrent node structures with GP.
3.2 SPECIATION AND CROSSOVER
One-point crossover is the most common type of crossover in GP. However, since it does not take into account the tree structure, it can often be destructive. An alternative approach, called homologous crossover (Francone et al., 1999), is designed to avoid this problem by crossing over the common regions in the tree. Similar tree structures in the population can be grouped into species, as is often done in NEAT (Tujillo et al., 2015). Speciation achieves two objectives: (1) it makes homologous crossover effective, since individuals within species are similar, and (2) it helps keep the population diverse, since selection is carried out separately in each species. A tree distance metric proposed by Tujillo et al. (2015) is used to determine how similar the trees are (see A.1 for detail).
In most GP implementations, there is a concept of the left and the right branch. A key extension in this paper is that the tree distance is computed by comparing trees after all possible tree rotations, i.e. swaps of the left and the right branch. Without such a comprehensive tree analysis, two trees that are mirror images of each other might end up into different species. This approach reduces the search space by not searching for redundant trees. It also ensures that crossover can be truly homologous Figure2 (a).
The structural mutations in GP, i.e. insert and shrink, can lead to recycling of the same strcuture across multiple generations. In order to avoid such repetitions, an archive called Hall of Shame
is maintained during evolution (Figure2(b)). This archive consists of individuals representative of stagnated species, i.e. regions in the architecture space that have already been discovered by evolution but are no longer actively searched. During reproduction, new offsprings are repeatedly mutated until they result in an individual that does not belong to Hall of Shame. Mutations that lead to Hall of Shame are not discarded, but instead used as stepping stones to generate better individuals. Such memory based evolution is similar to novelty search. However, unlike novelty search (Lehman, 2012), there is no additional fitness objective, simply an archive.
3.3 SEARCH SPACE: NODE
GP evolution of recurrent nodes starts with a simple fully connected tree. During the course of evolution, the tree size increases due to insert mutations and decreases due to shrink mutations. The maximum possible height of the tree is fixed at 15. However, there is no restriction on the maximum width of the tree.
The search space for the nodes is more varied and several orders of magnitude larger than in previous approaches. More specifically, the main differences from the state-of-the-art Neural Architecture Search (NAS) (Zoph & Le, 2016) are: (1) NAS searches for trees of fixed height 10 layers deep; GP searches for trees with height varying between six (the size of fully connected simple tree) and 15 (a constraint added to GP). (2) Unlike in NAS, different leaf elements can occur at varying depths in GP. (3) NAS adds several constraint to the tree structure. For example, a linear element in the tree is always followed by a non-linear element. GP prevents only consecutive non-linearities (they would cause loss of information since the connections within a cell are not weighted). (4) In NAS, inputs to the tree are used only once; in GP, the inputs can be used multiple times within a node.
Most gated recurrent node architectures consist of a single native memory cell (denoted by output c in Figure 1(a)). This memory cell is the main reason why LSTMs perform better than simple RNNs. One key innovation introduced in this paper is to allow multiple native memory cells within a node. The memory cell output is fed back as input in the next time step without any modification, i.e. this recurrent loop is essentially a skip connection. Adding another memory cell to the node therefore does not affect the number of trainable parameters: it only adds to the representational power of the node.
3.4 SEARCH SPACE: NETWORK
Standard recurrent networks consist of layers formed by repetition of a single type of node. However, the search for better recurrent nodes through evolution often results in solutions with similar task performance but very different structure. Forming a recurrent layer by combining such diverse node solutions is potentially a powerful idea, related to ensembling, where different models are combined to solve a task better.
In this paper, such heterogeneous recurrent networks are constructed by combining diverse evolved nodes into a layer (Figure 1(c)). A candidate population is created that consists of top-performing evolved nodes that are structurally very different from other nodes. The structural difference is calculated using the tree distance formula detailed previously. Each heterogeneous layer is constructed by selecting nodes randomly from the candidate population. Each node is repeated 20 times in a layer; thus, if the layer size is e.g. 100, it can consist of five different node types, each of cardinality 20.
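A simple sketch of this layer construction is given below; `candidate_pool` is assumed to hold the diverse evolved node types.

```python
# Hypothetical sketch of heterogeneous layer construction.
import random

def build_heterogeneous_layer(candidate_pool, layer_size=100, repeats=20):
    assert layer_size % repeats == 0
    node_types = random.sample(candidate_pool, layer_size // repeats)
    layer = []
    for node_type in node_types:
        layer.extend([node_type] * repeats)   # e.g. 5 node types x 20 copies each
    return layer
```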
The random search is an initial test of this idea. As described in Section 5, the goal for future work is to search for such heterogeneous recurrent networks using a genetic algorithm as well.
3.5 META-LSTM FOR FITNESS PREDICTION
In both node and network architecture search, it takes about two hours to fully train a network for 40 epochs. With sufficient computing power this can be done by brute force: for instance, Zoph & Le (2016) used 800 GPUs to train multiple such solutions in parallel. However, if training time could be shortened, whatever resources are available could be used better.
A common strategy for such situations is early stopping (Suganuma et al., 2017), i.e. selecting networks based on partial training. For example in case of recurrent networks, the training time
would be cut down to one fourth if the best network could be picked based on the 10th-epoch validation loss instead of the 40th. Figure 3 demonstrates that this is not a good strategy, however: networks that train faster in the initial epochs often end up with a higher final loss.
To overcome costly evaluation and to speed up evolution, a Meta-LSTM framework for fitness prediction was developed. Meta-LSTM is a sequence to sequence model (Sutskever et al., 2014) that consists of an encoder RNN and a decoder RNN (see Figure 4(a)). Validation perplexity of the first 10 epochs is provided as sequential input to the encoder, and the decoder is trained to predict the validation loss at epoch 40. Training data for these models is generated by fully training sample networks (i.e. until 40 epochs). The loss is the mean absolute error percentage at epoch 40. This error measure is used instead of mean squared error because it is unaffected by the magnitude of perplexity (poor networks can have very large perplexity values that overwhelm MSE). The hyperparameter values of the Meta-LSTM were selected based on its performance on the validation dataset. The best configuration, which achieved an error rate of 3%, is an ensemble of two seq2seq models: one with a decoder length of 30 and the other with a decoder length of 1.
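A simplified PyTorch sketch of the Meta-LSTM idea is shown below; it replaces the decoder RNN described above with a single linear head for brevity, and all hyperparameters and the placeholder data are assumptions rather than the authors' settings.

```python
# Hypothetical, simplified Meta-LSTM sketch; not the exact seq2seq model of the paper.
import torch
import torch.nn as nn

class MetaLSTM(nn.Module):
    def __init__(self, hidden=40, layers=2):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden,
                               num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, ppl_first_10):                 # (batch, 10) validation perplexities
        _, (h_n, _) = self.encoder(ppl_first_10.unsqueeze(-1))
        return self.head(h_n[-1]).squeeze(-1)        # predicted epoch-40 perplexity

def mape_loss(pred, target):
    """Mean absolute error percentage, robust to very large perplexities."""
    return torch.mean(torch.abs(pred - target) / target) * 100.0

model = MetaLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
curves = torch.rand(32, 10) * 100 + 60               # placeholder 10-epoch curves
targets = torch.rand(32) * 30 + 60                    # placeholder epoch-40 perplexities
loss = mape_loss(model(curves), targets)
loss.backward()
opt.step()
```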
Recent approaches to network performance prediction include Bayesian modeling (Klein et al., 2017) and regression curve fitting (Baker et al., 2017). The learning curves for which those methods are deployed are much simpler than the learning curves of structures discovered by evolution (see Appendix). Note that the Meta-LSTM is trained separately and only deployed for use during evolution. Thus, networks can be partially trained with a 4× speedup, and assessed with near-equal accuracy as with full training.
4 EXPERIMENTS
Neural architectures were constructed for the language modeling task, using Meta-LSTM as the predictor of training performance. In the first experiment, homogeneous networks were constructed from single evolved nodes, and in the second, heterogeneous networks that consisted of multiple evolved nodes.
4.1 NATURAL LANGUAGE MODELING TASK
Experiments focused on the task of predicting the next word in the Penn Tree Bank corpus (PTB), a well-known benchmark for language modeling (Marcus et al., 1993). LSTM architectures in general tend to do well in this task, and improving them is difficult (Zaremba et al., 2014; Jozefowicz et al., 2015; Gal, 2015). The dataset consists of 929k training words, 73k validation words, and 82k test
words, with a vocabulary of 10k words. During training, successive minibatches of size 20 are used to traverse the training set sequentially.
4.2 MUSIC MODELING TASK
Music consists of a sequence of notes that often exhibit temporal dependence. Predicting future notes based on the previous notes can therefore be treated as a sequence prediction problem. Similar to natural language, musical structure can be captured using a music language model (MLM). Just as natural language models form an important component of speech recognition systems, a polyphonic music language model is an integral part of automatic music transcription (AMT). AMT is defined as the problem of extracting a symbolic representation from music signals, usually in the form of a time-pitch representation called a piano-roll, or in a MIDI-like representation.
MLM predicts the probability distribution of the notes in the next time step. Multiple notes can be turned on at a given time step for playing chords. The input is a piano-roll representation, in the form of an 88 × T matrix M, where T is the number of timesteps, and 88 corresponds to the number of keys on a piano, between MIDI notes A0 and C8. M is binary, such that M[p, t] = 1 if and only if the pitch p is active at timestep t. In particular, held notes and repeated notes are not differentiated. The output is of the same form, except it only has T − 1 timesteps (the first timestep cannot be predicted since there is no previous information).
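The piano-roll construction and the one-step-ahead targets can be sketched as follows with numpy; the `notes` list of (MIDI pitch, onset step, offset step) tuples is an assumed intermediate representation.

```python
# Hypothetical sketch of piano-roll input/target construction.
import numpy as np

def piano_roll(notes, T, n_pitches=88, lowest_midi=21):   # MIDI 21 = A0, 108 = C8
    M = np.zeros((n_pitches, T), dtype=np.int8)
    for pitch, onset, offset in notes:
        M[pitch - lowest_midi, onset:offset] = 1           # held/repeated notes not distinguished
    return M

M = piano_roll([(60, 0, 4), (64, 2, 6)], T=8)              # C4 and E4
inputs  = M[:, :-1]     # timesteps 0 .. T-2
targets = M[:, 1:]      # the T-1 predictable timesteps
```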
The dataset piano-midi.de is used as the benchmark data. This dataset holds 307 pieces of classical piano music from various composers. It was made by manually editing the velocities and the tempo curve of quantized MIDI files in order to give them a natural interpretation and feeling (Ycart & Benetos, 2017). MIDI files encode explicit timing, pitch, velocity and instrumental information of the musical score.
4.3 NETWORK TRAINING DETAILS
During evolution, each network has two layers of 540 units each, and is unrolled for 35 steps. The hidden states are initialized to zero; the final hidden states of the current minibatch are used as the initial hidden states of the subsequent minibatch. The dropout rate is 0.4 for feedforward connections and 0.15 for recurrent connections (Gal, 2015). The network weights have an L2 penalty of 0.0001. The evolved networks are trained for 10 epochs with a learning rate of 1; after six epochs the learning rate is decreased by a factor of 0.9 after each epoch. The norm of the gradients (normalized by
minibatch size) is clipped at 10. Training a network for 10 epochs takes about 30 minutes on an NVIDIA 1080 GPU. The following experiments were conducted on 40 such GPUs.
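The training settings above translate roughly into the following PyTorch-style loop; `model`, `batches`, and `loss_fn` are assumed to be provided, and the choice of plain SGD is an assumption since the paper does not name the optimizer.

```python
# Hypothetical sketch of the evolution-time training schedule; not the authors' code.
import torch

def train_during_evolution(model, batches, loss_fn, epochs=10):
    lr = 1.0
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=1e-4)   # L2 penalty 0.0001
    for epoch in range(epochs):
        if epoch >= 6:                          # decay by 0.9 per epoch after six epochs
            lr *= 0.9
            for group in opt.param_groups:
                group["lr"] = lr
        for inputs, targets in batches:
            opt.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 10.0)      # gradient-norm clipping
            opt.step()
    return model
```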
The Meta-LSTM consists of two layers, 40 nodes each. To generate training data for it, 1000 samples from a preliminary node evolution experiment were obtained, representing a sampling of designs that evolution discovers. Each of these sample networks was trained for 40 epochs with the language modeling training set; the perplexity on the language modeling validation set was measured in the first 10 epochs and at 40 epochs. The Meta-LSTM network was then trained to predict the perplexity at 40 epochs, given the sequence of perplexity during the first 10 epochs as input. A validation set of 500 further networks was used to decide when to stop training the Meta-LSTM, and its accuracy was measured with another 500 networks.
In line with Meta-LSTM training, during evolution each candidate is trained for 10 epochs and tested on the validation set at each epoch. The sequence of such validation perplexity values is fed into the trained Meta-LSTM model to obtain its predicted perplexity at epoch 40; this prediction is then used as the fitness for that candidate. The individual with the best fitness after 30 generations is scaled to a larger network consisting of 740 nodes in each layer. This setting matches the 32 Million parameter configuration used by Zoph & Le (2016). A grid search over dropout rates is carried out to fine-tune the model. Its performance after 180 epochs of training is reported as the final result (Table 1).
4.4 EXPERIMENT 1: EVOLUTION OF RECURRENT NODES
A population of size 100 was evolved for 30 generations with a crossover rate of 0.6, insert and shrink mutation probabilities of 0.6 and 0.3, respectively, and a modi rate (i.e. the probability that a newly added node is connected to a memory cell output) of 0.3. A compatibility threshold of 0.3 was used for speciation; a species is marked as stagnated and added to the Hall of Shame if the best fitness among its candidates does not improve in four generations. Each node is allowed to have three outputs: one main recurrent output (h) and two native memory cell outputs (c and d).
The best evolved node is shown in Figure 5. The evolved node reuses inputs as well as utilizes the extra memory cell pathways. As shown in Table 1, the evolved node (called GP Node evolution in the table) achieves a test perplexity of 68.2 in the 20 Million parameter configuration on Penn Tree Bank. This is 2.8 perplexity points better than the test performance of the node discovered by NAS (Zoph (2016) in the table) in the same configuration. The evolved node also outperforms NAS in the 32 Million parameter configuration (68.1 vs. 66.5). Recent work has shown that sharing the input and output embedding weight matrices of neural network language models improves performance (Press & Wolf, 2016). The experimental results obtained after including this method are marked as shared embeddings in Table 1.
It is also important to understand the impact of using the Meta-LSTM in evolution. For this purpose, an additional evolution experiment was conducted in which each individual was assigned a fitness equal to its 10th-epoch validation perplexity. As evolution progressed, in each generation the best individual was trained fully until epoch 40. Similarly, the best individual from an evolution experiment with the Meta-LSTM enabled was fully trained. The epoch-40 validation perplexity in these two cases is plotted in Figure 4(b). This figure demonstrates that individuals selected based on the Meta-LSTM prediction perform better than the ones selected using only partial training.
4.5 EXPERIMENT 2: HETEROGENEOUS RECURRENT NETWORKS
The top 10% of the population from 10 runs of Experiment 1 was collected into a pool of 100 nodes. Out of these, the 20 that were the most diverse, i.e. had the largest tree distance from the others, were selected for constructing heterogeneous layers (as shown in Figure 1(c)). Nodes were chosen from this pool randomly to form 2000 such networks. The Meta-LSTM was again used to speed up evaluation.
After hyperparameter tuning, the best network (for the 25 Million parameter configuration) achieved a perplexity of 62.2, i.e. 0.8 points better than the homogeneous network constructed from the best evolved node. This network is also 0.7 perplexity points better than the best NAS network of double its size (54 Million parameters). Interestingly, the best heterogeneous network was also found to be more robust to hyperparameter changes than the homogeneous network. This result suggests that diversity not only
improves performance, but also adds flexibility to the internal representations. The heterogeneous network approach therefore forms a promising foundation for future work, as discussed next.
4.6 EXPERIMENT 3: MUSIC MODELING
The piano-midi.de dataset is divided into train (60%), test (20%) and validation (20%) sets. The music model consists of a single recurrent layer of width 128. The input and output layers are 88 wide each. The network is trained for 50 epochs with Adam at a learning rate of 0.01. The network is trained by minimizing cross entropy between the output of the network and the ground truth. For evaluation, F1 score is computed on the test data. F1 score is the harmonic mean of precision and recall (higher is better). Since the network is smaller, regularization is not required.
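Frame-wise F1 over the piano-roll can be computed as in the sketch below; the 0.5 threshold on the predicted note probabilities is an assumption.

```python
# Hypothetical sketch of frame-wise F1 evaluation for the music model.
import numpy as np

def frame_f1(pred_probs, target_roll, threshold=0.5):
    pred = (pred_probs >= threshold).astype(np.int8)
    tp = np.sum((pred == 1) & (target_roll == 1))
    fp = np.sum((pred == 1) & (target_roll == 0))
    fn = np.sum((pred == 0) & (target_roll == 1))
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)
```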
Note that this setup is similar to that of Ycart & Benetos (2017). The goal of this experiment is not to achieve state-of-the-art results but to perform an apples-to-apples comparison between LSTM nodes and evolved nodes (discovered for language) in a new domain, i.e. music.
In this transfer experiment, three networks were constructed: the first with LSTM nodes, the second with NAS nodes, and the third with evolved nodes. All three networks were trained under the same settings as described in the previous section. The F1 score of each of the three models is shown in Table 2. LSTM nodes outperform both NAS and evolved nodes. This result is interesting because both NAS and evolved nodes significantly outperformed LSTM nodes in the language-modeling task. This result suggests that NAS and evolved nodes are custom solutions for a specific domain, and do not necessarily transfer to other domains.
However, the framework developed for evolving recurrent nodes for natural language can be transferred to the music domain as well. The setup is the same, i.e. at each generation a population of recurrent nodes represented as trees is evaluated for its performance in the music domain. The validation performance of the network constructed from the respective tree node is used as the node fitness. The performance measure of the network in the music domain is the F1 score; therefore, it is used as the network fitness value.
The evolution parameters are the same as those used for language modeling. The Meta-LSTM is not used for this evolution experiment because the run-time of each network is relatively small (< 600 seconds). The results from evolving a custom node for music are shown in Table 2. The custom node (GP Evolution (Music)) achieves an improvement of five points in F1 score over the LSTM (Figure 6). Thus, evolution was able to discover a custom structure for the music modeling domain as well, and it was different from the structure evolved for the language domain.
5 DISCUSSION AND FUTURE WORK
The experiments in this paper demonstrate how evolutionary optimization can discover improvements to designs that have been essentially unchanged for 25 years. Because it is a population-based method, it can harness more extensive exploration than other meta-learning techniques such as reinforcement learning, Bayesian parameter optimization, and gradient descent. It is therefore in a position to discover novel, innovative solutions that are difficult to develop by hand or through gradual improvement. Remarkably, the node that performed well in language modeling performed poorly in music modeling, but evolution was able to discover a different node that performed well in music. Apparently, the approach discovers regularities in each task and develops node structures that take advantage of them, thus customizing the nodes separately for each domain. Analyzing what those regularities are and how the structures encode them is an interesting direction of future work.
The GP-NEAT evolutionary search method in this paper is run in the same search space used by NAS (Zoph & Le, 2016), resulting in significant improvements. In a recent paper (Pham et al., 2018), the NAS search space was extended to include recurrent highway connections as well, improving the results further. An interesting direction of future work is thus to extend the GP-NEAT search space in a similar manner; similar improvements should result.
The current experiments focused on optimizing the structure of the gated recurrent nodes, cloning them into a fixed layered architecture to form the actual network. The simple approach of forming heterogeneous layers by choosing from a set of different nodes was shown to improve the networks further. A compelling next step is thus to evolve the network architecture as well, and further, coevolve it together with the LSTM nodes (Miikkulainen et al., 2018).
6 CONCLUSION
Evolutionary optimization of LSTM nodes can be used to discover new variants that perform significantly better than the original 25-year-old design. The tree-based encoding and genetic programming approach makes it possible to explore larger design spaces efficiently, resulting in structures that are more complex and more powerful than those discovered by hand or through reinforcement-learning-based neural architecture search. Further, these structures are customized to each specific domain. The approach can be further enhanced by optimizing the network level as well, in addition to the node structure, by training an LSTM network to estimate the final performance of candidates instead of having to train them fully, and by encouraging novelty through an archive. Evolutionary neural architecture search is therefore a promising approach to extending the abilities of deep learning networks to ever more challenging tasks.
A APPENDIX
A.1 TREE DISTANCE
$$\delta(T_i, T_j) = \beta\,\frac{N_{i,j} - 2\,n_{S_{i,j}}}{N_{i,j} - 2} + (1 - \beta)\,\frac{D_{i,j} - 2\,d_{S_{i,j}}}{D_{i,j} - 2}, \qquad (1)$$
where:
$n_{T_x}$ = number of nodes in GP tree $T_x$,
$d_{T_x}$ = depth of GP tree $T_x$,
$S_{i,j}$ = shared tree between $T_i$ and $T_j$,
$N_{i,j} = n_{T_i} + n_{T_j}$,
$D_{i,j} = d_{T_i} + d_{T_j}$,
$\beta \in [0, 1]$, $\delta \in [0, 1]$.
On the right-hand side of Equation 1, the first term measures the difference with respect to size, while the second term measures the difference in depth. Thus, setting β = 0.5 gives an equal importance to size and depth. Two trees will have a distance of zero if their structure is the same (irrespective of the actual element types).
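A direct transcription of Equation 1 with illustrative numbers (not values taken from the paper) shows the behaviour of the metric:

```python
# Equation 1 transcribed directly; the example values are made up for illustration.
def tree_distance(n_i, n_j, n_s, d_i, d_j, d_s, beta=0.5):
    N, D = n_i + n_j, d_i + d_j
    return beta * (N - 2 * n_s) / (N - 2) + (1 - beta) * (D - 2 * d_s) / (D - 2)

# Two structurally identical trees (shared tree equals either one) -> distance 0:
print(tree_distance(n_i=9, n_j=9, n_s=9, d_i=4, d_j=4, d_s=4))   # 0.0
# Two trees sharing only a small sub-structure -> distance close to 1:
print(tree_distance(n_i=9, n_j=7, n_s=2, d_i=4, d_j=3, d_s=1))   # ~0.93
```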
A.2 EVOLVED SOLUTIONS | 1. What is the focus of the paper, and what are the proposed contributions regarding LSTM architecture search?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its simplicity and effectiveness?
3. What are some concerns or questions regarding the experimental results and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for improving the paper or its methodology? | Review | Review
This paper explores evolutionary optimization for LSTM architecture search. To better explore the search space, the authors use tree-based encoding and Genetic Programming (GP) with homologous crossover, a tree distance metric, etc. The search process is simple and fast. However, there is a lack of experiments and analysis to show the effectiveness of the search algorithm and of the architecture found by the approach.
Remarks:
The content provided in this paper is not enough to convince the reader that this is a better approach for RNN architecture search and for sequence modeling tasks.
This paper requires more comparisons and analysis.
Experiments on Penn Tree Bank
- The datasets in both experiments are too small to assess the effect of the new architecture. More experiments on larger datasets, e.g., WikiText-2, are needed.
- In the paper "On the state of the art of evaluation in neural language models", Melis et al., 2018 reported improvement using classic LSTM over other variations of LSTM. They intensively compared the performance of classic LSTM, NAS, and RHN (Recurrent Highway Network) as authors did. Melis et al. reported LSTM (with depth 1) can already achieve a test perplexity of 59.6 with 10M parameters and 59.5 with 24M parameters.
- Could you analyze what is new in the evolved LSTM architecture compared to the classic LSTM and NAS? Figures 5 and 6 do not make clear how the final architectures differ and how the important/useful nodes change for different tasks.
- Recently, a number of architecture search algorithms have been introduced, but there is only one comparison in this direction (Zoph & Le, 2016). It is important to compare this approach with other architecture search methods. |
ICLR | Title
From Nodes to Networks: Evolving Recurrent Neural Networks
Abstract
Gated recurrent networks such as those composed of Long Short-Term Memory (LSTM) nodes have recently been used to improve the state of the art in many sequential processing tasks such as speech recognition and machine translation. However, the basic structure of the LSTM node is essentially the same as when it was first conceived 25 years ago. Recently, evolutionary and reinforcement learning mechanisms have been employed to create new variations of this structure. This paper proposes a new method, evolution of a tree-based encoding of the gated memory nodes, and shows that it makes it possible to explore new variations more effectively than other methods. The method discovers nodes with multiple recurrent paths and multiple memory cells, which lead to significant improvement in the standard language modeling benchmark task. Remarkably, this node did not perform well in another task, music modeling, but it was possible to evolve a different node that did, demonstrating that the approach discovers customized structure for each task. The paper also shows how the search process can be sped up by training an LSTM network to estimate performance of candidate structures, and by encouraging exploration of novel solutions. Thus, evolutionary design of complex neural network structures promises to improve performance of deep learning architectures beyond human ability to do so.
1 INTRODUCTION
In many areas of engineering design, the systems have become so complex that humans can no longer optimize them, and instead, automated methods are needed. This has been true in VLSI design for a long time, but it has also become compelling in software engineering: the idea in "programming by optimization" is that humans should design only the framework and the details should be left for automated methods such as optimization (Hoos, 2012). Recently similar limitations have started to emerge in deep learning. The neural network architectures have grown so complex that humans can no longer optimize them; hyperparameters and even entire architectures are now optimized automatically through gradient descent (et al., 2016), Bayesian parameter optimization (Malkomes et al., 2015), reinforcement learning (Zoph & Le, 2016; Baker et al., 2016), and evolutionary computation (Miikkulainen et al., 2018; Real, 2017; Fernando, 2017). Improvements from such automated methods are significant: the structure of the network matters.
This paper shows that the same approach can be used to improve architectures that have been used essentially unchanged for decades. The case in point is the Long Short-Term Memory (LSTM) network (Hochreiter & Schmidhuber, 1997). It was originally proposed in 1992; with the vastly increased computational power, it has recently been shown to be a powerful approach for sequential tasks such as speech recognition, language understanding, language generation, and machine translation, in some cases improving performance 40% over traditional methods (Bahdanau et al., 2015). The basic LSTM structure has changed very little in this process, and thorough comparisons of variants concluded that there is little to be gained by modifying it further (Klaus et al., 2014; Jozefowicz et al., 2015).
However, very recent studies on metalearning methods such as neural architecture search and evolutionary optimization have shown that LSTM performance can be improved by complexifying it further (Zoph & Le, 2016; Miikkulainen et al., 2018). This paper develops a new method along these lines, recognizing that a large search space where significantly more complex node structures
can be constructed could be beneficial. The method is based on a tree encoding of the node structure so that it can be efficiently searched using genetic programming. Indeed, the approach discovers significantly more complex structures than before, and they perform significantly better: performance in the standard language modeling benchmark, where the goal is to predict the next word in a large language corpus, is improved by 6 perplexity points over the standard LSTM (Zaremba et al., 2014), and 0.9 perplexity points over reinforcement-learning based neural architecture search (Zoph & Le, 2016).
These improvements are obtained by constructing a homogeneous layered network architecture from a single gated recurrent node design. A second innovation in this paper shows that further improvement can be obtained by constructing such networks from multiple different designs. As a first step, allocation of different kinds of LSTM nodes into slots in the network is shown to improve performance by another 0.5 perplexity points. This result suggests that further improvements are possible with more extensive network-level search.
A third contribution of this paper is to show that evolution of neural network architectures in general can be sped up significantly by using an LSTM network to predict the performance of candidate neural networks. After training the candidate for a few epochs, such a Meta-LSTM network predicts what performance a fully trained network would have. That prediction can then be used as fitness for the candidate, speeding up evolution fourfold in these experiments. A fourth contribution is to encourage exploration by using an archive of already-explored areas. The effect is similar to that of novelty search, but does not require a separate novelty objective, simplifying the search.
Interestingly, when the recurrent node evolved for language modeling was applied to another task, music modeling, it did not perform well. However, it was possible to evolve another solution for that task that did. As a fifth contribution, the results in this paper demonstrate that it is not simply the added complexity in the nodes that matter, but that it is the right kind, i.e. complexity customized for each task.
Thus, evolutionary optimization of complex deep learning architectures is a promising approach that can yield significant improvements beyond human ability to do so.
2 BACKGROUND AND RELATED WORK
In recent years, LSTM-based recurrent networks have been used to achieve strong results in the supervised sequence learning problems such as in speech recognition [10] and machine translation (Bahdanau et al., 2015). Further techniques have been developed to improve performance of these models through ensembling (Zaremba et al., 2014), shared embeddings (Zilly et al., 2016) and dropouts (Gal, 2015).
In contrast, previous studies have shown that modifying the LSTM design itself did not provide any significant performance gains (Bayer et al., 2009; Cho et al., 2014; Jozefowicz et al., 2015). However, a recent paper from Zoph & Le (2016) showed that policy gradients can be used to train an LSTM network to find better LSTM designs. The network is rewarded based on the performance of
the designs it generates. While this approach can be used to create new designs that perform well, its exploration ability is limited (as described in more detail in Section 3.3). The setup detailed in Zoph & Le (2016) is used for comparison in this paper. In a subsequent paper Pham et al. (2018), the same policy gradient approach is used to discover new recurrent highway networks to achieve even better results.
Neuroevolution methods like NEAT (Stanley & Miikkulainen, 2002) are an alternative to policy gradient approaches, and have also been shown to be successful in the architecture search problem (Miikkulainen et al., 2018; Real, 2017). For instance, Cartesian genetic programming was recently used to achieve state-of-the-art results in CIFAR-10 (Suganuma et al., 2017). Along similar lines, a tree-based variant of genetic programming is used in this paper to evolve recurrent nodes. These trees can grow in structure and can be pruned as well, thus providing a flexible representation.
Novelty search is a particularly useful technique to increase exploration in evolutionary optimization (Lehman, 2012). Novelty is often cast as a secondary objective to be optimized. It allows searching in areas that do not yield immediate benefit in terms of fitness, but make it possible to discover stepping stones that can be combined to form better solutions later. This paper proposes an alternative approach: keeping an archive of areas already visited and exploited, achieving similar goals without additional objectives to optimize.
Most architecture search methods reduce compute time by evaluating individuals only after partial training (Suganuma et al., 2017; Real, 2017). This paper proposes a meta LSTM framework to predict final network performance based on partial training results.
These techniques are described in detail in the next section.
3 METHODS
Evolving recurrent neural networks is an interesting problem because it requires searching the architecture of both the node and the network. As shown by recent research (Zoph & Le, 2016; Zilly et al., 2016), the recurrent node in itself can be considered a deep network. In this paper, Genetic Programming (GP) is used to evolve such node architectures. In the first experiment, the overall network architecture is fixed, i.e. constructed by repeating a single evolved node to form a layer (Figure 1(b)). In the second, it is evolved by combining several different types of nodes into a layer (Figure 1(c)). In the future more complex coevolution approaches are also possible.
Evaluating the evolved node and network is costly. Training a network for 40 epochs takes two hours on an NVIDIA 1080 GPU. A sequence to sequence model called Meta-LSTM is developed to speed up evaluation. The following sections describe these methods in detail.
3.1 GENETIC PROGRAMMING FOR RECURRENT NODES
As shown in Figure 1(a), a recurrent node can be represented as a tree structure, and GP can therefore be used to evolve it. However, standard GP may not be sufficiently powerful to do it. In particular, it does not maintain sufficient diversity in the population. Similar to the GP-NEAT approach of Trujillo et al. (2015), it can be augmented with ideas from NEAT speciation.
A recurrent node usually has two types of outputs. The first, denoted by the symbol h in Figure 1(a), is the main recurrent output. The second, often denoted by c, is the native memory cell output. The h value is weighted and fed to three locations: (1) to the higher layer of the network at the same time step, (2) to other nodes in the network at the next time step, and (3) to the node itself at the next time step. Before propagation, h is combined with weighted activations from the previous layer, such as input word embeddings in language modeling, to generate eight node inputs (termed base eight by Zoph & Le (2016)). In comparison, the standard LSTM node has four inputs (see Figure 5(a)). The native memory cell output is fed back, without weighting, only to the node itself at the next time step. The connections within a recurrent cell are not trainable by backpropagation and they all carry a fixed weight of 1.0.
Thus, even without an explicit recurrent loop, the recurrent node can be represented as a tree. There are two types of elements in the tree: (1) linear activations with arity two (add, multiply), and (2) non-linear activations with arity one (tanh, sigmoid, relu, sin, cos).
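One plausible way to realize the weighted "base eight" node inputs mentioned above is sketched below in PyTorch; the module name, shapes, and parametrization are assumptions, not taken from the paper.

```python
# Hypothetical sketch of forming the eight weighted node inputs.
import torch
import torch.nn as nn

class NodeInputs(nn.Module):
    def __init__(self, in_dim, hidden, n_inputs=8):
        super().__init__()
        self.n_inputs = n_inputs
        self.w_x = nn.Linear(in_dim, n_inputs * hidden)   # weights on previous-layer activations
        self.w_h = nn.Linear(hidden, n_inputs * hidden)   # weights on the recurrent h

    def forward(self, x_t, h_prev):
        z = self.w_x(x_t) + self.w_h(h_prev)
        return z.chunk(self.n_inputs, dim=-1)              # the eight inputs fed to the evolved tree
```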
There are three kinds of mutation operations in the experiments: (1) a mutation that randomly replaces an element with an element of the same type, (2) a mutation that randomly inserts a new branch at a random position in the tree (the subtree at the chosen position is used as a child node of the newly created subtree), and (3) a mutation that shrinks the tree by choosing a branch randomly and replacing it with one of the branch's arguments (also randomly chosen).
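These operators can be sketched as follows on a minimal tree representation; the element pools and the simplification of the insert mutation to a unary element are assumptions made for illustration.

```python
# Hypothetical sketch of the three mutation operators on a simple tree class.
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class TreeNode:
    op: str
    children: List["TreeNode"] = field(default_factory=list)

LINEAR = ["add", "mul"]                                   # arity two
NONLINEAR = ["tanh", "sigmoid", "relu", "sin", "cos"]     # arity one

def mutate_replace(node: TreeNode) -> None:
    if len(node.children) == 2:
        node.op = random.choice(LINEAR)                   # same type: linear for linear
    elif len(node.children) == 1:
        node.op = random.choice(NONLINEAR)                # same type: non-linear for non-linear

def mutate_insert(node: TreeNode) -> None:
    """Insert a new (here: unary) element above a randomly chosen child subtree."""
    if node.children:
        i = random.randrange(len(node.children))
        node.children[i] = TreeNode(random.choice(NONLINEAR), [node.children[i]])

def mutate_shrink(node: TreeNode) -> None:
    """Replace a randomly chosen branch with one of that branch's own arguments."""
    if node.children:
        i = random.randrange(len(node.children))
        branch = node.children[i]
        if branch.children:
            node.children[i] = random.choice(branch.children)
```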
One limitation of a standard tree is that it can have only a single output: the root. This problem can be overcome by using a modified tree representation with Modi outputs (Zhang & Zhang, 2004). In this approach, with some probability p (termed the modi rate), non-root nodes can be connected to any of the possible outputs. A higher modi rate leads to more sub-tree nodes being connected to different outputs. A node is assigned a modi output (i.e. connected to memory cell outputs c or d) only if its sub-tree has a path from the native memory cell inputs.
This representation allows searching for a wide range of recurrent node structures with GP.
3.2 SPECIATION AND CROSSOVER
One-point crossover is the most common type of crossover in GP. However, since it does not take into account the tree structure, it can often be destructive. An alternative approach, called homologous crossover (Francone et al., 1999), is designed to avoid this problem by crossing over the common regions in the tree. Similar tree structures in the population can be grouped into species, as is often done in NEAT (Trujillo et al., 2015). Speciation achieves two objectives: (1) it makes homologous crossover effective, since individuals within species are similar, and (2) it helps keep the population diverse, since selection is carried out separately in each species. A tree distance metric proposed by Trujillo et al. (2015) is used to determine how similar the trees are (see A.1 for detail).
In most GP implementations, there is a concept of the left and the right branch. A key extension in this paper is that the tree distance is computed by comparing trees after all possible tree rotations, i.e. swaps of the left and the right branch. Without such a comprehensive tree analysis, two trees that are mirror images of each other might end up in different species. This approach reduces the search space by not searching for redundant trees. It also ensures that crossover can be truly homologous (Figure 2(a)).
The structural mutations in GP, i.e. insert and shrink, can lead to recycling of the same structure across multiple generations. In order to avoid such repetitions, an archive called Hall of Shame is maintained during evolution (Figure 2(b)). This archive consists of individuals representative of stagnated species, i.e. regions in the architecture space that have already been discovered by evolution but are no longer actively searched. During reproduction, new offspring are repeatedly mutated until they result in an individual that does not belong to the Hall of Shame. Mutations that lead to the Hall of Shame are not discarded, but instead used as stepping stones to generate better individuals. Such memory-based evolution is similar to novelty search. However, unlike novelty search (Lehman, 2012), there is no additional fitness objective, simply an archive.
3.3 SEARCH SPACE: NODE
GP evolution of recurrent nodes starts with a simple fully connected tree. During the course of evolution, the tree size increases due to insert mutations and decreases due to shrink mutations. The maximum possible height of the tree is fixed at 15. However, there is no restriction on the maximum width of the tree.
The search space for the nodes is more varied and several orders of magnitude larger than in previous approaches. More specifically, the main differences from the state-of-the-art Neural Architecture Search (NAS) (Zoph & Le, 2016) are: (1) NAS searches for trees of fixed height, 10 layers deep; GP searches for trees with height varying between six (the size of the fully connected simple tree) and 15 (a constraint added to GP). (2) Unlike in NAS, different leaf elements can occur at varying depths in GP. (3) NAS adds several constraints to the tree structure. For example, a linear element in the tree is always followed by a non-linear element. GP prevents only consecutive non-linearities (they would cause loss of information since the connections within a cell are not weighted). (4) In NAS, inputs to the tree are used only once; in GP, the inputs can be used multiple times within a node.
Most gated recurrent node architectures consist of a single native memory cell (denoted by output c in Figure 1(a)). This memory cell is the main reason why LSTMs perform better than simple RNNs. One key innovation introduced in this paper is to allow multiple native memory cells within a node. The memory cell output is fed back as input in the next time step without any modification, i.e. this recurrent loop is essentially a skip connection. Adding another memory cell to the node therefore does not affect the number of trainable parameters: it only adds to the representational power of the node.
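The role of the extra memory cell as an unweighted skip connection can be sketched as below; `node_fn`, which evaluates the evolved tree given the weighted inputs and the previous cell states, is an assumed helper.

```python
# Hypothetical sketch of unrolling a node with two native memory cells (c and d).
def unroll(node_fn, weighted_inputs, steps):
    h, c, d = 0.0, 0.0, 0.0
    outputs = []
    for t in range(steps):
        # c and d are fed back unmodified (weight 1.0), so they add no parameters.
        h, c, d = node_fn(weighted_inputs[t], c_prev=c, d_prev=d)
        outputs.append(h)        # only h is weighted and propagated onward
    return outputs
```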
3.4 SEARCH SPACE: NETWORK
Standard recurrent networks consist of layers formed by repetition of a single type of node. However, the search for better recurrent nodes through evolution often results in solutions with similar task performance but very different structure. Forming a recurrent layer by combining such diverse node solutions is potentially a powerful idea, related to ensembling, where different models are combined to solve a task better.
In this paper, such heterogeneous recurrent networks are constructed by combining diverse evolved nodes into a layer (Figure 1(c)). A candidate population is created that consists of top-performing evolved nodes that are structurally very different from other nodes. The structural difference is calculated using the tree distance formula detailed previously. Each heterogeneous layer is constructed by selecting nodes randomly from the candidate population. Each node is repeated 20 times in a layer; thus, if the layer size is e.g. 100, it can consist of five different node types, each of cardinality 20.
The random search is an initial test of this idea. As described in Section 5, the goal for future work is to search for such heterogeneous recurrent networks using a genetic algorithm as well.
3.5 META-LSTM FOR FITNESS PREDICTION
In both node and network architecture search, it takes about two hours to fully train a network for 40 epochs. With sufficient computing power this can be done by brute force: for instance, Zoph & Le (2016) used 800 GPUs to train multiple such solutions in parallel. However, if training time could be shortened, whatever resources are available could be used better.
A common strategy for such situations is early stopping (Suganuma et al., 2017), i.e. selecting networks based on partial training. For example in case of recurrent networks, the training time
would be cut down to one fourth if the best network could be picked based on the 10th-epoch validation loss instead of the 40th. Figure 3 demonstrates that this is not a good strategy, however: networks that train faster in the initial epochs often end up with a higher final loss.
To overcome costly evaluation and to speed up evolution, a Meta-LSTM framework for fitness prediction was developed. Meta-LSTM is a sequence to sequence model (Sutskever et al., 2014) that consists of an encoder RNN and a decoder RNN (see Figure 4(a)). Validation perplexity of the first 10 epochs is provided as sequential input to the encoder, and the decoder is trained to predict the validation loss at epoch 40. Training data for these models is generated by fully training sample networks (i.e. until 40 epochs). The loss is the mean absolute error percentage at epoch 40. This error measure is used instead of mean squared error because it is unaffected by the magnitude of perplexity (poor networks can have very large perplexity values that overwhelm MSE). The hyperparameter values of the Meta-LSTM were selected based on its performance on the validation dataset. The best configuration, which achieved an error rate of 3%, is an ensemble of two seq2seq models: one with a decoder length of 30 and the other with a decoder length of 1.
Recent approaches to network performance prediction include Bayesian modeling (Klein et al., 2017) and regression curve fitting (Baker et al., 2017). The learning curves for which those methods are deployed are much simpler than the learning curves of structures discovered by evolution (see Appendix). Note that the Meta-LSTM is trained separately and only deployed for use during evolution. Thus, networks can be partially trained with a 4× speedup, and assessed with near-equal accuracy as with full training.
4 EXPERIMENTS
Neural architectures were constructed for the language modeling task, using Meta-LSTM as the predictor of training performance. In the first experiment, homogeneous networks were constructed from single evolved nodes, and in the second, heterogeneous networks that consisted of multiple evolved nodes.
4.1 NATURAL LANGUAGE MODELING TASK
Experiments focused on the task of predicting the next word in the Penn Tree Bank corpus (PTB), a well-known benchmark for language modeling (Marcus et al., 1993). LSTM architectures in general tend to do well in this task, and improving them is difficult (Zaremba et al., 2014; Jozefowicz et al., 2015; Gal, 2015). The dataset consists of 929k training words, 73k validation words, and 82k test
words, with a vocabulary of 10k words. During training, successive minibatches of size 20 are used to traverse the training set sequentially.
4.2 MUSIC MODELING TASK
Music consists of a sequence of notes that often exhibit temporal dependence. Predicting future notes based on the previous notes can therefore be treated as a sequence prediction problem. Similar to natural language, musical structure can be captured using a music language model (MLM). Just as natural language models form an important component of speech recognition systems, a polyphonic music language model is an integral part of automatic music transcription (AMT). AMT is defined as the problem of extracting a symbolic representation from music signals, usually in the form of a time-pitch representation called a piano-roll, or in a MIDI-like representation.
MLM predicts the probability distribution of the notes in the next time step. Multiple notes can be turned on at a given time step for playing chords. The input is a piano-roll representation, in the form of an 88 × T matrix M, where T is the number of timesteps, and 88 corresponds to the number of keys on a piano, between MIDI notes A0 and C8. M is binary, such that M[p, t] = 1 if and only if the pitch p is active at timestep t. In particular, held notes and repeated notes are not differentiated. The output is of the same form, except it only has T − 1 timesteps (the first timestep cannot be predicted since there is no previous information).
The dataset piano-midi.de is used as the benchmark data. This dataset holds 307 pieces of classical piano music from various composers. It was made by manually editing the velocities and the tempo curve of quantized MIDI files in order to give them a natural interpretation and feeling (Ycart & Benetos, 2017). MIDI files encode explicit timing, pitch, velocity and instrumental information of the musical score.
4.3 NETWORK TRAINING DETAILS
During evolution, each network has two layers of 540 units each, and is unrolled for 35 steps. The hidden states are initialized to zero; the final hidden states of the current minibatch are used as the initial hidden states of the subsequent minibatch. The dropout rate is 0.4 for feedforward connections and 0.15 for recurrent connections (Gal, 2015). The network weights have an L2 penalty of 0.0001. The evolved networks are trained for 10 epochs with a learning rate of 1; after six epochs the learning rate is decreased by a factor of 0.9 after each epoch. The norm of the gradients (normalized by
minibatch size) is clipped at 10. Training a network for 10 epochs takes about 30 minutes on an NVIDIA 1080 GPU. The following experiments were conducted on 40 such GPUs.
The Meta-LSTM consists of two layers, 40 nodes each. To generate training data for it, 1000 samples from a preliminary node evolution experiment were obtained, representing a sampling of designs that evolution discovers. Each of these sample networks was trained for 40 epochs with the language modeling training set; the perplexity on the language modeling validation set was measured in the first 10 epochs and at 40 epochs. The Meta-LSTM network was then trained to predict the perplexity at 40 epochs, given the sequence of perplexity during the first 10 epochs as input. A validation set of 500 further networks was used to decide when to stop training the Meta-LSTM, and its accuracy was measured with another 500 networks.
In line with Meta-LSTM training, during evolution each candidate is trained for 10 epochs and tested on the validation set at each epoch. The sequence of such validation perplexity values is fed into the trained Meta-LSTM model to obtain its predicted perplexity at epoch 40; this prediction is then used as the fitness for that candidate. The individual with the best fitness after 30 generations is scaled to a larger network consisting of 740 nodes in each layer. This setting matches the 32 Million parameter configuration used by Zoph & Le (2016). A grid search over dropout rates is carried out to fine-tune the model. Its performance after 180 epochs of training is reported as the final result (Table 1).
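The fitness assignment during evolution reduces to a few lines; `train_10_epochs` and `meta_lstm` are assumed callables wrapping the partial training and the trained predictor, and the sign convention is an illustrative choice.

```python
# Hypothetical sketch of Meta-LSTM-based fitness assignment during evolution.
def candidate_fitness(candidate, train_10_epochs, meta_lstm):
    val_ppl_curve = train_10_epochs(candidate)      # 10 validation perplexities
    predicted_ppl_40 = meta_lstm(val_ppl_curve)     # predicted epoch-40 perplexity
    return -predicted_ppl_40                        # lower perplexity -> higher fitness
```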
4.4 EXPERIMENT 1: EVOLUTION OF RECURRENT NODES
A population of size 100 was evolved for 30 generations with a crossover rate of 0.6, insert and shrink mutation probabilities of 0.6 and 0.3, respectively, and a modi rate (i.e. the probability that a newly added node is connected to a memory cell output) of 0.3. A compatibility threshold of 0.3 was used for speciation; a species is marked as stagnated and added to the Hall of Shame if the best fitness among its candidates does not improve in four generations. Each node is allowed to have three outputs: one main recurrent output (h) and two native memory cell outputs (c and d).
The best evolved node is shown in Figure 5. The evolved node reuses inputs as well as utilizes the extra memory cell pathways. As shown in Table 1, the evolved node (called GP Node evolution in the table) achieves a test perplexity of 68.2 in the 20 Million parameter configuration on Penn Tree Bank. This is 2.8 perplexity points better than the test performance of the node discovered by NAS (Zoph (2016) in the table) in the same configuration. The evolved node also outperforms NAS in the 32 Million parameter configuration (68.1 vs. 66.5). Recent work has shown that sharing the input and output embedding weight matrices of neural network language models improves performance (Press & Wolf, 2016). The experimental results obtained after including this method are marked as shared embeddings in Table 1.
It is also important to understand the impact of using the Meta-LSTM in evolution. For this purpose, an additional evolution experiment was conducted in which each individual was assigned a fitness equal to its 10th-epoch validation perplexity. As evolution progressed, in each generation the best individual was trained fully until epoch 40. Similarly, the best individual from an evolution experiment with the Meta-LSTM enabled was fully trained. The epoch-40 validation perplexity in these two cases is plotted in Figure 4(b). This figure demonstrates that individuals selected based on the Meta-LSTM prediction perform better than the ones selected using only partial training.
4.5 EXPERIMENT 2: HETEROGENEOUS RECURRENT NETWORKS
The top 10% of the population from 10 runs of Experiment 1 was collected into a pool of 100 nodes. Out of these, the 20 that were the most diverse, i.e. had the largest tree distance from the others, were selected for constructing heterogeneous layers (as shown in Figure 1(c)). Nodes were chosen from this pool randomly to form 2000 such networks. The Meta-LSTM was again used to speed up evaluation.
After hyperparameter tuning, the best network (for the 25 Million parameter configuration) achieved a perplexity of 62.2, i.e. 0.8 points better than the homogeneous network constructed from the best evolved node. This network is also 0.7 perplexity points better than the best NAS network of double its size (54 Million parameters). Interestingly, the best heterogeneous network was also found to be more robust to hyperparameter changes than the homogeneous network. This result suggests that diversity not only
improves performance, but also adds flexibility to the internal representations. The heterogeneous network approach therefore forms a promising foundation for future work, as discussed next.
4.6 EXPERIMENT 3: MUSIC MODELING
The piano-midi.de dataset is divided into train (60%), test (20%) and validation (20%) sets. The music model consists of a single recurrent layer of width 128. The input and output layers are 88 wide each. The network is trained for 50 epochs with Adam at a learning rate of 0.01. The network is trained by minimizing cross entropy between the output of the network and the ground truth. For evaluation, F1 score is computed on the test data. F1 score is the harmonic mean of precision and recall (higher is better). Since the network is smaller, regularization is not required.
Note that this setup is similar to that of Ycart & Benetos (2017). The goal of this experiment is not to achieve state-of-the-art results but to perform an apples-to-apples comparison between LSTM nodes and evolved nodes (discovered for language) in a new domain, i.e. music.
In this transfer experiment, three networks were constructed: the first with LSTM nodes, the second with NAS nodes, and the third with evolved nodes. All three networks were trained under the same settings as described in the previous section. The F1 score of each of the three models is shown in Table 2. LSTM nodes outperform both NAS and evolved nodes. This result is interesting because both NAS and evolved nodes significantly outperformed LSTM nodes in the language-modeling task. This result suggests that NAS and evolved nodes are custom solutions for a specific domain, and do not necessarily transfer to other domains.
However, the framework developed for evolving recurrent nodes for natural language can be transferred to the music domain as well. The setup is the same, i.e. at each generation a population of recurrent nodes represented as trees is evaluated for its performance in the music domain. The validation performance of the network constructed from the respective tree node is used as the node fitness. The performance measure of the network in the music domain is the F1 score; therefore, it is used as the network fitness value.
The evolution parameters are the same as those used for language modeling. The Meta-LSTM is not used for this evolution experiment because the run-time of each network is relatively small (< 600 seconds). The results from evolving a custom node for music are shown in Table 2. The custom node (GP Evolution (Music)) achieves an improvement of five points in F1 score over the LSTM (Figure 6). Thus, evolution was able to discover a custom structure for the music modeling domain as well, and it was different from the structure evolved for the language domain.
5 DISCUSSION AND FUTURE WORK
The experiments in this paper demonstrate how evolutionary optimization can discover improvements to designs that have been essentially unchanged for 25 years. Because it is a population-based method, it can harness more extensive exploration than other meta-learning techniques such as reinforcement learning, Bayesian parameter optimization, and gradient descent. It is therefore in a position to discover novel, innovative solutions that are difficult to develop by hand or through gradual improvement. Remarkably, the node that performed well in language modeling performed poorly in music modeling, but evolution was able to discover a different node that performed well in music. Apparently, the approach discovers regularities in each task and develops node structures that take advantage of them, thus customizing the nodes separately for each domain. Analyzing what those regularities are and how the structures encode them is an interesting direction of future work.
The GP-NEAT evolutionary search method in this paper is run in the same search space used by NAS (Zoph & Le, 2016), resulting in significant improvements. In a recent paper (Pham et al., 2018), the NAS search space was extended to include recurrent highway connections as well, improving the results further. An interesting direction of future work is thus to extend the GP-NEAT search space in a similar manner; similar improvements should result.
The current experiments focused on optimizing the structure of the gated recurrent nodes, cloning them into a fixed layered architecture to form the actual network. The simple approach of forming heterogeneous layers by choosing from a set of different nodes was shown to improve the networks further. A compelling next step is thus to evolve the network architecture as well, and further, coevolve it together with the LSTM nodes (Miikkulainen et al., 2018).
6 CONCLUSION
Evolutionary optimization of LSTM nodes can be used to discover new variants that perform significantly better than the original 25-year-old design. The tree-based encoding and genetic programming approach makes it possible to explore larger design spaces efficiently, resulting in structures that are more complex and more powerful than those discovered by hand or through reinforcement-learning-based neural architecture search. Further, these structures are customized to each specific domain. The approach can be further enhanced by optimizing the network level as well, in addition to the node structure, by training an LSTM network to estimate the final performance of candidates instead of having to train them fully, and by encouraging novelty through an archive. Evolutionary neural architecture search is therefore a promising approach to extending the abilities of deep learning networks to ever more challenging tasks.
A APPENDIX
A.1 TREE DISTANCE
$$\delta(T_i, T_j) = \beta\,\frac{N_{i,j} - 2\,n_{S_{i,j}}}{N_{i,j} - 2} + (1 - \beta)\,\frac{D_{i,j} - 2\,d_{S_{i,j}}}{D_{i,j} - 2}, \qquad (1)$$
where:
$n_{T_x}$ = number of nodes in GP tree $T_x$,
$d_{T_x}$ = depth of GP tree $T_x$,
$S_{i,j}$ = shared tree between $T_i$ and $T_j$,
$N_{i,j} = n_{T_i} + n_{T_j}$,
$D_{i,j} = d_{T_i} + d_{T_j}$,
$\beta \in [0, 1]$, $\delta \in [0, 1]$.
On the right-hand side of Equation 1, the first term measures the difference with respect to size, while the second term measures the difference in depth. Thus, setting β = 0.5 gives an equal importance to size and depth. Two trees will have a distance of zero if their structure is the same (irrespective of the actual element types).
A.2 EVOLVED SOLUTIONS | 1. What is the main contribution of the paper regarding the application of tree-based genetic programming (GP) to RNN search?
2. What are the strengths and weaknesses of the proposed method compared to prior works, specifically in terms of search efficiency and architecture performance?
3. How does the reviewer assess the novelty and significance of the paper's contributions, particularly regarding the use of GP for RNNs and memory cells?
4. Do you have any concerns or suggestions regarding the comparisons made between GP and other methods, such as NAS and MAP-Elites?
5. What are the limitations of the paper, especially regarding the transferability of the proposed approach across different tasks and datasets? | Review | Review
The authors apply (tree-based) genetic programming (GP) to RNN search, or more specifically RNNs with memory cells, with the foremost example of this being the LSTM. GP provide a structured search that seems appropriate for designing NN modules, and has previously been applied successfully to evolving CNNs. However, the authors fail to mention that (tree-based) GP has been applied to evolving RNN topologies as far back as 2 decades ago, with even multiple cells in a single RNN unit [1]. The selection of more advanced techniques is good though - use of Modi for allowing multiple outputs, and neat-GP for more effective search (though a reference to the "hall of fame" [2] is lacking).
The authors claim that their method finds more complex, better performing structures than NAS, but allow their method to find architectures with more depth (max 15 vs. the max 10 of NAS), so this is an unfair comparison. It may be the case that GP scales better than the RL-based NAS method, but this is an unfair comparison as the max depth of NAS is not in principle limited to 10.
The second contribution of allowing heterogeneity in the layers of the network is rather minimal, but OK. Certainly, GP probably would have an advantage when searching at this level, as compared to other methods (like NAS). Performance prediction in architecture search has been done before, as noted by the authors (but see also [3]), so the particular form of training an LSTM on partial validation curves is also a minor contribution. Thirdly, concepts of archives have been in use for a long time [2], and the comparison to novelty search, which optimises for a hand-engineered novelty criterion, reaches beyond what is necessary. There are methods based on archives, such as MAP-Elites [4], which would make for a fairer comparison. However, I realise that novelty search is better known in the wider ML community, so from that perspective it is reasonable to keep this comparison in as well.
Finally, it is not surprising that GP applied to searching for an architecture for one task does not transfer well to another task - this is not specific to GP but applies to ML methods in general, or more specifically to any priors used and the training/testing scheme. That said, prior work has explicitly discussed problems with generalisation in GP [5].
[1] Esparcia-Alcazar, A. I., & Sharman, K. (1997). Evolving recurrent neural network architectures by genetic programming. Genetic Programming, 89-94.
[2] Rosin, C. D., & Belew, R. K. (1995, July). Methods for Competitive Co-Evolution: Finding Opponents Worth Beating. In ICGA (pp. 373-381).
[3] Zhou, Y., & Diamos, G. (2018). Neural Architect: A Multi-objective Neural Architecture Search with Performance Prediction. In SysML.
[4] Mouret, J. B., & Clune, J. (2015). Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909.
[5] Kushchu, I. (2002). An evaluation of evolutionary generalisation in genetic programming. Artificial Intelligence Review, 18(1), 3-14. |
ICLR | Title
Robust Reinforcement Learning using Adversarial Populations
Abstract
Reinforcement Learning (RL) is an effective tool for controller design but can struggle with issues of robustness, failing catastrophically when the underlying system dynamics are perturbed. The Robust RL formulation tackles this by adding worst-case adversarial noise to the dynamics and constructing the noise distribution as the solution to a zero-sum minimax game. However, existing work on learning solutions to the Robust RL formulation has primarily focused on training a single RL agent against a single adversary. In this work, we demonstrate that using a single adversary does not consistently yield robustness to dynamics variations under standard parametrizations of the adversary; the resulting policy is highly exploitable by new adversaries. We propose a population-based augmentation to the Robust RL formulation in which we randomly initialize a population of adversaries and sample from the population uniformly during training. We empirically validate across robotics benchmarks that the use of an adversarial population results in a less exploitable, more robust policy. Finally, we demonstrate that this approach provides comparable robustness and generalization as domain randomization on these benchmarks while avoiding a ubiquitous domain randomization failure mode.
1 INTRODUCTION
Developing controllers that work effectively across a wide range of potential deployment environments is one of the core challenges in engineering. The complexity of the physical world means that the models used to design controllers are often inaccurate. Optimization based control design approaches, such as reinforcement learning (RL), have no notion of model inaccuracy and can lead to controllers that fail catastrophically under mismatch. In this work, we aim to demonstrate an effective method for training reinforcement learning policies that are robust to model inaccuracy by designing controllers that are effective in the presence of worst-case adversarial noise in the dynamics.
An easily automated approach to inducing robustness is to formulate the problem as a zero-sum game and learn an adversary that perturbs the transition dynamics (Tessler et al., 2019; Kamalaruban et al., 2020; Pinto et al., 2017). If a global Nash equilibrium of this problem is found, then that equilibrium provides a lower bound on the performance of the policy under some bounded set of perturbations. Besides the benefit of removing user design once the perturbation mechanism is specified, this approach is maximally conservative, which is useful for safety critical applications.
However, the literature on learning an adversary predominantly uses a single, stochastic adversary. This raises a puzzling question: the zero-sum game does not necessarily have any pure Nash equilibria (see Appendix C in Tessler et al. (2019)) but the existing robust RL literature mostly appears to attempt to solve for pure Nash equilibria. That is, the most general form of the minimax problem searches over distributions of adversary and agent policies, however, this problem is approximated in the literature by a search for a single agent-adversary pair. We contend that this reduction to a single adversary approach can sometimes fail to result in improved robustness under standard parametrizations of the adversary policy.
The following example provides some intuition for why using a single adversary can decrease robustness. Consider a robot trying to learn to walk east-wards while an adversary outputs a force representing wind coming from the north or the south. For a fixed, deterministic adversary the agent knows that the wind will come from either south or north and can simply apply a counteracting force at each state. Once the adversary is removed, the robot will still apply the compensatory forces and
possibly become unstable. Stochastic Gaussian policies (ubiquitous in continuous control) offer little improvement: they cannot represent multi-modal perturbations. Under these standard policy parametrizations, we cannot use an adversary to endow the agent with a prior that a strong wind could persistently blow either north or south. This leaves the agent exploitable to this class of perturbations.
The use of a single adversary in the robustness literature is in contrast to the multi-player game literature. In multi-player games, large sets of adversaries are used to ensure that an agent cannot easily be exploited (Vinyals et al., 2019; Czarnecki et al., 2020; Brown & Sandholm, 2019). Drawing inspiration from this literature, we introduce RAP (Robustness via Adversary Populations): a randomly initialized population of adversaries that we sample from at each rollout and train alongside the agent. Returning to our example of a robot perturbed by wind, if the robot learns to cancel the north wind effectively, then that opens a niche for an adversary to exploit by applying forces in another direction. With a population, we can endow the robot with the prior that a strong wind could come from either direction and that it must walk carefully to avoid being toppled over.
Our contributions are as follows:
• Using a set of continuous robotics control tasks, we provide evidence that a single adversary does not have a consistent positive impact on the robustness of an RL policy while the use of an adversary population provides improved robustness across all considered examples.
• We investigate the source of the robustness and show that the single adversary policy is exploitable by new adversaries whereas policies trained with RAP are robust to new adversaries.
• We demonstrate that adversary populations provide comparable robustness to domain randomization while avoiding potential failure modes of domain randomization.
2 RELATED WORK
This work builds upon robust control (Zhou & Doyle, 1998), a branch of control theory focused on finding optimal controllers under worst-case perturbations of the system dynamics. The Robust Markov Decision Process (R-MDP) formulation extends this worst-case model uncertainty to uncertainty sets on the transition dynamics of an MDP and demonstrates that computationally tractable solutions exist for small, tabular MDPs (Nilim & El Ghaoui, 2005; Lim et al., 2013). For larger or continuous MDPs, one successful approach has been to use function approximation to compute approximate solutions to the R-MDP problem (Tamar et al., 2014).
One prominent variant of the R-MDP literature is to interpret the perturbations as an adversary and attempt to learn the distribution of the perturbation under a minimax objective. Two variants of this idea that tie in closely to our work are Robust Adversarial Reinforcement Learning (RARL)(Pinto et al., 2017) and Noisy Robust Markov Decision Processes (NR-MDP) (Tessler et al., 2019) which differ in how they parametrize the adversaries: RARL picks out specific robot joints that the adversary acts on while NR-MDP adds the adversary action to the agent action. Both of these works attempt to find an equilibrium of the minimax objective using a single adversary; in contrast our work uses a large set of adversaries and shows improved robustness relative to a single adversary.
A strong alternative to the minimax objective, domain randomization, asks a designer to explicitly define a distribution over environments that the agent should be robust to. For example, (Peng et al., 2018) varies simulator parameters to train a robot to robustly push a puck to a target location in the real world; (Antonova et al., 2017) adds noise to friction and actions to transfer an object pivoting policy directly from simulation to a Baxter robot. Additionally, domain randomization has been successfully used to build accurate object detectors solely from simulated data (Tobin et al., 2017) and to zero-shot transfer a quadcopter flight policy from simulation (Sadeghi & Levine, 2016).
The use of population based training is a standard technique in multi-agent settings. Alphastar, the grandmaster-level Starcraft bot, uses a population of "exploiter" agents that fine-tune against the bot to prevent it from developing exploitable strategies (Vinyals et al., 2019). (Czarnecki et al., 2020) establishes a set of sufficient geometric conditions on games under which the use of multiple adversaries will ensure gradual improvement in the strength of the agent policy. They empirically demonstrate that learning in games can often fail to converge without populations. Finally, Active Domain Randomization (Mehta et al., 2019) is a very close approach to ours, as they use a population
of adversaries to select domain randomization parameters whereas we use a population of adversaries to directly perturb the agent actions. However, they explicitly induce diversity using a repulsive term and use a discriminator to generate the reward.
3 BACKGROUND
In this work we use the framework of a multi-agent, finite-horizon, discounted Markov Decision Process (MDP) (Puterman, 1990) defined by a tuple 〈A_agent × A_adversary, S, T, r, γ〉. Here A_agent is the set of actions for the agent, A_adversary is the set of actions for the adversary, S is a set of states, T : A_agent × A_adversary × S → ∆(S) is a transition function, r : A_agent × A_adversary × S → R is a reward function and γ is a discount factor. S is shared between the adversaries as they share a state-space with the agent. The goal for a given MDP is to find a policy π_θ parametrized by θ that maximizes the expected cumulative discounted reward J_θ = E[ Σ_{t=0}^{T} γ^t r(s_t, a_t) | π_θ ]. The conditional in this expression is a short-hand to indicate that the actions in the MDP are sampled via a_t ∼ π_θ(s_t, a_{t−1}). We denote the agent policy parametrized by weights θ as π_θ and the policy of adversary i as π̄_{φ_i}. Actions sampled from the adversary policy π̄_{φ_i} will be written as ā^i_t. We use ξ to denote the parametrization of the system dynamics (e.g. different values of friction, mass, wind, etc.) and write the system dynamics for a given state and action as s_{t+1} ∼ f_ξ(s_t, a_t).
3.1 BASELINES
Here we outline prior work and the approaches that will be compared with RAP. Our baselines consist of a single adversary and domain randomization.
3.1.1 SINGLE MINIMAX ADVERSARY
Our adversary formulation uses the Noisy Action Robust MDP (Tessler et al., 2019) in which the adversary adds its actions onto the agent actions. The objective is
max_θ  E[ Σ_{t=0}^{T} γ^t r(s_t, a_t + α ā_t) | π_θ, π̄_φ ]
min_φ  E[ Σ_{t=0}^{T} γ^t r(s_t, a_t + α ā_t) | π_θ, π̄_φ ]        (1)
where α is a hyperparameter controlling the adversary strength. This is a game in which the adversary and agent play simultaneously. We note an important restriction inherent to this adversarial model. Since the adversary is only able to attack the agent through the actions, there is a restricted class of dynamical systems that it can represent; this set of dynamical systems may not necessarily align with the set of dynamical systems that the agent may be tested in. This is a restriction caused by the choice of adversarial perturbation and could be alleviated by using different adversarial parametrizations e.g. perturbing the transition function directly.
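To make the perturbation model concrete, the following is a minimal sketch of the noisy-action game as a gym-style wrapper. The adversary policy interface, the α value, and the clipping bounds are illustrative assumptions (separate clipping of agent and adversary actions is discussed in Appendix A), not the implementation used for the experiments.

import numpy as np
import gym

class NoisyActionAdversaryWrapper(gym.Wrapper):
    # Adds alpha-scaled adversary actions onto the agent's actions (NR-MDP style).
    # `adversary_policy` is any callable mapping an observation to an action of the
    # same shape as the agent's; it is an illustrative stand-in, not a fixed API.

    def __init__(self, env, adversary_policy, alpha=0.25, adv_bound=0.25):
        super().__init__(env)
        self.adversary_policy = adversary_policy
        self.alpha = alpha
        self.adv_bound = adv_bound
        self._last_obs = None

    def reset(self, **kwargs):
        self._last_obs = self.env.reset(**kwargs)
        return self._last_obs

    def step(self, agent_action):
        # Clip agent and adversary actions separately so the agent cannot
        # saturate its own range to cancel out the perturbation.
        agent_action = np.clip(agent_action, -1.0, 1.0)
        adv_action = np.clip(self.adversary_policy(self._last_obs),
                             -self.adv_bound, self.adv_bound)
        obs, reward, done, info = self.env.step(agent_action + self.alpha * adv_action)
        self._last_obs = obs
        # The adversary's reward is simply the negative of the agent's.
        info["adversary_reward"] = -reward
        return obs, reward, done, info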
3.1.2 DYNAMICS RANDOMIZATION
Domain randomization is the setting in which the user specifies a set of environments which the agent should be robust to. This allows the user to directly encode knowledge about the likely deviations between training and testing domains. For example, the user may believe that friction is hard to measure precisely and wants to ensure that their agent is robust to variations in friction; they then specify that the agent will be trained with a wide range of possible friction values. We use ξ to denote some vector that parametrizes the set of training environments (e.g. friction, masses, system dynamics, etc.). We denote the domain over which ξ is drawn from as Ξ and use P (Ξ) to denote
some probability distribution over ξ. The domain randomization objective is
max_θ  E_{ξ ∼ P(Ξ)} [ E_{s_{t+1} ∼ f_ξ(s_t, a_t)} [ Σ_{t=0}^{T} γ^t r(s_t, a_t) | π_θ ] ]
s_{t+1} ∼ f_ξ(s_t, a_t),   a_t ∼ π_θ(s_t)        (2)
Here the goal is to find an agent that performs well on average across the distribution of training environments. Most commonly, and in this work, the parameters ξ are sampled uniformly over Ξ.
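As an illustration of this objective, the sketch below samples ξ once per rollout and trains on the pooled rollouts. All hooks (sample_xi, apply_xi, collect_rollout, policy_update) are hypothetical stand-ins: for example, apply_xi could scale friction and mass coefficients as in Sec. 5.1 and policy_update could be a PPO step.

def domain_randomization_training(env, policy, sample_xi, apply_xi,
                                  collect_rollout, policy_update,
                                  num_iterations=100, rollouts_per_iter=10):
    # Sketch of optimizing the domain randomization objective in Eq. 2:
    # maximize the agent's return in expectation over xi ~ P(Xi).
    for _ in range(num_iterations):
        batch = []
        for _ in range(rollouts_per_iter):
            xi = sample_xi()                 # xi ~ P(Xi), e.g. uniform friction/mass coefficients
            apply_xi(env, xi)                # fix the dynamics f_xi for this entire rollout
            batch.append(collect_rollout(env, policy))
        policy_update(policy, batch)         # maximize the average return across sampled domains
    return policy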
4 RAP: ROBUSTNESS VIA ADVERSARY POPULATIONS
RAP extends the minimax objective with a population based approach. Instead of a single adversary, at each rollout we will sample uniformly from a population of adversaries. By using a population, the agent is forced to be robust to a wide variety of potential perturbations rather than a single perturbation. If the agent begins to overfit to any one adversary, this opens up a potential niche for another adversary to exploit. For problems with only one failure mode, we expect the adversaries to all come out identical to the minimax adversary, but as the number of failure modes increases the adversaries should begin to diversify to exploit the agent. To induce this diversity, we will rely on randomness in the gradient estimates and randomness in the initializations of the adversary networks rather than any explicit term that induces diversity.
Denoting π̄_{φ_i} as the i-th adversary and i ∼ U(1, n) as the discrete uniform distribution defined on 1 through n, the objective becomes
max_θ  E_{i ∼ U(1, n)} [ Σ_{t=0}^{T} γ^t r(s_t, a_t + α ā^i_t) | π_θ, π̄_{φ_i} ]
min_{φ_i}  E[ Σ_{t=0}^{T} γ^t r(s_t, a_t + α ā^i_t) | π_θ, π̄_{φ_i} ]    ∀ i = 1, . . . , n
s_{t+1} ∼ f(s_t, a_t + α ā^i_t)        (3)
For a single adversary, this is equivalent to the minimax adversary described in Sec. 3.1.1. This is a game in which the adversary and agent play simultaneously.
We will optimize this objective by converting the problem into the equivalent zero-sum game. At the start of each rollout, we will sample an adversary index from the uniform distribution and collect a trajectory using the agent and the selected adversary. For notational simplicity, we assume the trajectory is of length M and that adversary i will participate in J_i total trajectories while, since the agent participates in every rollout, the agent will receive J total trajectories. We denote the j-th collected trajectory for the agent as τ_j = (s_0, a_0, r_0, s_1) × · · · × (s_M, a_M, r_M, s_{M+1}) and the associated trajectory for adversary i as τ^i_j = (s_0, a_0, −r_0, s_1) × · · · × (s_M, a_M, −r_M, s_{M+1}). Note that the adversary reward is simply the negative of the agent reward. We will use Proximal Policy Optimization (PPO) (Schulman et al., 2017) to update our policies. We caution that we have overloaded notation slightly here: for adversary i, τ^i_{j=1:J_i} refers only to the trajectories in which that adversary was selected, and adversaries will only be updated using trajectories where they were active.
At the end of a training iteration, we update all our policies using gradient descent. The algorithm is summarized below:
Algorithm 1: Robustness via Adversary Populations
Initialize θ, φ_1, . . . , φ_n using Xavier initialization (Glorot & Bengio, 2010);
while not converged do
    for rollout j = 1 . . . J do
        sample adversary i ∼ U(1, n);
        run policies π_θ, π̄_{φ_i} in the environment until termination;
        collect trajectories τ_j, τ^i_j;
    end
    update θ, φ_1, . . . , φ_n using PPO (Schulman et al., 2017), with trajectories τ_j for θ and τ^i_j for each φ_i;
end
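A compressed Python rendering of Algorithm 1 is sketched below. The collect_rollout and ppo_update hooks are placeholders for the RLlib machinery used in the experiments, and each adversary is updated only with the (reward-negated) trajectories in which it was the sampled adversary.

import random

def negate_rewards(trajectory):
    # Flip the sign of each reward in a list of (s, a, r, s_next) transitions,
    # since the adversary's reward is the negative of the agent's.
    return [(s, a, -r, s_next) for (s, a, r, s_next) in trajectory]

def train_rap(env, agent, adversaries, collect_rollout, ppo_update,
              num_iterations=700, rollouts_per_iter=100):
    # Sketch of Algorithm 1. `agent` and each element of `adversaries` are policies;
    # `collect_rollout(env, agent, adversary)` and `ppo_update(policy, batch)` are
    # hypothetical hooks standing in for the RLlib implementations.
    for _ in range(num_iterations):
        agent_batch = []
        adversary_batches = [[] for _ in adversaries]
        for _ in range(rollouts_per_iter):
            i = random.randrange(len(adversaries))      # sample adversary i ~ U(1, n)
            traj = collect_rollout(env, agent, adversaries[i])
            agent_batch.append(traj)
            adversary_batches[i].append(negate_rewards(traj))
        ppo_update(agent, agent_batch)                   # the agent trains on every rollout
        for adversary, batch in zip(adversaries, adversary_batches):
            if batch:                                    # adversary i trains only on its own rollouts
                ppo_update(adversary, batch)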
5 EXPERIMENTS
In this section we present experiments on continuous control tasks from the OpenAI Gym Suite (Brockman et al., 2016; Todorov et al., 2012). We compare with the existing literature and evaluate the efficacy of a population of learned adversaries across a wide range of state and action space sizes. We investigate the following hypotheses:
H1. Agents are more likely to overfit to a single adversary than a population of adversaries, leaving them less robust on in-distribution tasks.
H2. Agents trained against a population of adversaries will generalize better, leading to improved performance on out-of-distribution tasks.
In-distribution tasks refer to the agent playing against perturbations that are in the training distribution: adversaries that add their actions onto the agent. However, the particular form of the adversary and their restricted perturbation magnitude means that there are many dynamical systems that they cannot represent (for example, significant variations of joint mass and friction). These tasks are denoted as out-of-distribution tasks. All of the tasks in the test set described in Sec. 5.1 are likely out-of-distribution tasks.
5.1 EXPERIMENTAL SETUP AND HYPERPARAMETER SELECTION
While we provide exact details of the hyperparameters in the Appendix, adversarial settings require additional complexity in hyperparameter selection. In the standard RL procedure, optimal hyperparameters are selected on the basis of maximum expected cumulative reward. However, if an agent playing against an adversary achieves a large cumulative reward, it is possible that the agent was simply playing against a weak adversary. Conversely, a low score does not necessarily indicate a strong adversary nor robustness: it could simply mean that we trained a weak agent.
To address this, we adopt a version of the train-validate-test split from supervised learning. We use the mean policy performance on a suite of validation tasks to select the hyperparameters, then we train the policy across ten seeds and report the resultant mean and standard deviation over twenty trajectories. Finally, we evaluate the seeds on a holdout test set of eight additional model-mismatch tasks. These tasks vary significantly in difficulty; for visual clarity we report the average across tasks in this paper and report the full breakdown across tasks in the Appendix.
We experiment with the Hopper, Ant, and Half Cheetah continuous control environments used in the original RARL paper Pinto et al. (2017); these are shown in Fig. 1. To generate the validation model mismatch, we pre-define ranges of mass and friction coefficients as follows: for Hopper, mass ∈ [0.7, 1.3] and friction ∈ [0.7, 1.3]; Half Cheetah and Ant, mass ∈ [0.5, 1.5] and friction ∈ [0.1, 0.9]. We scale the friction of every Mujoco geom and the mass of the torso with the same (respective) coefficients. We compare the robustness of agents trained via RAP against: 1) agents trained against a single adversary in a zero-sum game, 2) oracle agents trained using domain randomization, and 3) an agent trained only using PPO and no perturbation mechanism. To train the domain randomization
oracle, at each rollout we uniformly sample a friction and mass coefficient from the validation set ranges. We then scale the friction of all geoms and the mass of the torso by their respective coefficients; this constitutes directly training on the validation set. To generate the test set of model mismatch, we take both the highest and lowest friction coefficients from the validation range and apply them to different combinations of individual geoms. For the exact selected combinations, please refer to the Appendix.
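The sketch below illustrates one way to apply such coefficients, assuming a mujoco_py-backed gym environment whose model exposes writable geom_friction and body_mass arrays and a body_names list; these attribute names are assumptions about the simulator bindings rather than code from the paper.

def apply_model_mismatch(env, friction_coef, mass_coef, torso_name="torso"):
    # Scale every geom's friction and the torso mass by the given coefficients.
    # Note: this scales relative to the model's *current* values, so it should be
    # applied to a freshly created environment.
    model = env.unwrapped.model
    model.geom_friction[:] = model.geom_friction * friction_coef
    torso_id = list(model.body_names).index(torso_name)
    model.body_mass[torso_id] = model.body_mass[torso_id] * mass_coef
    return env

# Example: one cell of the Hopper validation grid (environment id is illustrative).
# env = gym.make("Hopper-v3")
# apply_model_mismatch(env, friction_coef=0.7, mass_coef=1.3)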
As further validation of the benefits of RAP, we include an additional set of experiments on a continuous control task, a gridworld maze search task, and a Bernoulli Bandit task in Appendix Sec. F. Finally, we note that both our agent and adversary networks are two layer-neural networks with 64 hidden units in each layer and a tanh nonlinearity.
6 RESULTS
H1. In-Distribution Tasks: Analysis of Overfitting A globally minimax optimal adversary should be unexploitable and perform equally well against any adversary of equal strength. We investigate the optimality of our policy by asking whether the minimax agent is robust to swaps of adversaries from different training runs, i.e. different seeds. Fig. 2 shows the result of these swaps for the one adversary and three adversary case. The diagonal corresponds to playing against the adversaries the agent was trained with while every other square corresponds to playing against adversaries from a different seed. To simplify presentation, in the three adversary case, each square is the average performance against all the adversaries from that seed. We observe that the agent trained against three adversaries (top row right) is robust under swaps while the single adversary case is not (top row left). The agent trained against a single adversary is highly exploitable, as can be seen by its extremely sub-par performance against an adversary from any other seed. Since the adversaries off-diagonal are feasible adversaries, this suggests that we have found a poor local optimum of the objective.
In contrast, the three adversary case is generally robust regardless of which adversary it plays against, suggesting that the use of additional adversaries has made the agent more robust. One possible hypothesis for why this could be occurring is that the adversaries in the "3 adversary" case are somehow weaker than the adversaries in the "1 adversary" case. The middle row of the figure shows that it is not the case that the improved performance of the agent playing against the three adversaries is due to some weakness of the adversaries. If anything, the adversaries from the three adversary case are stronger as the agent trained against 1 adversary does extremely poorly playing against the three adversaries (left) whereas the agent trained against three adversaries still performs well when playing against the adversaries from the single-adversary runs. Finally, the bottom row investigates how an agent trained with domain randomization fares against adversaries from either training regime. In neither case is the domain randomization agent robust on these tasks.
H2. Out-of-Distribution Tasks: Robustness and Generalization of Population Training
Here we present the results from the validation and holdout test sets described in Section 5.1. We compare the performance of training with adversary populations of size three and five against vanilla PPO, the domain randomization oracle, and the single minimax adversary. We refer to domain randomization as an oracle as it is trained directly on the test distribution.
Fig. 6 shows the average reward (the average of ten seeds across the validation or test sets respectively) for each environment. Table 1 gives the corresponding numerical values and the percent change of each policy from the baseline. Standard deviations are omitted on the test set due to wide variation in task difficulty; the individual tests that we aggregate here are reported in the Appendix with
appropriate error bars. In all environments we achieve a higher reward across both the validation and holdout test set using RAP of size three and/or five when compared to the single minimax adversary case. These results from testing on new environments with altered dynamics support hypothesis H2: that training with a population of adversaries leads to more robust policies than training with a single adversary on out-of-distribution tasks. Furthermore, while the performance is only comparable with the domain randomization oracle, the adversarial approach does not require prior engineering of appropriate randomizations. Moreover, despite being trained directly on these out-of-distribution tasks, domain randomization can exhibit serious failure modes due to its formulation. A detailed analysis of this can be found in Appendix E.
For a more detailed comparison of robustness across the validation set, Fig. 4 shows heatmaps of the performance across all the mass, friction coefficient combinations. Here we highlight the heatmaps for Hopper and Half Cheetah for vanilla PPO, domain randomization oracle, single adversary, and best adversary population size. Additional heatmaps for other adversary population sizes and the Ant environment can be found in the Appendix. Note that Fig. 4 is an example of a case where a single adversary has negligible effect on or slightly reduces the performance of the resultant policy on the
validation set. This supports our hypothesis that a single adversary can actually lower the robustness of an agent.
7 CONCLUSIONS AND FUTURE WORK
In this work we demonstrate that the use of a single adversary to approximate the solution to a minimax problem does not consistently lead to improved robustness. We propose a solution through the use of multiple adversaries (RAP), and demonstrate that this provides robustness across a variety of robotics benchmarks. We also compare RAP with domain randomization and demonstrate that while DR can lead to a more robust policy, it requires careful parametrization of the domain we sample from to ensure robustness. RAP does not require this tuning, allowing for use in domains where appropriate tuning requires extensive prior knowledge or expertise.
There are several open questions stemming from this work. While we empirically demonstrate the effects of RAP, we do not have a compelling theoretical understanding of why multiple adversaries are helping. Perhaps RAP helps approximate a mixed Nash equilibrium as discussed in Sec. 1 or
perhaps population based training increases the likelihood that one of the adversaries is strong? Would the benefits of RAP disappear if a single adversary had the ability to represent mixed Nash?
There are some extensions of this work that we would like to pursue. We have looked at the robustness of our approach in simulated settings; future work will examine whether this robustness transfers to real-world settings. Additionally, our agents are currently memory-less and therefore cannot perform adversary identification; perhaps memory leads to a system-identification procedure that improves transfer performance. Our adversaries can also be viewed as forming a task distribution, allowing them to be used in continual learning approaches like MAML (Nagabandi et al., 2018) where domain randomization is frequently used to construct task distributions.
A FULL DESCRIPTION OF THE CONTINUOUS CONTROL MDPS
We use the Mujoco ant, cheetah, and hopper environments as a test of the efficacy of our strategy versus the 0 adversary, 1 adversary, and domain randomization baselines. We use the Noisy Action Robust MDP formulation Tessler et al. (2019) for our adversary parametrization. If the normal system dynamics are
s_{k+1} = s_k + f(s_k, a_k) ∆t
the system dynamics under the adversary are
s_{k+1} = s_k + f(s_k, a_k + a^adv_k) ∆t
where a^adv_k is the adversary action at time k.
The notion here is that the adversary action is passed through the dynamics function and represents some additional set of dynamics. It is standard to clip actions within some boundary but for the above reason, we clip the agent and adversary actions separately. Otherwise, an agent would be able to limit the effect of the adversary by always taking actions at the bounds of its clipping range. The agent is clipped between [−1, 1] in the Hopper environment and the adversary is clipped between [−.25, .25]. The MDP through which we train the agent policy is characterized by the following states, actions, and rewards:
• s^agent_t = [o_t, a_t], where o_t is an observation returned by the environment and a_t is the action taken by the agent.
• We use the standard rewards provided by the OpenAI Gym Mujoco environments at https://github.com/openai/gym/tree/master/gym/envs/mujoco. For the exact functions, please refer to the code at ANONYMIZED.
• a^agent_t ∈ [a_min, a_max]^n.
The MDP for adversary i is the following:
• s_t = s^agent_t. The adversary sees the same states as the agent.
• The adversary reward is the negative of the agent reward.
• a^adv_t ∈ [a^adv_min, a^adv_max]^n.
For our domain randomization Hopper baseline, we use the following randomization: at each rollout, we scale the friction of all joints by a single value uniformly sampled from [0.7, 1.3]. We also randomly scale the mass of the ’torso’ link by a single value sampled from [0.7, 1.3]. For Half-Cheetah and Ant the range for friction is [0.1, 0.9] and for mass the range is [0.5, 1.5].
B INCREASING ADVERSARY POOL SIZE
We investigate whether RAP is robust to adversary number as this would be a useful property to minimize hyperparameter search. Here we hypothesize that while having more adversaries can represent a wider range of dynamics to learn to be robust to, we expect there to be diminishing returns due to the decreased batch size that each adversary receives (total number of environment steps is held constant across all training variations). We expect decreasing batch size to lead to worse agent policies since the batch will contain under-trained adversary policies. We cap the number of adversaries at eleven as our machines ran out of memory at this value. We run ten seeds for every adversary value and Fig. 5 shows the results for Hopper. Agent robustness on the test set increases monotonically up to three adversaries and roughly begins to decrease after that point. This suggests that a trade-off between adversary number and performance exists although we do not definitively show that diminishing batch sizes is the source of this trade-off. However, we observe in Fig. 6 that both three and five adversaries perform well across all studied Mujoco domains.
C HOLDOUT TESTS
In this section we describe in detail all of the holdout tests used.
C.1 HOPPER
The Mujoco geom properties that we modified are attached to a particular body and determine its appearance and collision properties. For the Mujoco holdout transfer tests we pick a subset of the hopper ‘geom’ elements and scale the contact friction values by maximum friction coefficient, 1.3. Likewise, for the rest of the ‘geom’ elements, we scale the contact friction by the minimum value of 0.7. The body geoms and their names are visible in Fig. 7.
The exact combinations and the corresponding test names are indicated in Table 2 for Hopper.
C.2 CHEETAH
The Mujoco geom properties that we modified are attached to a particular body and determine its appearance and collision properties. For the Mujoco holdout transfer tests we pick a subset of the
cheetah ‘geom’ elements and scale the contact friction values by maximum friction coefficient, 0.9. Likewise, for the rest of the ‘geom’ elements, we scale the contact friction by the minimum value of 0.1. The body geoms and their names are visible in Fig. 8.
The exact combinations and the corresponding test names are indicated in Table 4 for Half Cheetah.
C.3 ANT
We will use torso to indicate the head piece, leg to refer to one of the four legs that contact the ground, and ’aux’ to indicate the geom that connects the leg to the torso. Since the ant is symmetric we adopt a convention that two of the legs are front-left and front-right and two legs are back-left and back-right. Fig. 9 depicts the convention. For the Mujoco holdout transfer tests we pick a subset of the ant ‘geom’ elements and scale the contact friction values by maximum friction coefficient, 0.9. Likewise, for the rest of the ‘geom’ elements, we scale the contact friction by the minimum value of 0.1.
The exact combinations and the corresponding test names are indicated in Table 4 for Ant.
D RESULTS
Here we recompute the values of all the results and display them with appropriate standard deviations in tabular form.
There was not space for the ant validation set results so they are reproduced here.
E CHALLENGES OF DOMAIN RANDOMIZATION
In our experiments, we find that naive parametrization of domain randomization can result in a brittle policy, even when evaluated on the same distribution it was trained on.
Effect of Domain Randomization Parametrization
From Fig. 6, we see that in the Ant and Hopper domains, the DR oracle achieves the highest transfer reward in the validation set as expected since the DR oracle is trained directly on the validation set. Interestingly, we found that the domain randomization policy performed much worse on the Half Cheetah environment, despite having access to the mass and friction coefficients during training. Looking at the performance for each mass and friction combination in Fig. 11, we found that the DR agent was able to perform much better at the low friction coefficients and learned to prioritize those values at the cost of significantly worse performance on average. This highlights a potential issue with domain randomization: while training across a wide variety of dynamics parameters can increase robustness, naive parametrizations can cause the policy to exploit subsets of the randomized domain and lead to a brittle policy. This is a problem inherent to the expectation across domains that is used in domain randomization; if some subset of randomizations have sufficiently high reward the agent will prioritize performance on those at the expense of robustness.
We hypothesize that this is due to the DR objective in Eq. 2 optimizing in expectation over the sampling range. To test this, we created a separate range of ‘good’ friction parameters [0.5, 1.5] and compared the robustness of a DR policy trained with ‘good‘ range against a DR policy trained with ‘bad’ range [0.1, 0.9] in Fig. 11. Here we see that a ‘good’ parametrization leads to the expected result where domain randomization is the most robust. We observe that domain randomization underperforms adversarial training on the validation set despite the validation set literally constituting the training set for domain randomization. This suggests that underlying optimization difficulties caused by significant variations in reward scaling are partially to blame for the poor performance of domain randomization. Notably, the adversary-based methods are not susceptible to the same parametrization issues.
Alternative DR policy architecture
As discussed above and also identified in Rajeswaran et al. (2016), the expectation across randomizations that is used in domain randomization causes it to prioritize a policy that performs well in a high-reward subset of the randomization domains. This is harmless when domain randomization is used for randomizations of state, such as color, where all the randomization environments have the same expected reward, but has more pernicious effects in dynamics randomizations. Consider a set of N randomization environments, N − 1 of which have reward R_low and one of which has reward R_high, where R_high ≫ R_low. If the agent cannot identify which of the randomization environments it is in, the intuitively optimal solution is to pick the policy that optimizes the high reward environment. One possible way out of the quandary is to use an agent that has some memory, such as an LSTM-based policy, thus giving the possibility of identifying which environment the agent is in and deploying the appropriate response. However, if R_high is sufficiently large and there is some reduction in reward associated with performing the system-identification necessary to identify the randomization, then the agent will not perform the system identification and will prioritize achieving R_high. As an illustration of this challenge, Fig. 12 compares the results of domain randomization on the half-cheetah environment with and without memory. In the memory case, we use a 64-unit LSTM. As can be seen, there is an improvement in the ability of the domain randomized policy to perform well on the full range of low-friction / high mass values, but the improved performance does not extend to higher friction values. In fact, the performance contrast is enhanced even further as the policy does a good deal worse on the high friction values than the case without memory.
F ADDITIONAL EXPERIMENTS
Here we outline a few more experiments we ran that demonstrate the value of additional adversaries. We run the following tasks:
F.1 DEEPMIND CONTROL CATCH
This task uses the same Markov Decision Process described in Sec. A. The challenge (Tassa et al., 2020), pictured in Fig. 13, is to get the ball to fall inside the cup. As in the other continuous control
tasks, we apply the adversary to the actions of the agents (which is controlling the cup). We then test on variations of the mass of both the ball and the cup. The heatmaps for this task are presented in Fig. 14 where the 3 adversary case provides a slight improvement in the robustness region relative to the 1 adversary case.
F.2 MULTI-ARMED BERNOULLI BANDITS
As an illustrative example, we examine a multi-armed stochastic bandit, a problem widely studied in reinforcement learning literature. Generally, successful strategies for multi-arm bandit problems involve successfully balancing the exploration across arms and exploiting the ’best’ arm. A "robust" strategy should have vanishing regret as the time horizon goes to infinity. We construct a 10-armed bandit where each arm i is parametrized by a value p where p is the probability of that arm returning
1. The goal of the agent is to minimize total cumulative regret Rn over a horizon of n steps:
R_n = n max_i µ_i − E[ Σ_{t=0}^{n} a_t ], where a_t corresponds to picking a particular arm. At each step, the agent is given an observation buffer of stacked frames consisting of all previous (action, reward) pairs padded with zeros to keep the length fixed. The adversary has a horizon of 1; at time-step zero it receives an observation of 0 and outputs the probability for each arm. At the termination of the horizon the adversary receives the negative of the cumulative agent reward. For our domain randomization baseline we use uniform sampling of the p value for each arm. We chose a horizon length of T = 100 steps. The MDP of the agent is characterized as follows:
• s_t = [0_{n·(T−t)×1}, r_t, a_t, r_{t−1}, a_{t−1}, . . . , r_0, a_0]
• r_t = X(a_i) − max_i µ_i
• a^agent_t ∈ {0, . . . , 9}
At each step, the agent is given an observation buffer of stacked frames consisting of all previous (action, reward) pairs. The buffer matching the horizon length is padded with zeros. For each training step, the agent receives a reward of the negative expected regret. We set up the adversary problem as an MDP with a horizon of 1.
• s_t = [0.0]
• r = − Σ_{t=1}^{T} r_t
• a^adv ∈ [0, 1]^10
During adversarial training, we sample a random adversary at the beginning of each rollout, and allow it to pick 10 p values that are then shuffled randomly and then assigned to each arm (this is to prevent the agent from deterministically knowing which arm has which p value). The adversary is always given an observation of a vector of zeros and is rewarded once at the end of the rollout. We also construct a hold-out test of two bandit examples which we colloquially refer to as "evenly spread" and "one good arm." In "evenly spread", the arms, going from 1 to 10, have evenly spaced probabilities in steps of 0.1: 0, 0.1, 0.2, 0.3, . . . , 0.8, 0.9. In "one good arm", 9 arms have probability 0.1 and one arm has probability 0.9. As our policy for the agent, we use a Gated Recurrent Unit network with hidden size 256.
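To make the bandit setup concrete, the sketch below estimates the cumulative regret above over a single rollout and constructs both the adversarially chosen arms and the two holdout instances. The uniform-random agent is a placeholder for the paper's GRU policy, and the regret is estimated from the rewards actually collected.

import numpy as np

def rollout_regret(arm_probs, choose_arm, horizon=100, rng=None):
    # Estimate R_n = n * max_i mu_i - (reward actually collected) over one rollout.
    rng = rng or np.random.default_rng()
    total_reward, history = 0.0, []
    for _ in range(horizon):
        arm = choose_arm(history)
        reward = float(rng.random() < arm_probs[arm])     # Bernoulli(p_arm) pull
        total_reward += reward
        history.append((arm, reward))                     # fills the agent's observation buffer
    return horizon * max(arm_probs) - total_reward

def adversarial_arms(adversary, rng=None):
    # Horizon-1 adversary: map a zero observation to ten arm probabilities, then
    # shuffle so the agent cannot identify weak arms by their index.
    rng = rng or np.random.default_rng()
    probs = np.clip(np.asarray(adversary(np.zeros(1)), dtype=float), 0.0, 1.0)
    rng.shuffle(probs)
    return probs

# Holdout instances from the text.
evenly_spread = np.arange(10) / 10.0                      # 0.0, 0.1, ..., 0.9
one_good_arm = np.array([0.1] * 9 + [0.9])

# Placeholder agent: uniform-random arm selection.
print(rollout_regret(one_good_arm, lambda history: np.random.randint(10)))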
An interesting feature of the bandit task is that it makes clear that the single adversary approach corresponds to training on a single, adversarially constructed bandit instance. Surprisingly, as indicated in Fig. 15, this does not perform terribly on our two holdout tasks. However, there is a clear improvement on both tasks in the four adversary case. All adversarial approaches outperform an Upper Confidence Bound-based expert (shown in red). Interestingly, domain randomization, which had superficially good reward at training time, completely fails on the "one good arm" holdout task. This suggests another possible failure mode of domain randomization where in high dimensions uniform sampling may just fail to yield interesting training tasks. Finally, we note that since the upper confidence approach only tries to minimize regret asymptotically, our outperforming it may simply be due to our relatively short horizon; we simply provide it as a baseline.
G COST AND HYPERPARAMETERS
Here we reproduce the hyperparameters we used in each experiment and compute the expected runtime and cost of each experiment. Numbers indicated in {} were each used for one run. Otherwise the parameter was kept fixed at the indicated value.
G.1 HYPERPARAMETERS
For Mujoco the hyperparameters are:
• Learning rate:
  – {.0003, .0005} for half cheetah
  – {.0005, .00005} for hopper and ant
• Generalized Advantage Estimation λ:
  – {0.9, 0.95, 1.0} for half cheetah
  – {0.5, 0.9, 1.0} for hopper and ant
• Discount factor γ = 0.995
• Training batch size: 100000
• SGD minibatch size: 640
• Number of SGD steps per iteration: 10
• Number of iterations: 700
• We set the seed to 0 for all hyperparameter runs.
• The maximum horizon is 1000 steps.
For the validation across seeds we used 10 seeds ranging from 0 to 9. All other hyperparameters are the default values in RLlib 0.8.0 (Liang et al., 2017).
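For reference, the swept values above can be collected into an RLlib 0.8.0 PPO configuration roughly as follows; the environment id is illustrative and the exact key names should be checked against the RLlib version in use.

import ray
from ray.rllib.agents.ppo import PPOTrainer

ray.init(ignore_reinit_error=True)

config = {
    "env": "Hopper-v3",           # illustrative environment id
    "lr": 5e-4,                   # {.0005, .00005} were swept for hopper and ant
    "lambda": 0.9,                # GAE lambda, swept over {0.5, 0.9, 1.0}
    "gamma": 0.995,
    "train_batch_size": 100000,
    "sgd_minibatch_size": 640,
    "num_sgd_iter": 10,
    "horizon": 1000,
    "seed": 0,
}

trainer = PPOTrainer(config=config)
for _ in range(700):              # number of training iterations
    trainer.train()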
G.2 COST
For all of our experiments we used AWS EC2 c4.8xlarge instances which come with 36 virtual CPUs. For the Mujoco experiments, we use 2 nodes and 11 CPUs per hyper-parameter, leading to one full hyper-parameter sweep fitting onto the 72 CPUs. We run the following set of experiments and ablations, each of which takes 8 hours.
• 0 adversaries • 1 adversary • 3 adversaries • 5 adversaries • Domain randomization
for a total of 5 experiments for each of Hopper, Cheetah, Ant. For the best hyperparameters and each experiment listed above we run a seed search with 6 CPUs used per-seed, a process which takes about 12 hours. This leads to a total of 2 × 8 × 5 × 3 + 2 × 12 × 3 × 5 = 600 node-hours and 36 × 600 ≈ 22000
CPU hours. At a cost of ≈ 0.3 dollars per node per hour for EC2 spot instances, this gives ≈ 180 dollars to fully reproduce our results for this experiment. If the chosen hyperparameters are used and only the seeds are swept, this is ≈ 100 dollars.
G.3 RUN TIME AND SAMPLE COMPLEXITY
Here we briefly analyze the expected run-time of our algorithms. While there is an additional cost for adding a single adversary equal to the sum of the cost of computing gradients at train time and actions at run-time for an additional agent, there is no additional cost for adding additional adversaries. Since we divide the total set of samples per iteration amongst the adversaries, we compute approximately the same number of gradients and actions in the many-adversary case as we do in the single adversary case. In Fig. 16 plot of reward vs. wall-clock time supports this argument: the 0 adversary case runs the fastest but all the different adversary numbers complete 700 iterations of training in approximately the same amount of time. Additionally, Fig. 17 demonstrates that there is some variation in sample complexity but the trend is not consistent across adversary number.
G.4 CODE
Our code is available at ANONYMIZED. For our reinforcement learning code-base we used RLlib Liang et al. (2017) version 0.8.0 and did not make any custom modifications to the library.
H PURE NASH EQUILIBRIA DO NOT NECESSARILY EXIST
While there are canonical examples of games in which pure Nash equilibria do not exist, such as rock-paper-scissors, we are not aware of one for sequential games with continuous actions. Tessler et al. (2019) contains an example of a simple, horizon-1 MDP where duality is not satisfied: the pure minimax solution does not equal the value of the pure maximin solution, and a greater value can be achieved by randomizing one of the policies, showing that there is no pure equilibrium. | 1. What is the focus of the paper in terms of extending existing work in robust adversarial RL?
2. What are the strengths of the paper regarding its experimental results?
3. What are the weaknesses of the paper concerning its conceptual novelty?
4. How does the reviewer assess the significance of the proposed approach in combining augmentation and adversarial RL?
5. Is the contribution of the paper sufficient for publication in a top-tier conference like ICLR? | Review | Review
This paper extends the existing work on robust adversarial RL by training multiple adversarial agents from a population. Solid experimental results are presented to show that the proposed method improves the single adversary setting and domain randomization.
The experimental results in this paper seem solid to me. My biggest concern for this paper is that the conceptual novelty seems quite incremental. I mean, on the conceptual level, it is common sense from robust control that multiple uncertainty sources can be treated together (e.g., robust control theory handles structured uncertainty in such a manner). One can augment all the adversary networks into one big network, and then the problem formulation is the same as before. If we think of each phi_i as a block in this big "single adversary network", then what the authors have done can be thought of as block coordinate descent. From this perspective, there is not too much conceptual novelty, and the main contribution of this paper is a more detailed study showing how to combine augmentation and adversarial RL. I am not sure whether such a contribution itself is enough for ICLR or not.
ICLR | Title
Robust Reinforcement Learning using Adversarial Populations
Abstract
Reinforcement Learning (RL) is an effective tool for controller design but can struggle with issues of robustness, failing catastrophically when the underlying system dynamics are perturbed. The Robust RL formulation tackles this by adding worst-case adversarial noise to the dynamics and constructing the noise distribution as the solution to a zero-sum minimax game. However, existing work on learning solutions to the Robust RL formulation has primarily focused on training a single RL agent against a single adversary. In this work, we demonstrate that using a single adversary does not consistently yield robustness to dynamics variations under standard parametrizations of the adversary; the resulting policy is highly exploitable by new adversaries. We propose a population-based augmentation to the Robust RL formulation in which we randomly initialize a population of adversaries and sample from the population uniformly during training. We empirically validate across robotics benchmarks that the use of an adversarial population results in a less exploitable, more robust policy. Finally, we demonstrate that this approach provides comparable robustness and generalization as domain randomization on these benchmarks while avoiding a ubiquitous domain randomization failure mode.
1 INTRODUCTION
Developing controllers that work effectively across a wide range of potential deployment environments is one of the core challenges in engineering. The complexity of the physical world means that the models used to design controllers are often inaccurate. Optimization based control design approaches, such as reinforcement learning (RL), have no notion of model inaccuracy and can lead to controllers that fail catastrophically under mismatch. In this work, we aim to demonstrate an effective method for training reinforcement learning policies that are robust to model inaccuracy by designing controllers that are effective in the presence of worst-case adversarial noise in the dynamics.
An easily automated approach to inducing robustness is to formulate the problem as a zero-sum game and learn an adversary that perturbs the transition dynamics (Tessler et al., 2019; Kamalaruban et al., 2020; Pinto et al., 2017). If a global Nash equilibrium of this problem is found, then that equilibrium provides a lower bound on the performance of the policy under some bounded set of perturbations. Besides the benefit of removing user design once the perturbation mechanism is specified, this approach is maximally conservative, which is useful for safety critical applications.
However, the literature on learning an adversary predominantly uses a single, stochastic adversary. This raises a puzzling question: the zero-sum game does not necessarily have any pure Nash equilibria (see Appendix C in Tessler et al. (2019)) but the existing robust RL literature mostly appears to attempt to solve for pure Nash equilibria. That is, the most general form of the minimax problem searches over distributions of adversary and agent policies, however, this problem is approximated in the literature by a search for a single agent-adversary pair. We contend that this reduction to a single adversary approach can sometimes fail to result in improved robustness under standard parametrizations of the adversary policy.
The following example provides some intuition for why using a single adversary can decrease robustness. Consider a robot trying to learn to walk east-wards while an adversary outputs a force representing wind coming from the north or the south. For a fixed, deterministic adversary the agent knows that the wind will come from either south or north and can simply apply a counteracting force at each state. Once the adversary is removed, the robot will still apply the compensatory forces and
possibly become unstable. Stochastic Gaussian policies (ubiquitous in continuous control) offer little improvement: they cannot represent multi-modal perturbations. Under these standard policy parametrizations, we cannot use an adversary to endow the agent with a prior that a strong wind could persistently blow either north or south. This leaves the agent exploitable to this class of perturbations.
The use of a single adversary in the robustness literature is in contrast to the multi-player game literature. In multi-player games, large sets of adversaries are used to ensure that an agent cannot easily be exploited (Vinyals et al., 2019; Czarnecki et al., 2020; Brown & Sandholm, 2019). Drawing inspiration from this literature, we introduce RAP (Robustness via Adversary Populations): a randomly initialized population of adversaries that we sample from at each rollout and train alongside the agent. Returning to our example of a robot perturbed by wind, if the robot learns to cancel the north wind effectively, then that opens a niche for an adversary to exploit by applying forces in another direction. With a population, we can endow the robot with the prior that a strong wind could come from either direction and that it must walk carefully to avoid being toppled over.
Our contributions are as follows:
• Using a set of continuous robotics control tasks, we provide evidence that a single adversary does not have a consistent positive impact on the robustness of an RL policy while the use of an adversary population provides improved robustness across all considered examples.
• We investigate the source of the robustness and show that the single adversary policy is exploitable by new adversaries whereas policies trained with RAP are robust to new adversaries.
• We demonstrate that adversary populations provide comparable robustness to domain randomization while avoiding potential failure modes of domain randomization.
2 RELATED WORK
This work builds upon robust control (Zhou & Doyle, 1998), a branch of control theory focused on finding optimal controllers under worst-case perturbations of the system dynamics. The Robust Markov Decision Process (R-MDP) formulation extends this worst-case model uncertainty to uncertainty sets on the transition dynamics of an MDP and demonstrates that computationally tractable solutions exist for small, tabular MDPs (Nilim & El Ghaoui, 2005; Lim et al., 2013). For larger or continuous MDPs, one successful approach has been to use function approximation to compute approximate solutions to the R-MDP problem (Tamar et al., 2014).
One prominent variant of the R-MDP literature is to interpret the perturbations as an adversary and attempt to learn the distribution of the perturbation under a minimax objective. Two variants of this idea that tie in closely to our work are Robust Adversarial Reinforcement Learning (RARL)(Pinto et al., 2017) and Noisy Robust Markov Decision Processes (NR-MDP) (Tessler et al., 2019) which differ in how they parametrize the adversaries: RARL picks out specific robot joints that the adversary acts on while NR-MDP adds the adversary action to the agent action. Both of these works attempt to find an equilibrium of the minimax objective using a single adversary; in contrast our work uses a large set of adversaries and shows improved robustness relative to a single adversary.
A strong alternative to the minimax objective, domain randomization, asks a designer to explicitly define a distribution over environments that the agent should be robust to. For example, (Peng et al., 2018) varies simulator parameters to train a robot to robustly push a puck to a target location in the real world; (Antonova et al., 2017) adds noise to friction and actions to transfer an object pivoting policy directly from simulation to a Baxter robot. Additionally, domain randomization has been successfully used to build accurate object detectors solely from simulated data (Tobin et al., 2017) and to zero-shot transfer a quadcopter flight policy from simulation (Sadeghi & Levine, 2016).
The use of population based training is a standard technique in multi-agent settings. Alphastar, the grandmaster-level Starcraft bot, uses a population of "exploiter" agents that fine-tune against the bot to prevent it from developing exploitable strategies (Vinyals et al., 2019). (Czarnecki et al., 2020) establishes a set of sufficient geometric conditions on games under which the use of multiple adversaries will ensure gradual improvement in the strength of the agent policy. They empirically demonstrate that learning in games can often fail to converge without populations. Finally, Active Domain Randomization (Mehta et al., 2019) is a very close approach to ours, as they use a population
of adversaries to select domain randomization parameters whereas we use a population of adversaries to directly perturb the agent actions. However, they explicitly induce diversity using a repulsive term and use a discriminator to generate the reward.
3 BACKGROUND
In this work we use the framework of a multi-agent, finite-horizon, discounted Markov Decision Process (MDP) (Puterman, 1990) defined by a tuple 〈A_agent × A_adversary, S, T, r, γ〉. Here A_agent is the set of actions for the agent, A_adversary is the set of actions for the adversary, S is a set of states, T : A_agent × A_adversary × S → ∆(S) is a transition function, r : A_agent × A_adversary × S → R is a reward function and γ is a discount factor. S is shared between the adversaries as they share a state-space with the agent. The goal for a given MDP is to find a policy π_θ parametrized by θ that maximizes the expected cumulative discounted reward J_θ = E[ Σ_{t=0}^{T} γ^t r(s_t, a_t) | π_θ ]. The conditional in this expression is a short-hand to indicate that the actions in the MDP are sampled via a_t ∼ π_θ(s_t, a_{t−1}). We denote the agent policy parametrized by weights θ as π_θ and the policy of adversary i as π̄_{φ_i}. Actions sampled from the adversary policy π̄_{φ_i} will be written as ā^i_t. We use ξ to denote the parametrization of the system dynamics (e.g. different values of friction, mass, wind, etc.) and write the system dynamics for a given state and action as s_{t+1} ∼ f_ξ(s_t, a_t).
3.1 BASELINES
Here we outline prior work and the approaches that will be compared with RAP. Our baselines consist of a single adversary and domain randomization.
3.1.1 SINGLE MINIMAX ADVERSARY
Our adversary formulation uses the Noisy Action Robust MDP (Tessler et al., 2019) in which the adversary adds its actions onto the agent actions. The objective is
\max_{\theta} \; \mathbb{E}\left[ \sum_{t=0}^{T} \gamma^t\, r(s_t,\; a_t + \alpha \bar{a}_t) \;\middle|\; \pi_\theta, \bar{\pi}_\phi \right]
\min_{\phi} \; \mathbb{E}\left[ \sum_{t=0}^{T} \gamma^t\, r(s_t,\; a_t + \alpha \bar{a}_t) \;\middle|\; \pi_\theta, \bar{\pi}_\phi \right]   (1)
where α is a hyperparameter controlling the adversary strength. This is a game in which the adversary and agent play simultaneously. We note an important restriction inherent to this adversarial model. Since the adversary is only able to attack the agent through the actions, there is a restricted class of dynamical systems that it can represent; this set of dynamical systems may not necessarily align with the set of dynamical systems that the agent may be tested in. This is a restriction caused by the choice of adversarial perturbation and could be alleviated by using different adversarial parametrizations e.g. perturbing the transition function directly.
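To make the perturbation model concrete, a minimal Gym-style sketch of the noisy-action perturbation in Eq. 1 is given below; the wrapper class, the adversary-policy callable, and the default value of α are assumptions for illustration and are not the implementation used in this work.

import gym
import numpy as np

class NoisyActionAdversaryEnv(gym.Wrapper):
    # Hypothetical wrapper realizing the perturbed action a_t + alpha * abar_t of Eq. 1.
    def __init__(self, env, adversary_policy, alpha=0.25):
        super().__init__(env)
        self.adversary_policy = adversary_policy  # assumed: maps observation -> adversary action
        self.alpha = alpha
        self._obs = None

    def reset(self, **kwargs):
        self._obs = self.env.reset(**kwargs)
        return self._obs

    def step(self, agent_action):
        adversary_action = self.adversary_policy(self._obs)
        perturbed = np.asarray(agent_action) + self.alpha * np.asarray(adversary_action)
        obs, reward, done, info = self.env.step(perturbed)
        self._obs = obs
        # Under the zero-sum objective, the adversary's return is simply -reward.
        return obs, reward, done, info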
3.1.2 DYNAMICS RANDOMIZATION
Domain randomization is the setting in which the user specifies a set of environments which the agent should be robust to. This allows the user to directly encode knowledge about the likely deviations between training and testing domains. For example, the user may believe that friction is hard to measure precisely and wants to ensure that their agent is robust to variations in friction; they then specify that the agent will be trained with a wide range of possible friction values. We use ξ to denote some vector that parametrizes the set of training environments (e.g. friction, masses, system dynamics, etc.). We denote the domain over which ξ is drawn from as Ξ and use P (Ξ) to denote
some probability distribution over ξ. The domain randomization objective is
\max_{\theta} \; \mathbb{E}_{\xi \sim P(\Xi)}\left[ \mathbb{E}_{s_{t+1} \sim f_\xi(s_t, a_t)}\left[ \sum_{t=0}^{T} \gamma^t\, r(s_t, a_t) \;\middle|\; \pi_\theta \right] \right]
s_{t+1} \sim f_\xi(s_t, a_t), \quad a_t \sim \pi_\theta(s_t)   (2)
Here the goal is to find an agent that performs well on average across the distribution of training environments. Most commonly, and in this work, the parameters ξ are sampled uniformly over Ξ.
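For illustration, uniform sampling of ξ over a box-shaped Ξ can be sketched as follows; the function and the dictionary-based parametrization are assumptions rather than the implementation used here.

import numpy as np

def sample_randomization(ranges, rng=np.random):
    # Draw one xi uniformly over a box-shaped Xi given as {name: (low, high)}.
    return {name: rng.uniform(low, high) for name, (low, high) in ranges.items()}

# e.g. the Hopper ranges used later in Sec. 5.1
xi = sample_randomization({"friction": (0.7, 1.3), "mass": (0.7, 1.3)})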
4 RAP: ROBUSTNESS VIA ADVERSARY POPULATIONS
RAP extends the minimax objective with a population based approach. Instead of a single adversary, at each rollout we will sample uniformly from a population of adversaries. By using a population, the agent is forced to be robust to a wide variety of potential perturbations rather than a single perturbation. If the agent begins to overfit to any one adversary, this opens up a potential niche for another adversary to exploit. For problems with only one failure mode, we expect the adversaries to all come out identical to the minimax adversary, but as the number of failure modes increases the adversaries should begin to diversify to exploit the agent. To induce this diversity, we will rely on randomness in the gradient estimates and randomness in the initializations of the adversary networks rather than any explicit term that induces diversity.
Denoting π̄φi as the i-th adversary and i ∼ U(1, n) as the discrete uniform distribution defined on 1 through n, the objective becomes
\max_{\theta} \; \mathbb{E}_{i \sim U(1,n)}\left[ \sum_{t=0}^{T} \gamma^t\, r(s_t, a_t, \alpha \bar{a}^{\,i}_t) \;\middle|\; \pi_\theta, \bar{\pi}_{\phi_i} \right]
\min_{\phi_i} \; \mathbb{E}\left[ \sum_{t=0}^{T} \gamma^t\, r(s_t, a_t, \alpha \bar{a}^{\,i}_t) \;\middle|\; \pi_\theta, \bar{\pi}_{\phi_i} \right] \quad \forall\, i = 1, \dots, n
s_{t+1} \sim f(s_t, a_t + \alpha \bar{a}_t)   (3)
For a single adversary, this is equivalent to the minimax adversary described in Sec. 3.1.1. This is a game in which the adversary and agent play simultaneously.
We will optimize this objective by converting the problem into the equivalent zero-sum game. At the start of each rollout, we will sample an adversary index from the uniform distribution and collect a trajectory using the agent and the selected adversary. For notational simplicity, we assume each trajectory is of length T and that adversary i will participate in $J_i$ total trajectories while, since the agent participates in every rollout, the agent will receive J total trajectories. We denote the j-th collected trajectory for the agent as $\tau_j = (s_0, a_0, r_0, s_1) \times \cdots \times (s_T, a_T, r_T, s_{T+1})$ and the associated trajectory for adversary i as $\tau^i_j = (s_0, a_0, -r_0, s_1) \times \cdots \times (s_T, a_T, -r_T, s_{T+1})$. Note that the adversary reward is simply the negative of the agent reward. We will use Proximal Policy Optimization (Schulman et al., 2017) (PPO) to update our policies. We caution that we have overloaded notation slightly here: for adversary $i$, $\tau^i_{j=1:J_i}$ refers only to the trajectories in which the adversary was selected, and adversaries will only be updated using trajectories where they were active.
At the end of a training iteration, we update all our policies using gradient descent. The algorithm is summarized below:
Algorithm 1: Robustness via Adversary Populations
Initialize θ, φ_1, ..., φ_n using Xavier initialization (Glorot & Bengio, 2010);
while not converged do
    for rollout j = 1 ... J do
        sample adversary i ∼ U(1, n);
        run policies π_θ, π̄_{φ_i} in the environment until termination;
        collect trajectories τ_j, τ^i_j
    end
    update θ, φ_1, ..., φ_n using PPO (Schulman et al., 2017), with trajectories τ_j for θ and τ^i_j for each φ_i;
end
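A minimal Python sketch of Algorithm 1 follows; the rollout and PPO-update callables are assumed stand-ins for the RLlib machinery actually used, and the default iteration counts mirror the settings reported in the Appendix.

import random

def train_rap(agent, adversaries, collect_rollout, ppo_update,
              num_iterations=700, rollouts_per_iter=100):
    # `collect_rollout(agent, adversary)` is an assumed helper returning a list of
    # (s, a, r, s_next) tuples; `ppo_update(policy, trajectories)` is an assumed
    # stand-in for the PPO update (Schulman et al., 2017).
    for _ in range(num_iterations):
        agent_trajs = []
        adv_trajs = {i: [] for i in range(len(adversaries))}
        for _ in range(rollouts_per_iter):
            i = random.randrange(len(adversaries))      # sample adversary i ~ U(1, n)
            traj = collect_rollout(agent, adversaries[i])
            agent_trajs.append(traj)
            # zero-sum game: the adversary sees the negated agent reward
            adv_trajs[i].append([(s, a, -r, s_next) for (s, a, r, s_next) in traj])
        ppo_update(agent, agent_trajs)
        for i, adversary in enumerate(adversaries):
            if adv_trajs[i]:                            # only rollouts where adversary i was active
                ppo_update(adversary, adv_trajs[i])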
5 EXPERIMENTS
In this section we present experiments on continuous control tasks from the OpenAI Gym Suite (Brockman et al., 2016; Todorov et al., 2012). We compare with the existing literature and evaluate the efficacy of a population of learned adversaries across a wide range of state and action space sizes. We investigate the following hypotheses:
H1. Agents are more likely to overfit to a single adversary than a population of adversaries, leaving them less robust on in-distribution tasks.
H2. Agents trained against a population of adversaries will generalize better, leading to improved performance on out-of-distribution tasks.
In-distribution tasks refer to the agent playing against perturbations that are in the training distribution: adversaries that add their actions onto the agent's. However, the particular form of the adversaries and their restricted perturbation magnitude mean that there are many dynamical systems that they cannot represent (for example, significant variations of joint mass and friction); tasks drawn from such systems are denoted out-of-distribution tasks. All of the tasks in the test set described in Sec. 5.1 are likely out-of-distribution tasks.
5.1 EXPERIMENTAL SETUP AND HYPERPARAMETER SELECTION
While we provide exact details of the hyperparameters in the Appendix, adversarial settings require additional complexity in hyperparameter selection. In the standard RL procedure, optimal hyperparameters are selected on the basis of maximum expected cumulative reward. However, if an agent playing against an adversary achieves a large cumulative reward, it is possible that the agent was simply playing against a weak adversary. Conversely, a low score does not necessarily indicate a strong adversary nor robustness: it could simply mean that we trained a weak agent.
To address this, we adopt a version of the train-validate-test split from supervised learning. We use the mean policy performance on a suite of validation tasks to select the hyperparameters, then we train the policy across ten seeds and report the resultant mean and standard deviation over twenty trajectories. Finally, we evaluate the seeds on a holdout test set of eight additional model-mismatch tasks. These tasks vary significantly in difficulty; for visual clarity we report the average across tasks in this paper and report the full breakdown across tasks in the Appendix.
We experiment with the Hopper, Ant, and Half Cheetah continuous control environments used in the original RARL paper Pinto et al. (2017); these are shown in Fig. 1. To generate the validation model mismatch, we pre-define ranges of mass and friction coefficients as follows: for Hopper, mass ∈ [0.7, 1.3] and friction ∈ [0.7, 1.3]; Half Cheetah and Ant, mass ∈ [0.5, 1.5] and friction ∈ [0.1, 0.9]. We scale the friction of every Mujoco geom and the mass of the torso with the same (respective) coefficients. We compare the robustness of agents trained via RAP against: 1) agents trained against a single adversary in a zero-sum game, 2) oracle agents trained using domain randomization, and 3) an agent trained only using PPO and no perturbation mechanism. To train the domain randomization
oracle, at each rollout we uniformly sample a friction and mass coefficient from the validation set ranges. We then scale the friction of all geoms and the mass of the torso by their respective coefficients; this constitutes directly training on the validation set. To generate the test set of model mismatch, we take both the highest and lowest friction coefficients from the validation range and apply them to different combinations of individual geoms. For the exact selected combinations, please refer to the Appendix.
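For illustration, such friction and mass scalings can be applied directly to a Mujoco environment as sketched below; the mujoco-py model attributes (geom_friction, body_mass, body_name2id) and the gym environment id are assumed interfaces, not the exact code used in this work.

import gym

def make_mismatched_env(env_name, friction_coef, mass_coef, torso_name="torso"):
    # Scale every geom's friction and the torso mass by the given coefficients (sketch).
    env = gym.make(env_name)
    model = env.unwrapped.model                      # mujoco-py MjModel (assumed interface)
    model.geom_friction[:] = model.geom_friction * friction_coef
    torso_id = model.body_name2id(torso_name)
    model.body_mass[torso_id] = model.body_mass[torso_id] * mass_coef
    return env

# e.g. one validation combination for Hopper
env = make_mismatched_env("Hopper-v2", friction_coef=1.3, mass_coef=0.7)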
As further validation of the benefits of RAP, we include an additional set of experiments on a continuous control task, a gridworld maze search task, and a Bernoulli Bandit task in Appendix Sec. F. Finally, we note that both our agent and adversary networks are two-layer neural networks with 64 hidden units in each layer and a tanh nonlinearity.
6 RESULTS
H1. In-Distribution Tasks: Analysis of Overfitting A globally minimax optimal adversary should be unexploitable and perform equally well against any adversary of equal strength. We investigate the optimality of our policy by asking whether the minimax agent is robust to swaps of adversaries from different training runs, i.e. different seeds. Fig. 2 shows the result of these swaps for the one adversary and three adversary case. The diagonal corresponds to playing against the adversaries the agent was trained with while every other square corresponds to playing against adversaries from a different seed. To simplify presentation, in the three adversary case, each square is the average performance against all the adversaries from that seed. We observe that the agent trained against three adversaries (top row right) is robust under swaps while the single adversary case is not (top row left). The agent trained against a single adversary is highly exploitable, as can be seen by its extremely sub-par performance against an adversary from any other seed. Since the adversaries off-diagonal are feasible adversaries, this suggests that we have found a poor local optimum of the objective.
In contrast, the three adversary case is generally robust regardless of which adversary it plays against, suggesting that the use of additional adversaries has made the agent more robust. One possible hypothesis for why this could be occurring is that the adversaries in the "3 adversary" case are somehow weaker than the adversaries in the "1 adversary" case. The middle row of the figure shows that it is not the case that the improved performance of the agent playing against the three adversaries is due to some weakness of the adversaries. If anything, the adversaries from the three adversary case are stronger, as the agent trained against 1 adversary performs extremely poorly when playing against the three adversaries (left) whereas the agent trained against three adversaries still performs well when playing against the adversaries from the single-adversary runs. Finally, the bottom row investigates how an agent trained with domain randomization fares against adversaries from either training regime. In neither case is the domain randomization agent robust on these tasks.
H2. Out-of-Distribution Tasks: Robustness and Generalization of Population Training
Here we present the results from the validation and holdout test sets described in Section 5.1. We compare the performance of training with adversary populations of size three and five against vanilla PPO, the domain randomization oracle, and the single minimax adversary. We refer to domain randomization as an oracle as it is trained directly on the test distribution.
Fig.6 shows the average reward (the average of ten seeds across the validation or test sets respectively) for each environment. Table 1 gives the corresponding numerical values and the percent change of each policy from the baseline. Standard deviations are omitted on the test set due to wide variation in task difficulty; the individual tests that we aggregate here are reported in the Appendix with
appropriate error bars. In all environments we achieve a higher reward across both the validation and holdout test sets using RAP of size three and/or five when compared to the single minimax adversary case. These results from testing on new environments with altered dynamics support hypothesis H2: training with a population of adversaries leads to more robust policies than training with a single adversary on out-of-distribution tasks. Furthermore, while the performance is only comparable with the domain randomization oracle, the adversarial approach does not require prior engineering of appropriate randomizations. Moreover, despite being trained directly on these out-of-distribution tasks, domain randomization can have serious failure modes due to its formulation. A detailed analysis of this can be found in Appendix E.
For a more detailed comparison of robustness across the validation set, Fig. 4 shows heatmaps of the performance across all the mass, friction coefficient combinations. Here we highlight the heatmaps for Hopper and Half Cheetah for vanilla PPO, domain randomization oracle, single adversary, and best adversary population size. Additional heatmaps for other adversary population sizes and the Ant environment can be found in the Appendix. Note that Fig. 4 is an example of a case where a single adversary has negligible effect on or slightly reduces the performance of the resultant policy on the
validation set. This supports our hypothesis that a single adversary can actually lower the robustness of an agent.
7 CONCLUSIONS AND FUTURE WORK
In this work we demonstrate that the use of a single adversary to approximate the solution to a minimax problem does not consistently lead to improved robustness. We propose a solution through the use of multiple adversaries (RAP), and demonstrate that this provides robustness across a variety of robotics benchmarks. We also compare RAP with domain randomization and demonstrate that while DR can lead to a more robust policy, it requires careful parametrization of the domain we sample from to ensure robustness. RAP does not require this tuning, allowing for use in domains where appropriate tuning requires extensive prior knowledge or expertise.
There are several open questions stemming from this work. While we empirically demonstrate the effects of RAP, we do not have a compelling theoretical understanding of why multiple adversaries are helping. Perhaps RAP helps approximate a mixed Nash equilibrium as discussed in Sec. 1 or
perhaps population based training increases the likelihood that one of the adversaries is strong? Would the benefits of RAP disappear if a single adversary had the ability to represent mixed Nash?
There are some extensions of this work that we would like to pursue. We have looked at the robustness of our approach in simulated settings; future work will examine whether this robustness transfers to real-world settings. Additionally, our agents are currently memory-less and therefore cannot perform adversary identification; perhaps memory leads to a system-identification procedure that improves transfer performance. Our adversaries can also be viewed as forming a task distribution, allowing them to be used in continual learning approaches like MAML (Nagabandi et al., 2018) where domain randomization is frequently used to construct task distributions.
A FULL DESCRIPTION OF THE CONTINUOUS CONTROL MDPS
We use the Mujoco ant, cheetah, and hopper environments as a test of the efficacy of our strategy versus the 0 adversary, 1 adversary, and domain randomization baselines. We use the Noisy Action Robust MDP formulation Tessler et al. (2019) for our adversary parametrization. If the normal system dynamics are
s_{k+1} = s_k + f(s_k, a_k)\,\Delta t
the system dynamics under the adversary are
s_{k+1} = s_k + f(s_k, a_k + a^{adv}_k)\,\Delta t
where $a^{adv}_k$ is the adversary action at time $k$.
The notion here is that the adversary action is passed through the dynamics function and represents some additional set of dynamics. It is standard to clip actions within some boundary, but for this reason we clip the agent and adversary actions separately: otherwise, an agent would be able to limit the effect of the adversary by always taking actions at the bounds of its clipping range. The agent actions are clipped to [−1, 1] in the Hopper environment and the adversary actions are clipped to [−0.25, 0.25]. The MDP through which we train the agent policy is characterized by the following states, actions, and rewards:
• $s^{agent}_t = [o_t, a_t]$, where $o_t$ is an observation returned by the environment and $a_t$ is the action taken by the agent.
• We use the standard rewards provided by the OpenAI Gym Mujoco environments at https://github.com/openai/gym/tree/master/gym/envs/mujoco. For the exact functions, please refer to the code at ANONYMIZED.
• $a^{agent}_t \in [a_{min}, a_{max}]^n$.
The MDP for adversary i is the following:
• $s_t = s^{agent}_t$. The adversary sees the same states as the agent.
• The adversary reward is the negative of the agent reward.
• $a^{adv}_t \in [a^{adv}_{min}, a^{adv}_{max}]^n$.
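As a small illustration of the separately clipped actions described above, the perturbed dynamics step can be sketched as follows; the dynamics callable f and the bound values (taken from the Hopper setting above) are stand-ins, not the simulator code.

import numpy as np

def perturbed_step(s, agent_action, adversary_action, f, dt,
                   agent_bounds=(-1.0, 1.0), adv_bounds=(-0.25, 0.25)):
    # s_{k+1} = s_k + f(s_k, a_k + a^adv_k) * dt, with the two actions clipped separately.
    a = np.clip(agent_action, *agent_bounds)
    a_adv = np.clip(adversary_action, *adv_bounds)
    # Clipping the sum instead would let the agent cancel the adversary by saturating its own action.
    return s + f(s, a + a_adv) * dt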
For our domain randomization Hopper baseline, we use the following randomization: at each rollout, we scale the friction of all joints by a single value uniformly sampled from [0.7, 1.3]. We also randomly scale the mass of the ’torso’ link by a single value sampled from [0.7, 1.3]. For Half-Cheetah and Ant the range for friction is [0.1, 0.9] and for mass the range is [0.5, 1.5].
B INCREASING ADVERSARY POOL SIZE
We investigate whether RAP is robust to adversary number as this would be a useful property to minimize hyperparameter search. Here we hypothesize that while having more adversaries can represent a wider range of dynamics to learn to be robust to, we expect there to be diminishing returns due to the decreased batch size that each adversary receives (total number of environment steps is held constant across all training variations). We expect decreasing batch size to lead to worse agent policies since the batch will contain under-trained adversary policies. We cap the number of adversaries at eleven as our machines ran out of memory at this value. We run ten seeds for every adversary value and Fig. 5 shows the results for Hopper. Agent robustness on the test set increases monotonically up to three adversaries and roughly begins to decrease after that point. This suggests that a trade-off between adversary number and performance exists although we do not definitively show that diminishing batch sizes is the source of this trade-off. However, we observe in Fig. 6 that both three and five adversaries perform well across all studied Mujoco domains.
C HOLDOUT TESTS
In this section we describe in detail all of the holdout tests used.
C.1 HOPPER
The Mujoco geom properties that we modified are attached to a particular body and determine its appearance and collision properties. For the Mujoco holdout transfer tests we pick a subset of the hopper ‘geom’ elements and scale their contact friction values by the maximum friction coefficient, 1.3. For the rest of the ‘geom’ elements, we scale the contact friction by the minimum value of 0.7. The body geoms and their names are visible in Fig. 7.
The exact combinations and the corresponding test name are indicated in Table 2 for Hopper.
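A hypothetical sketch of how such a holdout test could be constructed programmatically is given below; the mujoco-py attributes (geom_name2id, ngeom, geom_friction) and the example geom name are assumptions rather than the exact code used here.

def apply_holdout_friction(env, high_friction_geoms, low=0.7, high=1.3):
    # Scale the named geoms' friction by `high` and every other geom by `low` (sketch).
    model = env.unwrapped.model                      # mujoco-py MjModel (assumed interface)
    high_ids = {model.geom_name2id(name) for name in high_friction_geoms}
    for gid in range(model.ngeom):
        scale = high if gid in high_ids else low
        model.geom_friction[gid] = model.geom_friction[gid] * scale
    return env

# e.g. a hypothetical Hopper holdout test: high friction on the foot geom, low elsewhere
# apply_holdout_friction(gym.make("Hopper-v2"), ["foot_geom"])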
C.2 CHEETAH
The Mujoco geom properties that we modified are attached to a particular body and determine its appearance and collision properties. For the Mujoco holdout transfer tests we pick a subset of the
cheetah ‘geom’ elements and scale their contact friction values by the maximum friction coefficient, 0.9. For the rest of the ‘geom’ elements, we scale the contact friction by the minimum value of 0.1. The body geoms and their names are visible in Fig. 8.
The exact combinations and the corresponding test names are indicated in Table 3 for Half Cheetah.
C.3 ANT
We will use torso to indicate the head piece, leg to refer to one of the four legs that contact the ground, and ’aux’ to indicate the geom that connects the leg to the torso. Since the ant is symmetric, we adopt the convention that two of the legs are front-left and front-right and two legs are back-left and back-right. Fig. 9 depicts the convention. For the Mujoco holdout transfer tests we pick a subset of the ant ‘geom’ elements and scale their contact friction values by the maximum friction coefficient, 0.9. For the rest of the ‘geom’ elements, we scale the contact friction by the minimum value of 0.1.
The exact combinations and the corresponding test names are indicated in Table 4 for Ant.
D RESULTS
Here we recompute the values of all the results and display them with appropriate standard deviations in tabular form.
There was not space for the ant validation set results so they are reproduced here.
E CHALLENGES OF DOMAIN RANDOMIZATION
In our experiments, we find that naive parametrization of domain randomization can result in a brittle policy, even when evaluated on the same distribution it was trained on.
Effect of Domain Randomization Parametrization
From Fig. 6, we see that in the Ant and Hopper domains, the DR oracle achieves the highest transfer reward in the validation set as expected since the DR oracle is trained directly on the validation set. Interestingly, we found that the domain randomization policy performed much worse on the Half Cheetah environment, despite having access to the mass and friction coefficients during training. Looking at the performance for each mass and friction combination in Fig. 11, we found that the DR agent was able to perform much better at the low friction coefficients and learned to prioritize those values at the cost of significantly worse performance on average. This highlights a potential issue with domain randomization: while training across a wide variety of dynamics parameters can increase robustness, naive parametrizations can cause the policy to exploit subsets of the randomized domain and lead to a brittle policy. This is a problem inherent to the expectation across domains that is used in domain randomization; if some subset of randomizations have sufficiently high reward the agent will prioritize performance on those at the expense of robustness.
We hypothesize that this is due to the DR objective in Eq. 2 optimizing in expectation over the sampling range. To test this, we created a separate range of ‘good’ friction parameters [0.5, 1.5] and compared the robustness of a DR policy trained with ‘good‘ range against a DR policy trained with ‘bad’ range [0.1, 0.9] in Fig. 11. Here we see that a ‘good’ parametrization leads to the expected result where domain randomization is the most robust. We observe that domain randomization underperforms adversarial training on the validation set despite the validation set literally constituting the training set for domain randomization. This suggests that underlying optimization difficulties caused by significant variations in reward scaling are partially to blame for the poor performance of domain randomization. Notably, the adversary-based methods are not susceptible to the same parametrization issues.
Alternative DR policy architecture
As discussed above and also identified in Rajeswaran et al. (2016), the expectation across randomizations that is used in domain randomization causes it to prioritize a policy that performs well in a high-reward subset of the randomization domains. This is harmless when domain randomization is used for randomizations of state, such as color, where all the randomization environments have the same expected reward, but has more pernicious effects in dynamics randomizations. Consider a set of N randomization environments, N − 1 of which have reward Rlow and one of which
has reward Rhigh where Rhigh >> Rlow. If the agent cannot identify which of the randomization environments it is in, the intuitively optimal solution is to pick the policy that optimizes the high reward environment. One possible way out of the quandary is to use an agent that has some memory, such as an LSTM-based policy, thus giving the possibility of identifying which environment the agent is in and deploying the appropriate response. However, if Rhigh is sufficiently large and there is some reduction in reward associated with performing the system-identification necessary to identify the randomization, then the agent will not perform the system identification and will prioritize achieving Rhigh. As an illustration of this challenge, Fig. 12 compares the results of domain randomization on the half-cheetah environment with and without memory. In the memory case, we use a 64 unit LSTM. As can be seen, there is an improvement in the ability of the domain randomized policy to perform well on the full range of low-friction / high mass values, but the improved performance does not extend to higher friction values. In fact, the performance contrast is enhanced even further as the policy does a good deal worse on the high friction values than the case without memory.
F ADDITIONAL EXPERIMENTS
Here we outline a few more experiments we ran that demonstrate the value of additional adversaries. We run the following tasks:
F.1 DEEPMIND CONTROL CATCH
This task uses the same Markov Decision Process described in Sec. A. The challenge (Tassa et al., 2020), pictured in Fig. 13, is to get the ball to fall inside the cup. As in the other continuous control
tasks, we apply the adversary to the actions of the agent (which controls the cup). We then test on variations of the mass of both the ball and the cup. The heatmaps for this task are presented in Fig. 14, where the 3 adversary case provides a slight improvement in the robustness region relative to the 1 adversary case.
F.2 MULTI-ARMED BERNOULLI BANDITS
As an illustrative example, we examine a multi-armed stochastic bandit, a problem widely studied in reinforcement learning literature. Generally, successful strategies for multi-arm bandit problems involve successfully balancing the exploration across arms and exploiting the ’best’ arm. A "robust" strategy should have vanishing regret as the time horizon goes to infinity. We construct a 10-armed bandit where each arm i is parametrized by a value p where p is the probability of that arm returning
1. The goal of the agent is to minimize the total cumulative regret $R_n$ over a horizon of $n$ steps:
R_n = n \max_i \mu_i - \mathbb{E}\Big[\sum_{t=0}^{n} X(a_t)\Big]
where $a_t$ corresponds to picking a particular arm and $X(a_t)$ is the payoff of the chosen arm. At each step, the agent is given an observation buffer of stacked frames consisting of all previous (action, reward) pairs, padded with zeros to keep the length fixed. The adversary has a horizon of 1; at time-step zero it receives an observation of 0 and outputs the probability for each arm. At the termination of the horizon the adversary receives the negative of the cumulative agent reward. For our domain randomization baseline we use uniform sampling of the p value for each arm. We chose a horizon length of T = 100 steps. The MDP of the agent is characterized as follows:
• $s_t = [\,0_{n(T-t)\times 1},\, r_t, a_t, r_{t-1}, a_{t-1}, \dots, r_0, a_0\,]$
• $r_t = X(a_t) - \max_i \mu_i$
• $a^{agent}_t \in \{0, \dots, 9\}$
At each step, the agent is given an observation buffer of stacked frames consisting of all previous (action, reward) pairs. The buffer matching the horizon length is padded with zeros. For each training step, the agent receives a reward of the negative expected regret. We set up the adversary problem as an MDP with a horizon of 1.
• $s_t = [0.0]$
• $r = -\sum_{t=1}^{T} r_t$
• $a^{adv} \in [0, 1]^{10}$
During adversarial training, we sample a random adversary at the beginning of each rollout and allow it to pick 10 p values, which are then shuffled randomly and assigned to the arms (this is to prevent the agent from deterministically knowing which arm has which p value). The adversary is always given an observation of a vector of zeros and is rewarded once at the end of the rollout. We also construct a hold-out test of two bandit examples which we colloquially refer to as "evenly spread" and "one good arm." In "evenly spread", the arms, going from 1 to 10, have probabilities evenly spaced in steps of 0.1: 0, 0.1, 0.2, 0.3, . . . , 0.8, 0.9. In "one good arm", 9 arms have probability 0.1 and one arm has probability 0.9. As our policy for the agent, we use a Gated Recurrent Unit network with hidden size 256.
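A minimal sketch of the Bernoulli bandit and the realized cumulative regret described above follows; the helper name and the use of single-run (rather than expected) regret are simplifications for illustration.

import numpy as np

def bandit_regret(arm_probs, actions, rng=np.random):
    # Realized cumulative regret R_n = n * max_i mu_i - sum_t X(a_t) for one run (sketch).
    arm_probs = np.asarray(arm_probs, dtype=float)
    rewards = [float(rng.random() < arm_probs[a]) for a in actions]
    return len(actions) * arm_probs.max() - sum(rewards)

# the two holdout instances described above
evenly_spread = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
one_good_arm = [0.1] * 9 + [0.9]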
An interesting feature of the bandit task is that it makes clear that the single adversary approach corresponds to training on a single, adversarially constructed bandit instance. Surprisingly, as indicated in Fig. 15, this does not perform terribly on our two holdout tasks. However, there is a clear improvement on both tasks in the four adversary case. All adversarial approaches outperform an Upper Confidence Bound-based expert (shown in red). Interestingly, domain randomization, which had superficially good reward at training time, completely fails on the "one good arm" holdout task. This suggests another possible failure mode of domain randomization where in high dimensions uniform sampling may just fail to yield interesting training tasks. Finally, we note that since the upper confidence approach only tries to minimize regret asymptotically, our outperforming it may simply be due to our relatively short horizon; we simply provide it as a baseline.
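For reference, a standard UCB1 baseline of the kind referred to above can be sketched as follows; the exact Upper Confidence Bound variant used in our experiments is not specified here, so this is only an illustrative implementation under that assumption.

import numpy as np

def ucb1_actions(arm_probs, horizon=100, rng=np.random):
    # UCB1 on a Bernoulli bandit: returns the sequence of chosen arms (sketch).
    n_arms = len(arm_probs)
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    actions = []
    for t in range(horizon):
        if t < n_arms:
            a = t                                           # pull each arm once first
        else:
            bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
            a = int(np.argmax(means + bonus))
        reward = float(rng.random() < arm_probs[a])
        counts[a] += 1
        means[a] += (reward - means[a]) / counts[a]         # incremental mean update
        actions.append(a)
    return actions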
G COST AND HYPERPARAMETERS
Here we reproduce the hyperparameters we used in each experiment and compute the expected runtime and cost of each experiment. Numbers indicated in {} were each used for one run. Otherwise the parameter was kept fixed at the indicated value.
G.1 HYPERPARAMETERS
For Mujoco the hyperparameters are:
• Learning rate:
– {0.0003, 0.0005} for Half Cheetah
– {0.0005, 0.00005} for Hopper and Ant
• Generalized Advantage Estimation λ:
– {0.9, 0.95, 1.0} for Half Cheetah
– {0.5, 0.9, 1.0} for Hopper and Ant
• Discount factor γ = 0.995
• Training batch size: 100000
• SGD minibatch size: 640
• Number of SGD steps per iteration: 10
• Number of iterations: 700
• We set the seed to 0 for all hyperparameter runs.
• The maximum horizon is 1000 steps.
For the validation across seeds we used 10 seeds ranging from 0 to 9. All other hyperparameters are the default values in RLlib Liang et al. (2017) 0.8.0
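For concreteness, the fixed settings above could be collected into an RLlib-style PPO configuration as sketched below; the key names follow common RLlib conventions and are assumptions, not a reproduction of the configuration file used for these experiments.

# Hypothetical RLlib-style PPO configuration mirroring the Hopper settings above.
ppo_config = {
    "lr": 5e-4,                 # one of the swept learning rates for Hopper/Ant
    "lambda": 0.9,              # GAE lambda (swept)
    "gamma": 0.995,             # discount factor
    "train_batch_size": 100000,
    "sgd_minibatch_size": 640,
    "num_sgd_iter": 10,
    "horizon": 1000,
    "seed": 0,
    "model": {"fcnet_hiddens": [64, 64], "fcnet_activation": "tanh"},
}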
G.2 COST
For all of our experiments we used AWS EC2 c4.8xlarge instances which come with 36 virtual CPUs. For the Mujoco experiments, we use 2 nodes and 11 CPUs per hyper-parameter, leading to one full hyper-parameter sweep fitting onto the 72 CPUs. We run the following set of experiments and ablations, each of which takes 8 hours.
• 0 adversaries
• 1 adversary
• 3 adversaries
• 5 adversaries
• Domain randomization
for a total of 5 experiments for each of Hopper, Cheetah, and Ant. For the best hyperparameters and each experiment listed above we run a seed search with 6 CPUs used per seed, a process which takes about 12 hours. This leads to a total of 2 × 8 × 5 × 3 + 2 × 12 × 3 × 5 = 600 node hours and 36 × 600 ≈ 22000 CPU hours. At a cost of ≈ 0.3 dollars per node per hour for EC2 spot instances, this gives ≈ 180 dollars to fully reproduce our results for this experiment. If the chosen hyperparameters are used and only the seeds are swept, this is ≈ 100 dollars.
G.3 RUN TIME AND SAMPLE COMPLEXITY
Here we briefly analyze the expected run-time of our algorithms. While adding a single adversary incurs an additional cost equal to the sum of the cost of computing gradients at train time and actions at run-time for one extra agent, adding further adversaries incurs no additional cost. Since we divide the total set of samples per iteration amongst the adversaries, we compute approximately the same number of gradients and actions in the many-adversary case as we do in the single adversary case. The plot of reward vs. wall-clock time in Fig. 16 supports this argument: the 0 adversary case runs the fastest, but all the different adversary numbers complete 700 iterations of training in approximately the same amount of time. Additionally, Fig. 17 demonstrates that there is some variation in sample complexity but the trend is not consistent across adversary number.
G.4 CODE
Our code is available at ANONYMIZED. For our reinforcement learning code-base we used RLlib Liang et al. (2017) version 0.8.0 and did not make any custom modifications to the library.
H PURE NASH EQUILIBRIA DO NOT NECESSARILY EXIST
While there are canonical examples of games in which pure Nash equilibria do not exist, such as rock-paper-scissors, we are not aware of one for sequential games with continuous actions. Tessler et al. (2019) contains an example of a simple, horizon-1 MDP where duality is not satisfied: the value of the pure minimax solution does not equal the value of the pure maximin solution, and a greater value can be achieved by randomizing one of the policies, showing that there is no pure equilibrium.
1. What is the main contribution of the paper regarding reinforcement learning?
2. What are the strengths and weaknesses of the proposed algorithm compared to prior works?
3. Do you have any concerns or misunderstandings regarding the motivation and explanation of the paper?
4. How does the reviewer assess the effectiveness and robustness of the proposed algorithm in different domains and scenarios?
5. Are there any suggestions for further experiments or improvements to enhance the performance of the algorithm?
Review
This paper proposes an algorithm to improve the robustness of reinforcement learning. The algorithm, RAP, combines ideas from domain randomization and adversarial training. Specifically, during learning, it trains an ensemble of adversaries to attack the learner, with the hope that the learner becomes robust to various situations. The experimental results show the proposed algorithm indeed outperforms the respective baselines here (single-adversary training and domain randomization) in its ability to generalize to other test domains.
I think overall the proposed algorithm presents a simple and nice way to join the strengths of adversarial training and domain randomization. Indeed, we can view single-adversary training and domain randomization as special cases of RAP, which either use a single adversary or do not perform any updates on the adversaries. For this reason, I would imagine RAP can be quite effective in practice.
Issues with motivation and explanation
However, despite this nice design, I think the main motivation and the explanation of why the proposed algorithm works are not completely correct. In the introduction, the authors motivate the use of multiple adversaries by the fact that a pure Nash equilibrium does not always exist in zero-sum two-player games. However, adversarial training is not necessarily about solving for the Nash equilibrium (which aims to find solutions such that minimax = maximin) but rather about solving a maximin problem "only", whose solution is always well defined.
The motivating robot example is also quite misleading. In that case, the failure is due to the learner policy not even solving the maximin problem. Note that in maximin, the adversary is chosen after knowing the learner's policy. This robot example is rather saying that minimax, where the learner optimizes after the adversary is chosen, is insufficient to generate robust behavior. However, this is not what adversarial training is about.
I think it is probably because of this misunderstanding that the authors motivate the issue of single-adversary training as the agent overfitting to the single adversary. Again, this is due to incorrectly interpreting the maximin in (1) as minimax.
Nonetheless, I do believe the proposed algorithm is effective in practice, but for a different reason from what the authors explain. I think RAP does improve upon single-adversary training. Because it uses multiple randomly initialized adversarial policies, it may have a higher chance of overcoming the non-convexity issue in the min part of maximin and therefore a higher chance of finding the maximin solution. In other words, the failure of the single-adversary implementation is mostly due to optimization difficulty, not to the solution concept being incorrect. The proposed algorithm is more of an optimization heuristic to better approximate the maximin problem (which actually is quite commonly used in the optimization literature). In fact, given the learner policy, I believe that by further taking the min among the multiple adversaries and using that for the learner update (i.e., the learner would use trajectories only from the worst adversary rather than from all of them), the robustness of the algorithm might further improve.
Experiments
Given that this work lacks theoretical insights, I expect more experiments to be done to verify the proposed algorithm. The current paper only tests RAP on Mujoco environments. To understand how well this algorithm works, the authors should test it on more tasks with diverse characteristics (e.g. tabular, video games, traffic) rather than just the continuous robotics control domain.
In figure 2, I think a fairer comparison should let agents of both sides face the same adversary. Nonetheless, I agree that the performance plots later on are sufficient to show that RAP is better.
Lastly, I like the in-depth discussion about the failure of domain randomization in Half Cheetah, as it was also my main question when reading the earlier part of the paper. However, I am wondering if the bad performance is due to the learner's policy not being expressive enough. I cannot find the description of the exact architecture used in the paper, but can you try training with a more expressive policy and see if domain randomization still performs worse than direct training without the adversary?
One prominent variant of the R-MDP literature is to interpret the perturbations as an adversary and attempt to learn the distribution of the perturbation under a minimax objective. Two variants of this idea that tie in closely to our work are Robust Adversarial Reinforcement Learning (RARL)(Pinto et al., 2017) and Noisy Robust Markov Decision Processes (NR-MDP) (Tessler et al., 2019) which differ in how they parametrize the adversaries: RARL picks out specific robot joints that the adversary acts on while NR-MDP adds the adversary action to the agent action. Both of these works attempt to find an equilibrium of the minimax objective using a single adversary; in contrast our work uses a large set of adversaries and shows improved robustness relative to a single adversary.
A strong alternative to the minimax objective, domain randomization, asks a designer to explicitly define a distribution over environments that the agent should be robust to. For example, (Peng et al., 2018) varies simulator parameters to train a robot to robustly push a puck to a target location in the real world; (Antonova et al., 2017) adds noise to friction and actions to transfer an object pivoting policy directly from simulation to a Baxter robot. Additionally, domain randomization has been successfully used to build accurate object detectors solely from simulated data (Tobin et al., 2017) and to zero-shot transfer a quadcopter flight policy from simulation (Sadeghi & Levine, 2016).
The use of population based training is a standard technique in multi-agent settings. Alphastar, the grandmaster-level Starcraft bot, uses a population of "exploiter" agents that fine-tune against the bot to prevent it from developing exploitable strategies (Vinyals et al., 2019). (Czarnecki et al., 2020) establishes a set of sufficient geometric conditions on games under which the use of multiple adversaries will ensure gradual improvement in the strength of the agent policy. They empirically demonstrate that learning in games can often fail to converge without populations. Finally, Active Domain Randomization (Mehta et al., 2019) is a very close approach to ours, as they use a population
of adversaries to select domain randomization parameters whereas we use a population of adversaries to directly perturb the agent actions. However, they explicitly induce diversity using a repulsive term and use a discriminator to generate the reward.
3 BACKGROUND
In this work we use the framework of a multi-agent, finite-horizon, discounted, Markov Decision Process (MDP) (Puterman, 1990) defined by a tuple 〈Aagent × Aadversary, S, T , r, γ〉. Here Aagent is the set of actions for the agent, Aadversary is the set of actions for the adversary, S is a set of states, T : Aagent × Aadversary × S → ∆(S) is a transition function, r : Aagent × Aadversary × S → R is a reward function and γ is a discount factor. S is shared between the adversaries as they share a state-space with the agent. The goal for a given MDP is to find a policy πθ parametrized by θ that maximizes the expected cumulative discounted reward Jθ = E [∑T t=0 γ tr(st, at)|πθ ] . The conditional in this expression is a short-hand to indicate that the actions in the MDP are sampled via at ∼ πθ(st, at−1). We denote the agent policy parametrized by weights θ as πθ and the policy of adversary i as π̄φi . Actions sampled from the adversary policy π̄φi will be written as ā i t. We use ξ to denote the parametrization of the system dynamics (e.g. different values of friction, mass, wind, etc.) and the system dynamics for a given state and action as st+1 ∼ fξ(st, at).
3.1 BASELINES
Here we outline prior work and the approaches that will be compared with RAP. Our baselines consist of a single adversary and domain randomization.
3.1.1 SINGLE MINIMAX ADVERSARY
Our adversary formulation uses the Noisy Action Robust MDP (Tessler et al., 2019) in which the adversary adds its actions onto the agent actions. The objective is
max θ E [ T∑ t=0 γtr(st, at + αāt)|πθ, π̄φ ]
min φ E [ T∑ t=0 γtr(st, at + αāt)|πθ, π̄φ ] (1)
where α is a hyperparameter controlling the adversary strength. This is a game in which the adversary and agent play simultaneously. We note an important restriction inherent to this adversarial model. Since the adversary is only able to attack the agent through the actions, there is a restricted class of dynamical systems that it can represent; this set of dynamical systems may not necessarily align with the set of dynamical systems that the agent may be tested in. This is a restriction caused by the choice of adversarial perturbation and could be alleviated by using different adversarial parametrizations e.g. perturbing the transition function directly.
3.1.2 DYNAMICS RANDOMIZATION
Domain randomization is the setting in which the user specifies a set of environments which the agent should be robust to. This allows the user to directly encode knowledge about the likely deviations between training and testing domains. For example, the user may believe that friction is hard to measure precisely and wants to ensure that their agent is robust to variations in friction; they then specify that the agent will be trained with a wide range of possible friction values. We use ξ to denote some vector that parametrizes the set of training environments (e.g. friction, masses, system dynamics, etc.). We denote the domain over which ξ is drawn from as Ξ and use P (Ξ) to denote
some probability distribution over ξ. The domain randomization objective is
max θ Eξ∼P(Ξ)
[ Est+1∼fξ(st,at) [ T∑ t=0 γtr(st, at)|πθ ]] st+1 ∼ fξ(st, at) at ∼ πθ(st)
(2)
Here the goal is to find an agent that performs well on average across the distribution of training environment. Most commonly, and in this work, the parameters ξ are sampled uniformly over Ξ.
4 RAP: ROBUSTNESS VIA ADVERSARY POPULATIONS
RAP extends the minimax objective with a population based approach. Instead of a single adversary, at each rollout we will sample uniformly from a population of adversaries. By using a population, the agent is forced to be robust to a wide variety of potential perturbations rather than a single perturbation. If the agent begins to overfit to any one adversary, this opens up a potential niche for another adversary to exploit. For problems with only one failure mode, we expect the adversaries to all come out identical to the minimax adversary, but as the number of failure modes increases the adversaries should begin to diversify to exploit the agent. To induce this diversity, we will rely on randomness in the gradient estimates and randomness in the initializations of the adversary networks rather than any explicit term that induces diversity.
Denoting π̄φi as the i-th adversary and i ∼ U(1, n) as the discrete uniform distribution defined on 1 through n, the objective becomes
max θ Ei∼U(1,n) [ T∑ t=0 γtr(st, at, αā i t)|πθ, π̄φi ]
min φi E [ T∑ t=0 γtr(st, at, αā i t)|πθ, π̄φi ] ∀i = 1, . . . , n
st+1 ∼ f(st, at + αāt)
(3)
For a single adversary, this is equivalent to the minimax adversary described in Sec. 3.1.1. This is a game in which the adversary and agent play simultaneously.
We will optimize this objective by converting the problem into the equivalent zero-sum game. At the start of each rollout, we will sample an adversary index from the uniform distribution and collect a trajectory using the agent and the selected adversary. For notational simplicity, we assume the trajectory is of length T and that adversary i will participate in Ji total trajectories while, since the agent participates in every rollout, the agent will receive J total trajectories. We denote the j-th collected trajectory for the agent as τj = (s0, a0, r0, s1) × · · · × (sM , aM , rM , sM+1) and the associated trajectory for adversary i as τ ij = (s0, a0,−r0, s1) × · · · × (sM , aM ,−rM , sM ). Note that the adversary reward is simply the negative of the agent reward. We will use Proximal Policy Optimization (Schulman et al., 2017) (PPO) to update our policies. We caution that we have overloaded notation slightly here and for adversary i, τ ij=1:Ji refers only to the trajectories in which the adversary was selected: adversaries will only be updated using trajectories where they were active.
At the end of a training iteration, we update all our policies using gradient descent. The algorithm is summarized below:
Algorithm 1: Robustness via Adversary Populations Initialize θ, φ1 · · ·φn using Xavier initialization (Glorot & Bengio, 2010); while not converged do
for rollout j=1...J do sample adversary i ∼ U(1, n); run policies πθ, π̄φi in environment until termination; collect trajectories τj , τ ij end update θ, φ1 · · ·φn using PPO (Schulman et al., 2017) and trajectories τj for θ and τ ij for each φi;
end
5 EXPERIMENTS
In this section we present experiments on continuous control tasks from the OpenAI Gym Suite (Brockman et al., 2016; Todorov et al., 2012). We compare with the existing literature and evaluate the efficacy of a population of learned adversaries across a wide range of state and action space sizes. We investigate the following hypotheses:
H1. Agents are more likely to overfit to a single adversary than a population of adversaries, leaving them less robust on in-distribution tasks.
H2. Agents trained against a population of adversaries will generalize better, leading to improved performance on out-of-distribution tasks.
In-distribution tasks refer to the agent playing against perturbations that are in the training distribution: adversaries that add their actions onto the agent. However, the particular form of the adversary and their restricted perturbation magnitude means that there are many dynamical systems that they cannot represent (for example, significant variations of joint mass and friction). These tasks are denoted as out-of-distribution tasks. All of the tasks in the test set described in Sec. 5.1 are likely out-of-distribution tasks.
5.1 EXPERIMENTAL SETUP AND HYPERPARAMETER SELECTION
While we provide exact details of the hyperparameters in the Appendix, adversarial settings require additional complexity in hyperparameter selection. In the standard RL procedure, optimal hyperparameters are selected on the basis of maximum expected cumulative reward. However, if an agent playing against an adversary achieves a large cumulative reward, it is possible that the agent was simply playing against a weak adversary. Conversely, a low score does not necessarily indicate a strong adversary nor robustness: it could simply mean that we trained a weak agent.
To address this, we adopt a version of the train-validate-test split from supervised learning. We use the mean policy performance on a suite of validation tasks to select the hyperparameters, then we train the policy across ten seeds and report the resultant mean and standard deviation over twenty trajectories. Finally, we evaluate the seeds on a holdout test set of eight additional model-mismatch tasks. These tasks vary significantly in difficulty; for visual clarity we report the average across tasks in this paper and report the full breakdown across tasks in the Appendix.
We experiment with the Hopper, Ant, and Half Cheetah continuous control environments used in the original RARL paper Pinto et al. (2017); these are shown in Fig. 1. To generate the validation model mismatch, we pre-define ranges of mass and friction coefficients as follows: for Hopper, mass ∈ [0.7, 1.3] and friction ∈ [0.7, 1.3]; Half Cheetah and Ant, mass ∈ [0.5, 1.5] and friction ∈ [0.1, 0.9]. We scale the friction of every Mujoco geom and the mass of the torso with the same (respective) coefficients. We compare the robustness of agents trained via RAP against: 1) agents trained against a single adversary in a zero-sum game, 2) oracle agents trained using domain randomization, and 3) an agent trained only using PPO and no perturbation mechanism. To train the domain randomization
oracle, at each rollout we uniformly sample a friction and mass coefficient from the validation set ranges. We then scale the friction of all geoms and the mass of the torso by their respective coefficients; this constitutes directly training on the validation set. To generate the test set of model mismatch, we take both the highest and lowest friction coefficients from the validation range and apply them to different combinations of individual geoms. For the exact selected combinations, please refer to the Appendix.
As further validation of the benefits of RAP, we include an additional set of experiments on a continuous control task, a gridworld maze search task, and a Bernoulli Bandit task in Appendix Sec. F. Finally, we note that both our agent and adversary networks are two-layer neural networks with 64 hidden units in each layer and a tanh nonlinearity.
6 RESULTS
H1. In-Distribution Tasks: Analysis of Overfitting A globally minimax optimal adversary should be unexploitable and perform equally well against any adversary of equal strength. We investigate the optimality of our policy by asking whether the minimax agent is robust to swaps of adversaries from different training runs, i.e. different seeds. Fig. 2 shows the result of these swaps for the one adversary and three adversary case. The diagonal corresponds to playing against the adversaries the agent was trained with while every other square corresponds to playing against adversaries from a different seed. To simplify presentation, in the three adversary case, each square is the average performance against all the adversaries from that seed. We observe that the agent trained against three adversaries (top row right) is robust under swaps while the single adversary case is not (top row left). The agent trained against a single adversary is highly exploitable, as can be seen by its extremely sub-par performance against an adversary from any other seed. Since the adversaries off-diagonal are feasible adversaries, this suggests that we have found a poor local optimum of the objective.
In contrast, the three adversary case is generally robust regardless of which adversary it plays against, suggesting that the use of additional adversaries has made the agent more robust. One possible hypothesis for why this could be occurring is that the adversaries in the "3 adversary" case are somehow weaker than the adversaries in the "1 adversary" case. The middle row of the figure shows that it is not the case that the improved performance of the agent playing against the three adversaries is due to some weakness of the adversaries. If anything, the adversaries from the three adversary case are stronger as the agent trained against 1 adversary does extremely poorly playing against the three adversaries (left) whereas the agent trained against three adversaries still performs well when playing against the adversaries from the single-adversary runs. Finally, the bottom row investigates how an agent trained with domain randomization fares against adversaries from either training regime. In neither case is the domain randomization agent robust on these tasks.
H2. Out-of-Distribution Tasks: Robustness and Generalization of Population Training
Here we present the results from the validation and holdout test sets described in Section 5.1. We compare the performance of training with adversary populations of size three and five against vanilla PPO, the domain randomization oracle, and the single minimax adversary. We refer to domain randomization as an oracle as it is trained directly on the test distribution.
Fig. 6 shows the average reward (the average of ten seeds across the validation or test sets respectively) for each environment. Table 1 gives the corresponding numerical values and the percent change of each policy from the baseline. Standard deviations are omitted on the test set due to wide variation in task difficulty; the individual tests that we aggregate here are reported in the Appendix with
appropriate error bars. In all environments we achieve a higher reward across both the validation and holdout test sets using RAP of size three and/or five when compared to the single minimax adversary case. These results from testing on new environments with altered dynamics support hypothesis H2: that training with a population of adversaries leads to more robust policies than training with a single adversary in out-of-distribution tasks. Furthermore, while the performance is only comparable with the domain randomization oracle, the adversarial approach does not require prior engineering of appropriate randomizations. Moreover, despite being trained directly on these out-of-distribution tasks, domain randomization can have serious failure modes due to its formulation. A detailed analysis of this can be found in Appendix E.
For a more detailed comparison of robustness across the validation set, Fig. 4 shows heatmaps of the performance across all the mass, friction coefficient combinations. Here we highlight the heatmaps for Hopper and Half Cheetah for vanilla PPO, domain randomization oracle, single adversary, and best adversary population size. Additional heatmaps for other adversary population sizes and the Ant environment can be found in the Appendix. Note that Fig. 4 is an example of a case where a single adversary has negligible effect on or slightly reduces the performance of the resultant policy on the
validation set. This supports our hypothesis that a single adversary can actually lower the robustness of an agent.
7 CONCLUSIONS AND FUTURE WORK
In this work we demonstrate that the use of a single adversary to approximate the solution to a minimax problem does not consistently lead to improved robustness. We propose a solution through the use of multiple adversaries (RAP), and demonstrate that this provides robustness across a variety of robotics benchmarks. We also compare RAP with domain randomization and demonstrate that while DR can lead to a more robust policy, it requires careful parametrization of the domain we sample from to ensure robustness. RAP does not require this tuning, allowing for use in domains where appropriate tuning requires extensive prior knowledge or expertise.
There are several open questions stemming from this work. While we empirically demonstrate the effects of RAP, we do not have a compelling theoretical understanding of why multiple adversaries are helping. Perhaps RAP helps approximate a mixed Nash equilibrium as discussed in Sec. 1 or
perhaps population based training increases the likelihood that one of the adversaries is strong? Would the benefits of RAP disappear if a single adversary had the ability to represent mixed Nash?
There are some extensions of this work that we would like to pursue. We have looked at the robustness of our approach in simulated settings; future work will examine whether this robustness transfers to real-world settings. Additionally, our agents are currently memory-less and therefore cannot perform adversary identification; perhaps memory leads to a system-identification procedure that improves transfer performance. Our adversaries can also be viewed as forming a task distribution, allowing them to be used in continual learning approaches like MAML (Nagabandi et al., 2018) where domain randomization is frequently used to construct task distributions.
A FULL DESCRIPTION OF THE CONTINUOUS CONTROL MDPS
We use the Mujoco ant, cheetah, and hopper environments as a test of the efficacy of our strategy versus the 0 adversary, 1 adversary, and domain randomization baselines. We use the Noisy Action Robust MDP formulation Tessler et al. (2019) for our adversary parametrization. If the normal system dynamics are
$s_{k+1} = s_k + f(s_k, a_k)\Delta t$
the system dynamics under the adversary are
$s_{k+1} = s_k + f(s_k, a_k + a^{adv}_k)\Delta t$
where $a^{adv}_k$ is the adversary action at time $k$.
The notion here is that the adversary action is passed through the dynamics function and represents some additional set of dynamics. It is standard to clip actions within some boundary; here, we clip the agent and adversary actions separately because otherwise an agent would be able to limit the effect of the adversary by always taking actions at the bounds of its clipping range. The agent is clipped between $[-1, 1]$ in the Hopper environment and the adversary is clipped between $[-0.25, 0.25]$. The MDP through which we train the agent policy is characterized by the following states, actions, and rewards:
• $s^{agent}_t = [o_t, a_t]$ where $o_t$ is an observation returned by the environment, and $a_t$ is the action taken by the agent.
• We use the standard rewards provided by the OpenAI Gym Mujoco environments at https://github.com/openai/gym/tree/master/gym/envs/mujoco. For the exact functions, please refer to the code at ANONYMIZED.
• $a^{agent}_t \in [a_{min}, a_{max}]^n$.
The MDP for adversary i is the following:
• $s_t = s^{agent}_t$. The adversary sees the same states as the agent.
• The adversary reward is the negative of the agent reward.
• $a^{adv}_t \in [a^{adv}_{min}, a^{adv}_{max}]^n$.
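The action composition and separate clipping described above can be sketched as follows; this is a minimal illustration (the exact interaction between the adversary strength and the clipping ranges is our assumption), with the bounds taken from the Hopper settings in the text.

```python
import numpy as np

def combined_action(agent_action, adv_action, alpha=1.0,
                    agent_bounds=(-1.0, 1.0), adv_bounds=(-0.25, 0.25)):
    # Clip the agent and adversary actions separately, so the agent cannot
    # mask the perturbation by saturating its own range, then sum them
    # before passing the result to the environment dynamics.
    a = np.clip(agent_action, *agent_bounds)
    a_adv = np.clip(adv_action, *adv_bounds)
    return a + alpha * a_adv
```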
For our domain randomization Hopper baseline, we use the following randomization: at each rollout, we scale the friction of all joints by a single value uniformly sampled from [0.7, 1.3]. We also randomly scale the mass of the ’torso’ link by a single value sampled from [0.7, 1.3]. For Half-Cheetah and Ant the range for friction is [0.1, 0.9] and for mass the range is [0.5, 1.5].
B INCREASING ADVERSARY POOL SIZE
We investigate whether RAP is robust to adversary number as this would be a useful property to minimize hyperparameter search. Here we hypothesize that while having more adversaries can represent a wider range of dynamics to learn to be robust to, we expect there to be diminishing returns due to the decreased batch size that each adversary receives (total number of environment steps is held constant across all training variations). We expect decreasing batch size to lead to worse agent policies since the batch will contain under-trained adversary policies. We cap the number of adversaries at eleven as our machines ran out of memory at this value. We run ten seeds for every adversary value and Fig. 5 shows the results for Hopper. Agent robustness on the test set increases monotonically up to three adversaries and roughly begins to decrease after that point. This suggests that a trade-off between adversary number and performance exists although we do not definitively show that diminishing batch sizes is the source of this trade-off. However, we observe in Fig. 6 that both three and five adversaries perform well across all studied Mujoco domains.
C HOLDOUT TESTS
In this section we describe in detail all of the holdout tests used.
C.1 HOPPER
The Mujoco geom properties that we modified are attached to a particular body and determine its appearance and collision properties. For the Mujoco holdout transfer tests we pick a subset of the hopper ‘geom’ elements and scale the contact friction values by maximum friction coefficient, 1.3. Likewise, for the rest of the ‘geom’ elements, we scale the contact friction by the minimum value of 0.7. The body geoms and their names are visible in Fig. 7.
The exact combinations and the corresponding test name are indicated in Table 2 for Hopper.
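A sketch of how such a holdout environment can be constructed is given below, again assuming the mujoco_py-backed bindings; `high_geoms` is a hypothetical argument listing the geom names that receive the maximum coefficient.

```python
def apply_holdout_friction(env, high_geoms, high=1.3, low=0.7):
    # Scale the contact friction of the geoms named in `high_geoms` by the
    # maximum coefficient and all remaining geoms by the minimum coefficient.
    model = env.unwrapped.model
    for geom_id, name in enumerate(model.geom_names):
        coef = high if name in high_geoms else low
        model.geom_friction[geom_id] *= coef
    return env
```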
C.2 CHEETAH
The Mujoco geom properties that we modified are attached to a particular body and determine its appearance and collision properties. For the Mujoco holdout transfer tests we pick a subset of the
cheetah ‘geom’ elements and scale the contact friction values by maximum friction coefficient, 0.9. Likewise, for the rest of the ‘geom’ elements, we scale the contact friction by the minimum value of 0.1. The body geoms and their names are visible in Fig. 8.
The exact combinations and the corresponding test names are indicated in Table 3 for Half Cheetah.
C.3 ANT
We will use torso to indicate the head piece, leg to refer to one of the four legs that contact the ground, and ’aux’ to indicate the geom that connects the leg to the torso. Since the ant is symmetric we adopt a convention that two of the legs are front-left and front-right and two legs are back-left and back-right. Fig. 9 depicts the convention. For the Mujoco holdout transfer tests we pick a subset of the ant ‘geom’ elements and scale the contact friction values by maximum friction coefficient, 0.9. Likewise, for the rest of the ‘geom’ elements, we scale the contact friction by the minimum value of 0.1.
The exact combinations and the corresponding test names are indicated in Table 4 for Ant.
D RESULTS
Here we recompute the values of all the results and display them with appropriate standard deviations in tabular form.
There was not space for the ant validation set results so they are reproduced here.
E CHALLENGES OF DOMAIN RANDOMIZATION
In our experiments, we find that naive parametrization of domain randomization can result in a brittle policy, even when evaluated on the same distribution it was trained on.
Effect of Domain Randomization Parametrization
From Fig. 6, we see that in the Ant and Hopper domains, the DR oracle achieves the highest transfer reward in the validation set as expected since the DR oracle is trained directly on the validation set. Interestingly, we found that the domain randomization policy performed much worse on the Half Cheetah environment, despite having access to the mass and friction coefficients during training. Looking at the performance for each mass and friction combination in Fig. 11, we found that the DR agent was able to perform much better at the low friction coefficients and learned to prioritize those values at the cost of significantly worse performance on average. This highlights a potential issue with domain randomization: while training across a wide variety of dynamics parameters can increase robustness, naive parametrizations can cause the policy to exploit subsets of the randomized domain and lead to a brittle policy. This is a problem inherent to the expectation across domains that is used in domain randomization; if some subset of randomizations have sufficiently high reward the agent will prioritize performance on those at the expense of robustness.
We hypothesize that this is due to the DR objective in Eq. 2 optimizing in expectation over the sampling range. To test this, we created a separate range of ‘good’ friction parameters [0.5, 1.5] and compared the robustness of a DR policy trained with the ‘good’ range against a DR policy trained with the ‘bad’ range [0.1, 0.9] in Fig. 11. Here we see that a ‘good’ parametrization leads to the expected result where domain randomization is the most robust. We observe that domain randomization underperforms adversarial training on the validation set despite the validation set literally constituting the training set for domain randomization. This suggests that underlying optimization difficulties caused by significant variations in reward scaling are partially to blame for the poor performance of domain randomization. Notably, the adversary-based methods are not susceptible to the same parametrization issues.
Alternative DR policy architecture
As discussed above and also identified in Rajeswaran et al. (2016), the expectation across randomizations that is used in domain randomization causes it to prioritize a policy that performs well in a high-reward subset of the randomization domains. This is harmless when domain randomization is used for randomizations of state, such as color, where all the randomization environments have the same expected reward, but has more pernicious effects in dynamics randomizations. Consider a set of $N$ randomization environments, $N-1$ of which have reward $R_{low}$ and one of which has reward $R_{high}$, where $R_{high} \gg R_{low}$. If the agent cannot identify which of the randomization environments it is in, the intuitively optimal solution is to pick the policy that optimizes the high reward environment. One possible way out of the quandary is to use an agent that has some memory, such as an LSTM-based policy, thus giving the possibility of identifying which environment the agent is in and deploying the appropriate response. However, if $R_{high}$ is sufficiently large and there is some reduction in reward associated with performing the system-identification necessary to identify the randomization, then the agent will not perform the system identification and will prioritize achieving $R_{high}$. As an illustration of this challenge, Fig. 12 compares the results of domain randomization on the half-cheetah environment with and without memory. In the memory case, we use a 64 unit LSTM. As can be seen, there is an improvement in the ability of the domain randomized policy to perform well on the full range of low-friction / high mass values, but the improved performance does not extend to higher friction values. In fact, the performance contrast is enhanced even further as the policy does a good deal worse on the high friction values than the case without memory.
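The following snippet makes the expectation argument concrete with illustrative numbers of our own choosing; they do not correspond to any measured rewards.

```python
# Hypothetical values purely for illustration of the expectation argument.
N, R_low, R_high = 10, 1.0, 100.0

# A memory-less policy tuned to the single high-reward environment: assume it
# earns R_high there and (pessimistically) nothing in the other N - 1 domains.
exploit_expected = R_high / N      # 10.0 under the uniform expectation

# A conservative policy that earns R_low in every environment.
robust_expected = R_low            # 1.0

# Domain randomization's expected-reward objective prefers the exploiting
# policy by a factor of ten even though it fails in N - 1 of the environments.
```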
F ADDITIONAL EXPERIMENTS
Here we outline a few more experiments we ran that demonstrate the value of additional adversaries. We run the following tasks:
F.1 DEEPMIND CONTROL CATCH
This task uses the same Markov Decision Process described in Sec. A. The challenge (Tassa et al., 2020), pictured in Fig. 13, is to get the ball to fall inside the cup. As in the other continuous control
tasks, we apply the adversary to the actions of the agent (which controls the cup). We then test on variations of the mass of both the ball and the cup. The heatmaps for this task are presented in Fig. 14 where the 3 adversary case provides a slight improvement in the robustness region relative to the 1 adversary case.
F.2 MULTI-ARMED BERNOULLI BANDITS
As an illustrative example, we examine a multi-armed stochastic bandit, a problem widely studied in the reinforcement learning literature. Generally, successful strategies for multi-arm bandit problems involve balancing exploration across arms with exploitation of the ’best’ arm. A "robust" strategy should have vanishing regret as the time horizon goes to infinity. We construct a 10-armed bandit where each arm i is parametrized by a value p where p is the probability of that arm returning
1. The goal of the agent is to minimize the total cumulative regret $R_n$ over a horizon of $n$ steps:
$$R_n = n \max_i \mu_i - \mathbb{E}\left[\sum_{t=0}^{n} \mu_{a_t}\right]$$
where $a_t$ corresponds to picking a particular arm. At each step, the agent is given an observation buffer of stacked frames consisting of all previous (action, reward) pairs padded with zeros to keep the length fixed. The adversary has a horizon of 1; at time-step zero it receives an observation of 0 and outputs the probability for each arm. At the termination of the horizon the adversary receives the negative of the cumulative agent reward. For our domain randomization baseline we use uniform sampling of the p value for each arm. We chose a horizon length of T = 100 steps. The MDP of the agent is characterized as follows:
• $s_t = [\,0_{n(T-t)\times 1},\ r_t, a_t, r_{t-1}, a_{t-1}, \ldots, r_0, a_0\,]$
• $r_t = X(a_t) - \max_i \mu_i$
• $a^{agent}_t \in \{0, \ldots, 9\}$
At each step, the agent is given an observation buffer of stacked frames consisting of all previous (action, reward) pairs. The buffer is padded with zeros to match the horizon length. For each training step, the agent receives a reward of the negative expected regret. We set up the adversary problem as an MDP with a horizon of 1.
• $s_t = [0.0]$
• $r = -\sum_{t=1}^{T} r_t$
• $a^{adv} \in [0, 1]^{10}$
During adversarial training, we sample a random adversary at the beginning of each rollout, and allow it to pick 10 p values that are then shuffled randomly and assigned to the arms (this is to prevent the agent from deterministically knowing which arm has which p value). The adversary is always given an observation of a vector of zeros and is rewarded once at the end of the rollout. We also construct a hold-out test of two bandit examples which we colloquially refer to as "evenly spread" and "one good arm." In "evenly spread", the arms, going from 1 to 10, have evenly spaced probabilities in steps of 0.1: 0, 0.1, 0.2, 0.3, . . . , 0.8, 0.9. In "one good arm", 9 arms have probability 0.1 and one arm has probability 0.9. As our policy for the agent, we use a Gated Recurrent Unit network with hidden size 256.
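A minimal sketch of the two holdout bandit instances and a per-step regret computation is given below; the class and variable names are our own illustration, and `policy` is assumed to be any callable mapping the (action, reward) history to an arm index.

```python
import numpy as np

class BernoulliBandit:
    """Sketch of a 10-armed Bernoulli bandit with per-arm success probabilities."""
    def __init__(self, probs, horizon=100):
        self.probs = np.asarray(probs, dtype=float)
        self.horizon = horizon

    def rollout(self, policy):
        # Accumulate the expected per-step regret of `policy` over the horizon.
        best, regret, history = self.probs.max(), 0.0, []
        for _ in range(self.horizon):
            arm = policy(history)                               # arm index in 0..9
            reward = float(np.random.rand() < self.probs[arm])  # Bernoulli payout
            regret += best - self.probs[arm]
            history.append((arm, reward))
        return regret

# The two holdout instances described above.
evenly_spread = BernoulliBandit(np.arange(10) / 10.0)
one_good_arm = BernoulliBandit([0.1] * 9 + [0.9])
```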
An interesting feature of the bandit task is that it makes clear that the single adversary approach corresponds to training on a single, adversarially constructed bandit instance. Surprisingly, as indicated in Fig. 15, this does not perform terribly on our two holdout tasks. However, there is a clear improvement on both tasks in the four adversary case. All adversarial approaches outperform an Upper Confidence Bound-based expert (shown in red). Interestingly, domain randomization, which had superficially good reward at training time, completely fails on the "one good arm" holdout task. This suggests another possible failure mode of domain randomization where in high dimensions uniform sampling may just fail to yield interesting training tasks. Finally, we note that since the upper confidence approach only tries to minimize regret asymptotically, our outperforming it may simply be due to our relatively short horizon; we simply provide it as a baseline.
G COST AND HYPERPARAMETERS
Here we reproduce the hyperparameters we used in each experiment and compute the expected runtime and cost of each experiment. Numbers indicated in {} were each used for one run. Otherwise the parameter was kept fixed at the indicated value.
G.1 HYPERPARAMETERS
For Mujoco the hyperparameters are:
• Learning rate:
– {.0003, .0005} for Half Cheetah
– {.0005, .00005} for Hopper and Ant
• Generalized Advantage Estimation λ:
– {0.9, 0.95, 1.0} for Half Cheetah
– {0.5, 0.9, 1.0} for Hopper and Ant
• Discount factor γ = 0.995
• Training batch size: 100000
• SGD minibatch size: 640
• Number of SGD steps per iteration: 10
• Number of iterations: 700
• We set the seed to 0 for all hyperparameter runs.
• The maximum horizon is 1000 steps.
For the validation across seeds we used 10 seeds ranging from 0 to 9. All other hyperparameters are the default values in RLlib Liang et al. (2017) 0.8.0
G.2 COST
For all of our experiments we used AWS EC2 c4.8xlarge instances which come with 36 virtual CPUs. For the Mujoco experiments, we use 2 nodes and 11 CPUs per hyper-parameter, leading to one full hyper-parameter sweep fitting onto the 72 CPUs. We run the following set of experiments and ablations, each of which takes 8 hours.
• 0 adversaries
• 1 adversary
• 3 adversaries
• 5 adversaries
• Domain randomization
for a total of 5 experiments for each of Hopper, Cheetah, Ant. For the best hyperparameters and each experiment listed above we run a seed search with 6 CPUs used per seed, a process which takes about 12 hours. This leads to a total of $2 \times 8 \times 5 \times 3 + 2 \times 12 \times 3 \times 5 = 600$ node hours and $36 \times 600 \approx 22000$ CPU hours. At a cost of ≈ 0.3 dollars per node per hour for EC2 spot instances, this gives ≈ 180 dollars to fully reproduce our results for this experiment. If the chosen hyperparameters are used and only the seeds are swept, this is ≈ 100 dollars.
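For clarity, the cost arithmetic above can be restated as a short calculation, with all numbers taken directly from the text.

```python
node_hours = 2 * 8 * 5 * 3 + 2 * 12 * 3 * 5   # 240 + 360 = 600 node hours
cpu_hours = 36 * node_hours                    # 21,600 CPU hours (~22,000)
spot_cost = 0.3 * node_hours                   # ~180 dollars at spot pricing
```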
G.3 RUN TIME AND SAMPLE COMPLEXITY
Here we briefly analyze the expected run-time of our algorithms. While adding a single adversary incurs an additional cost, namely the cost of computing gradients at train time and actions at run-time for one extra agent, adding further adversaries incurs no additional cost. Since we divide the total set of samples per iteration amongst the adversaries, we compute approximately the same number of gradients and actions in the many-adversary case as we do in the single adversary case. In Fig. 16, the plot of reward vs. wall-clock time supports this argument: the 0 adversary case runs the fastest but all the different adversary numbers complete 700 iterations of training in approximately the same amount of time. Additionally, Fig. 17 demonstrates that there is some variation in sample complexity but the trend is not consistent across adversary number.
G.4 CODE
Our code is available at ANONYMIZED. For our reinforcement learning code-base we used RLlib Liang et al. (2017) version 0.8.0 and did not make any custom modifications to the library.
H PURE NASH EQUILIBRIA DO NOT NECESSARILY EXIST
While there are canonical examples of games in which pure Nash equilibria do not exist, such as rock-paper-scissors, we are not aware of one for sequential games with continuous actions. Tessler et al. (2019) contains an example of a simple, horizon-1 MDP where duality is not satisfied. The pure minimax solution does not equal the value of the pure maximin solution, and a greater value can be achieved by randomizing one of the policies, showing that there is no pure equilibrium. | 1. What is the main contribution of the paper regarding training agents to be robust against adversarial policies?
2. What are the strengths of the proposed method, particularly in its formulation and experimental demonstration?
3. What are the weaknesses of the approach, especially when compared to other methods like domain randomization?
4. How many adversaries are needed to provide meaningful levels of robustness, and how should one decide on the number of adversaries to use?
5. Can the authors clarify the observation process for agents and adversaries in their trajectories?
6. Would considering more environments or tasks strengthen the claims of the paper?
7. Why did the authors choose to bold the most robust approach overall instead of the most robust approach they proposed?
8. Why are the rollouts of length M rather than T, as indicated in the formulation? | Review | Review
Summary
The authors present a scheme that can be used to train agents to be robust against a population of adversarial policies, in which adversaries can perturb actions via an additive perturbation. Motivated by the observation that agents trained against a single policy may overfit to that policy and hence will lack robustness to new/unseen policies, the authors seek to show that their method generalizes well to unseen policies at test time. Their experiments consider several simulated environments, in which they show generally good performance against several baselines.
Strengths
I find the argument that agents will overfit to a single policy convincing. While the motivating example WRT different forces acting on an agent may not be a scenario that robustness against adaptive perturbations can handle, the general claim seems to hold water. That is, it seems plausible that an agent might receive high reward in a zero sum min-max game subject to a single adversary simply because the adversary is not strong enough to limit the cumulative reward that agent receives.
The formulation for RAP is presented very clearly. It is quite helpful to first consider the single minmax adversary and domain randomization formulations, both of which seemed to have played roles in the development of RAP. Indeed, this seems a very natural way of formulating the problem. More generally, the paper is quite well written.
The experiments, and in particular Fig 2, clearly demonstrate the utility of this approach. One can see a notable difference when an agent is trained with respect to three adversarial policies vs. when it is only trained with a single adversarial policy.
Weaknesses
The utility of this approach is less clear when one considers Fig. 3. It seems that domain randomization often outperforms RAP. In my opinion, this weakens part of the claim made by the authors. However, it does seem true that given that DR is designed to perform well on new domains in expectation, it may be preferable to use RAP when one suffers worst case dynamics changes (e.g. Fig. 5).
Further study should be done as to how many adversaries one needs to provide meaningful levels of robustness. It seems that in different experiments, 1, 3, and 5 adversaries were considered. How should one decide how many adversaries to use?
I felt the paper was a bit unclear about what states are available to train adversarial and regular policies. That is, it is unclear whether in the trajectories $\tau_j$ and $\tau^i_j$, the agents/adversaries observe their own actions, the actions of their counterpart (e.g. the agent observing the adversary's action), or both agent and adversary observing the perturbed action $a + \alpha\bar{a}$. The latter case would lead to a lack of observability for both agent and adversary. Perhaps the authors can clarify this in the rebuttal.
One weakness is that only one scenario (e.g. walking within these three environments) was considered. It seems that the claims of the paper could be strengthened if more environments/tasks were considered.
Further questions/clarifications
The reward function seems to be denoted as $R$, $R$, and $r$ in various places.
The bolding in Table 1 is a bit confusing. It would be fairer to bold the most robust approach overall, rather than the most robust approach of the methods you propose.
In the notation of Section 4, why are the rollouts of length $M$ rather than $T$, as indicated in the formulation?
Final Thoughts
Overall, I thought this was a solid and interesting paper. The motivation is compelling, the formulation is relatively clean, and the experiments generally back up the claims that are made by the authors. There are a few weaknesses, as I enumerated above, but even still I think that this is a valuable contribution. |
ICLR | Title
Robust Reinforcement Learning using Adversarial Populations
Abstract
Reinforcement Learning (RL) is an effective tool for controller design but can struggle with issues of robustness, failing catastrophically when the underlying system dynamics are perturbed. The Robust RL formulation tackles this by adding worst-case adversarial noise to the dynamics and constructing the noise distribution as the solution to a zero-sum minimax game. However, existing work on learning solutions to the Robust RL formulation has primarily focused on training a single RL agent against a single adversary. In this work, we demonstrate that using a single adversary does not consistently yield robustness to dynamics variations under standard parametrizations of the adversary; the resulting policy is highly exploitable by new adversaries. We propose a population-based augmentation to the Robust RL formulation in which we randomly initialize a population of adversaries and sample from the population uniformly during training. We empirically validate across robotics benchmarks that the use of an adversarial population results in a less exploitable, more robust policy. Finally, we demonstrate that this approach provides comparable robustness and generalization as domain randomization on these benchmarks while avoiding a ubiquitous domain randomization failure mode.
1 INTRODUCTION
Developing controllers that work effectively across a wide range of potential deployment environments is one of the core challenges in engineering. The complexity of the physical world means that the models used to design controllers are often inaccurate. Optimization based control design approaches, such as reinforcement learning (RL), have no notion of model inaccuracy and can lead to controllers that fail catastrophically under mismatch. In this work, we aim to demonstrate an effective method for training reinforcement learning policies that are robust to model inaccuracy by designing controllers that are effective in the presence of worst-case adversarial noise in the dynamics.
An easily automated approach to inducing robustness is to formulate the problem as a zero-sum game and learn an adversary that perturbs the transition dynamics (Tessler et al., 2019; Kamalaruban et al., 2020; Pinto et al., 2017). If a global Nash equilibrium of this problem is found, then that equilibrium provides a lower bound on the performance of the policy under some bounded set of perturbations. Besides the benefit of removing user design once the perturbation mechanism is specified, this approach is maximally conservative, which is useful for safety critical applications.
However, the literature on learning an adversary predominantly uses a single, stochastic adversary. This raises a puzzling question: the zero-sum game does not necessarily have any pure Nash equilibria (see Appendix C in Tessler et al. (2019)) but the existing robust RL literature mostly appears to attempt to solve for pure Nash equilibria. That is, the most general form of the minimax problem searches over distributions of adversary and agent policies, however, this problem is approximated in the literature by a search for a single agent-adversary pair. We contend that this reduction to a single adversary approach can sometimes fail to result in improved robustness under standard parametrizations of the adversary policy.
The following example provides some intuition for why using a single adversary can decrease robustness. Consider a robot trying to learn to walk east-wards while an adversary outputs a force representing wind coming from the north or the south. For a fixed, deterministic adversary the agent knows that the wind will come from either south or north and can simply apply a counteracting force at each state. Once the adversary is removed, the robot will still apply the compensatory forces and
possibly become unstable. Stochastic Gaussian policies (ubiquitous in continuous control) offer little improvement: they cannot represent multi-modal perturbations. Under these standard policy parametrizations, we cannot use an adversary to endow the agent with a prior that a strong wind could persistently blow either north or south. This leaves the agent exploitable to this class of perturbations.
The use of a single adversary in the robustness literature is in contrast to the multi-player game literature. In multi-player games, large sets of adversaries are used to ensure that an agent cannot easily be exploited (Vinyals et al., 2019; Czarnecki et al., 2020; Brown & Sandholm, 2019). Drawing inspiration from this literature, we introduce RAP (Robustness via Adversary Populations): a randomly initialized population of adversaries that we sample from at each rollout and train alongside the agent. Returning to our example of a robot perturbed by wind, if the robot learns to cancel the north wind effectively, then that opens a niche for an adversary to exploit by applying forces in another direction. With a population, we can endow the robot with the prior that a strong wind could come from either direction and that it must walk carefully to avoid being toppled over.
Our contributions are as follows:
• Using a set of continuous robotics control tasks, we provide evidence that a single adversary does not have a consistent positive impact on the robustness of an RL policy while the use of an adversary population provides improved robustness across all considered examples.
• We investigate the source of the robustness and show that the single adversary policy is exploitable by new adversaries whereas policies trained with RAP are robust to new adversaries.
• We demonstrate that adversary populations provide comparable robustness to domain randomization while avoiding potential failure modes of domain randomization.
2 RELATED WORK
This work builds upon robust control (Zhou & Doyle, 1998), a branch of control theory focused on finding optimal controllers under worst-case perturbations of the system dynamics. The Robust Markov Decision Process (R-MDP) formulation extends this worst-case model uncertainty to uncertainty sets on the transition dynamics of an MDP and demonstrates that computationally tractable solutions exist for small, tabular MDPs (Nilim & El Ghaoui, 2005; Lim et al., 2013). For larger or continuous MDPs, one successful approach has been to use function approximation to compute approximate solutions to the R-MDP problem (Tamar et al., 2014).
One prominent variant of the R-MDP literature is to interpret the perturbations as an adversary and attempt to learn the distribution of the perturbation under a minimax objective. Two variants of this idea that tie in closely to our work are Robust Adversarial Reinforcement Learning (RARL)(Pinto et al., 2017) and Noisy Robust Markov Decision Processes (NR-MDP) (Tessler et al., 2019) which differ in how they parametrize the adversaries: RARL picks out specific robot joints that the adversary acts on while NR-MDP adds the adversary action to the agent action. Both of these works attempt to find an equilibrium of the minimax objective using a single adversary; in contrast our work uses a large set of adversaries and shows improved robustness relative to a single adversary.
A strong alternative to the minimax objective, domain randomization, asks a designer to explicitly define a distribution over environments that the agent should be robust to. For example, (Peng et al., 2018) varies simulator parameters to train a robot to robustly push a puck to a target location in the real world; (Antonova et al., 2017) adds noise to friction and actions to transfer an object pivoting policy directly from simulation to a Baxter robot. Additionally, domain randomization has been successfully used to build accurate object detectors solely from simulated data (Tobin et al., 2017) and to zero-shot transfer a quadcopter flight policy from simulation (Sadeghi & Levine, 2016).
The use of population based training is a standard technique in multi-agent settings. Alphastar, the grandmaster-level Starcraft bot, uses a population of "exploiter" agents that fine-tune against the bot to prevent it from developing exploitable strategies (Vinyals et al., 2019). (Czarnecki et al., 2020) establishes a set of sufficient geometric conditions on games under which the use of multiple adversaries will ensure gradual improvement in the strength of the agent policy. They empirically demonstrate that learning in games can often fail to converge without populations. Finally, Active Domain Randomization (Mehta et al., 2019) is a very close approach to ours, as they use a population
of adversaries to select domain randomization parameters whereas we use a population of adversaries to directly perturb the agent actions. However, they explicitly induce diversity using a repulsive term and use a discriminator to generate the reward.
3 BACKGROUND
In this work we use the framework of a multi-agent, finite-horizon, discounted, Markov Decision Process (MDP) (Puterman, 1990) defined by a tuple $\langle A^{agent} \times A^{adversary}, S, \mathcal{T}, r, \gamma\rangle$. Here $A^{agent}$ is the set of actions for the agent, $A^{adversary}$ is the set of actions for the adversary, $S$ is a set of states, $\mathcal{T}: A^{agent} \times A^{adversary} \times S \to \Delta(S)$ is a transition function, $r: A^{agent} \times A^{adversary} \times S \to \mathbb{R}$ is a reward function and $\gamma$ is a discount factor. $S$ is shared between the adversaries as they share a state-space with the agent. The goal for a given MDP is to find a policy $\pi_\theta$ parametrized by $\theta$ that maximizes the expected cumulative discounted reward $J_\theta = \mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t) \mid \pi_\theta\right]$. The conditional in this expression is a short-hand to indicate that the actions in the MDP are sampled via $a_t \sim \pi_\theta(s_t, a_{t-1})$. We denote the agent policy parametrized by weights $\theta$ as $\pi_\theta$ and the policy of adversary $i$ as $\bar{\pi}_{\phi_i}$. Actions sampled from the adversary policy $\bar{\pi}_{\phi_i}$ will be written as $\bar{a}^i_t$. We use $\xi$ to denote the parametrization of the system dynamics (e.g. different values of friction, mass, wind, etc.) and the system dynamics for a given state and action as $s_{t+1} \sim f_\xi(s_t, a_t)$.
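As a small, self-contained illustration of the objective above, the helper below computes the cumulative discounted reward of one trajectory; the function name is ours, and the default γ = 0.995 matches the value reported in Appendix G.

```python
def discounted_return(rewards, gamma=0.995):
    # Computes sum_t gamma^t * r_t, the quantity that J_theta takes in
    # expectation over trajectories generated by the policy pi_theta.
    ret, discount = 0.0, 1.0
    for r in rewards:
        ret += discount * r
        discount *= gamma
    return ret
```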
3.1 BASELINES
Here we outline prior work and the approaches that will be compared with RAP. Our baselines consist of a single adversary and domain randomization.
3.1.1 SINGLE MINIMAX ADVERSARY
Our adversary formulation uses the Noisy Action Robust MDP (Tessler et al., 2019) in which the adversary adds its actions onto the agent actions. The objective is
$$\max_\theta \ \mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t + \alpha\bar{a}_t) \,\middle|\, \pi_\theta, \bar{\pi}_\phi\right], \qquad \min_\phi \ \mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t + \alpha\bar{a}_t) \,\middle|\, \pi_\theta, \bar{\pi}_\phi\right] \tag{1}$$
where α is a hyperparameter controlling the adversary strength. This is a game in which the adversary and agent play simultaneously. We note an important restriction inherent to this adversarial model. Since the adversary is only able to attack the agent through the actions, there is a restricted class of dynamical systems that it can represent; this set of dynamical systems may not necessarily align with the set of dynamical systems that the agent may be tested in. This is a restriction caused by the choice of adversarial perturbation and could be alleviated by using different adversarial parametrizations e.g. perturbing the transition function directly.
3.1.2 DYNAMICS RANDOMIZATION
Domain randomization is the setting in which the user specifies a set of environments which the agent should be robust to. This allows the user to directly encode knowledge about the likely deviations between training and testing domains. For example, the user may believe that friction is hard to measure precisely and wants to ensure that their agent is robust to variations in friction; they then specify that the agent will be trained with a wide range of possible friction values. We use ξ to denote some vector that parametrizes the set of training environments (e.g. friction, masses, system dynamics, etc.). We denote the domain over which ξ is drawn from as Ξ and use P (Ξ) to denote
some probability distribution over ξ. The domain randomization objective is
$$\max_\theta \ \mathbb{E}_{\xi \sim P(\Xi)}\left[ \mathbb{E}_{s_{t+1} \sim f_\xi(s_t, a_t)}\left[ \sum_{t=0}^{T} \gamma^t r(s_t, a_t) \,\middle|\, \pi_\theta \right] \right], \qquad s_{t+1} \sim f_\xi(s_t, a_t), \quad a_t \sim \pi_\theta(s_t) \tag{2}$$
Here the goal is to find an agent that performs well on average across the distribution of training environments. Most commonly, and in this work, the parameters ξ are sampled uniformly over Ξ.
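A minimal sketch of one domain-randomization rollout under this objective is shown below; `make_env` and `policy` are hypothetical callables (the perturbed environment constructor and the agent policy), and the old Gym step API is assumed.

```python
import numpy as np

def dr_rollout(make_env, policy, xi_low, xi_high):
    # Sample the dynamics parameters xi uniformly, build the perturbed
    # environment, and accumulate the episode return under the policy.
    xi = np.random.uniform(xi_low, xi_high)
    env = make_env(xi)
    obs, done, ret = env.reset(), False, 0.0
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        ret += reward
    return ret
```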
4 RAP: ROBUSTNESS VIA ADVERSARY POPULATIONS
RAP extends the minimax objective with a population based approach. Instead of a single adversary, at each rollout we will sample uniformly from a population of adversaries. By using a population, the agent is forced to be robust to a wide variety of potential perturbations rather than a single perturbation. If the agent begins to overfit to any one adversary, this opens up a potential niche for another adversary to exploit. For problems with only one failure mode, we expect the adversaries to all come out identical to the minimax adversary, but as the number of failure modes increases the adversaries should begin to diversify to exploit the agent. To induce this diversity, we will rely on randomness in the gradient estimates and randomness in the initializations of the adversary networks rather than any explicit term that induces diversity.
Denoting π̄φi as the i-th adversary and i ∼ U(1, n) as the discrete uniform distribution defined on 1 through n, the objective becomes
$$\begin{aligned} &\max_\theta \ \mathbb{E}_{i\sim U(1,n)}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t, \alpha\bar{a}^i_t) \,\middle|\, \pi_\theta, \bar{\pi}_{\phi_i}\right] \\ &\min_{\phi_i} \ \mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t, \alpha\bar{a}^i_t) \,\middle|\, \pi_\theta, \bar{\pi}_{\phi_i}\right] \quad \forall i = 1,\ldots,n \\ &s_{t+1} \sim f(s_t, a_t + \alpha\bar{a}_t) \end{aligned} \tag{3}$$
For a single adversary, this is equivalent to the minimax adversary described in Sec. 3.1.1. This is a game in which the adversary and agent play simultaneously.
We will optimize this objective by converting the problem into the equivalent zero-sum game. At the start of each rollout, we will sample an adversary index from the uniform distribution and collect a trajectory using the agent and the selected adversary. For notational simplicity, we assume the trajectory is of length $T$ and that adversary $i$ will participate in $J_i$ total trajectories while, since the agent participates in every rollout, the agent will receive $J$ total trajectories. We denote the $j$-th collected trajectory for the agent as $\tau_j = (s_0, a_0, r_0, s_1) \times \cdots \times (s_M, a_M, r_M, s_{M+1})$ and the associated trajectory for adversary $i$ as $\tau^i_j = (s_0, a_0, -r_0, s_1) \times \cdots \times (s_M, a_M, -r_M, s_M)$. Note that the adversary reward is simply the negative of the agent reward. We will use Proximal Policy Optimization (Schulman et al., 2017) (PPO) to update our policies. We caution that we have overloaded notation slightly here and for adversary $i$, $\tau^i_{j=1:J_i}$ refers only to the trajectories in which the adversary was selected: adversaries will only be updated using trajectories where they were active.
At the end of a training iteration, we update all our policies using gradient descent. The algorithm is summarized below:
Algorithm 1: Robustness via Adversary Populations
Initialize $\theta, \phi_1, \ldots, \phi_n$ using Xavier initialization (Glorot & Bengio, 2010);
while not converged do
    for rollout $j = 1 \ldots J$ do
        sample adversary $i \sim U(1, n)$;
        run policies $\pi_\theta$, $\bar{\pi}_{\phi_i}$ in the environment until termination;
        collect trajectories $\tau_j$, $\tau^i_j$
    end
    update $\theta, \phi_1, \ldots, \phi_n$ using PPO (Schulman et al., 2017) and trajectories $\tau_j$ for $\theta$ and $\tau^i_j$ for each $\phi_i$;
end
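A minimal Python sketch of this training loop is given below. The `act`/`update` interfaces on the agent and adversaries are placeholders rather than RLlib's actual API, the action composition assumes the Noisy Action Robust MDP perturbation from Sec. 3.1.1, and the old Gym step API is assumed.

```python
import numpy as np

def train_rap(agent, adversaries, env, num_iterations, rollouts_per_iter, alpha=1.0):
    for _ in range(num_iterations):
        agent_batch = []
        adv_batches = [[] for _ in adversaries]
        for _ in range(rollouts_per_iter):
            i = np.random.randint(len(adversaries))        # sample adversary uniformly
            obs, done = env.reset(), False
            while not done:
                a = agent.act(obs)
                a_bar = adversaries[i].act(obs)
                next_obs, r, done, _ = env.step(a + alpha * a_bar)
                agent_batch.append((obs, a, r, next_obs))
                adv_batches[i].append((obs, a_bar, -r, next_obs))  # negated reward
                obs = next_obs
        agent.update(agent_batch)                           # PPO-style update for the agent
        for adv, batch in zip(adversaries, adv_batches):
            if batch:                                       # each adversary updates only on
                adv.update(batch)                           # rollouts where it was active
    return agent
```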
5 EXPERIMENTS
In this section we present experiments on continuous control tasks from the OpenAI Gym Suite (Brockman et al., 2016; Todorov et al., 2012). We compare with the existing literature and evaluate the efficacy of a population of learned adversaries across a wide range of state and action space sizes. We investigate the following hypotheses:
H1. Agents are more likely to overfit to a single adversary than a population of adversaries, leaving them less robust on in-distribution tasks.
H2. Agents trained against a population of adversaries will generalize better, leading to improved performance on out-of-distribution tasks.
In-distribution tasks refer to the agent playing against perturbations that are in the training distribution: adversaries that add their actions onto the agent. However, the particular form of the adversary and their restricted perturbation magnitude means that there are many dynamical systems that they cannot represent (for example, significant variations of joint mass and friction). These tasks are denoted as out-of-distribution tasks. All of the tasks in the test set described in Sec. 5.1 are likely out-of-distribution tasks.
5.1 EXPERIMENTAL SETUP AND HYPERPARAMETER SELECTION
While we provide exact details of the hyperparameters in the Appendix, adversarial settings require additional complexity in hyperparameter selection. In the standard RL procedure, optimal hyperparameters are selected on the basis of maximum expected cumulative reward. However, if an agent playing against an adversary achieves a large cumulative reward, it is possible that the agent was simply playing against a weak adversary. Conversely, a low score does not necessarily indicate a strong adversary nor robustness: it could simply mean that we trained a weak agent.
To address this, we adopt a version of the train-validate-test split from supervised learning. We use the mean policy performance on a suite of validation tasks to select the hyperparameters, then we train the policy across ten seeds and report the resultant mean and standard deviation over twenty trajectories. Finally, we evaluate the seeds on a holdout test set of eight additional model-mismatch tasks. These tasks vary significantly in difficulty; for visual clarity we report the average across tasks in this paper and report the full breakdown across tasks in the Appendix.
We experiment with the Hopper, Ant, and Half Cheetah continuous control environments used in the original RARL paper Pinto et al. (2017); these are shown in Fig. 1. To generate the validation model mismatch, we pre-define ranges of mass and friction coefficients as follows: for Hopper, mass ∈ [0.7, 1.3] and friction ∈ [0.7, 1.3]; Half Cheetah and Ant, mass ∈ [0.5, 1.5] and friction ∈ [0.1, 0.9]. We scale the friction of every Mujoco geom and the mass of the torso with the same (respective) coefficients. We compare the robustness of agents trained via RAP against: 1) agents trained against a single adversary in a zero-sum game, 2) oracle agents trained using domain randomization, and 3) an agent trained only using PPO and no perturbation mechanism. To train the domain randomization
oracle, at each rollout we uniformly sample a friction and mass coefficient from the validation set ranges. We then scale the friction of all geoms and the mass of the torso by their respective coefficients; this constitutes directly training on the validation set. To generate the test set of model mismatch, we take both the highest and lowest friction coefficients from the validation range and apply them to different combinations of individual geoms. For the exact selected combinations, please refer to the Appendix.
As further validation of the benefits of RAP, we include an additional set of experiments on a continuous control task, a gridworld maze search task, and a Bernoulli Bandit task in Appendix Sec. F. Finally, we note that both our agent and adversary networks are two layer-neural networks with 64 hidden units in each layer and a tanh nonlinearity.
6 RESULTS
H1. In-Distribution Tasks: Analysis of Overfitting A globally minimax optimal adversary should be unexploitable and perform equally well against any adversary of equal strength. We investigate the optimality of our policy by asking whether the minimax agent is robust to swaps of adversaries from different training runs, i.e. different seeds. Fig. 2 shows the result of these swaps for the one adversary and three adversary case. The diagonal corresponds to playing against the adversaries the agent was trained with while every other square corresponds to playing against adversaries from a different seed. To simplify presentation, in the three adversary case, each square is the average performance against all the adversaries from that seed. We observe that the agent trained against three adversaries (top row right) is robust under swaps while the single adversary case is not (top row left). The agent trained against a single adversary is highly exploitable, as can be seen by its extremely sub-par performance against an adversary from any other seed. Since the adversaries off-diagonal are feasible adversaries, this suggests that we have found a poor local optimum of the objective.
In contrast, the three adversary case is generally robust regardless of which adversary it plays against, suggesting that the use of additional adversaries has made the agent more robust. One possible hypothesis for why this could be occurring is that the adversaries in the "3 adversary" case are somehow weaker than the adversaries in the "1 adversary" case. The middle row of the figure shows that it is not the case that the improved performance of the agent playing against the three adversaries is due to some weakness of the adversaries. If anything, the adversaries from the three adversary case are stronger as the agent trained against 1 adversary does extremely poorly playing against the three adversaries (left) whereas the agent trained against three adversaries still performs well when playing against the adversaries from the single-adversary runs. Finally, the bottom row investigates how an agent trained with domain randomization fares against adversaries from either training regime. In neither case is the domain randomization agent robust on these tasks.
H2. Out-of-Distribution Tasks: Robustness and Generalization of Population Training
Here we present the results from the validation and holdout test sets described in Section 5.1. We compare the performance of training with adversary populations of size three and five against vanilla PPO, the domain randomization oracle, and the single minimax adversary. We refer to domain randomization as an oracle as it is trained directly on the test distribution.
Fig.6 shows the average reward (the average of ten seeds across the validation or test sets respectively) for each environment. Table 1 gives the corresponding numerical values and the percent change of each policy from the baseline. Standard deviations are omitted on the test set due to wide variation in task difficulty; the individual tests that we aggregate here are reported in the Appendix with
appropriate error bars. In all environments we achieve a higher reward across both the validation and holdout test sets using RAP of size three and/or five when compared to the single minimax adversary case. These results from testing on new environments with altered dynamics support hypothesis H2: that training with a population of adversaries leads to more robust policies than training with a single adversary in out-of-distribution tasks. Furthermore, while the performance is only comparable with the domain randomization oracle, the adversarial approach does not require prior engineering of appropriate randomizations. Moreover, despite being trained directly on these out-of-distribution tasks, domain randomization can have serious failure modes due to its formulation. A detailed analysis of this can be found in Appendix E.
For a more detailed comparison of robustness across the validation set, Fig. 4 shows heatmaps of the performance across all the mass, friction coefficient combinations. Here we highlight the heatmaps for Hopper and Half Cheetah for vanilla PPO, domain randomization oracle, single adversary, and best adversary population size. Additional heatmaps for other adversary population sizes and the Ant environment can be found in the Appendix. Note that Fig. 4 is an example of a case where a single adversary has negligible effect on or slightly reduces the performance of the resultant policy on the
validation set. This supports our hypothesis that a single adversary can actually lower the robustness of an agent.
7 CONCLUSIONS AND FUTURE WORK
In this work we demonstrate that the use of a single adversary to approximate the solution to a minimax problem does not consistently lead to improved robustness. We propose a solution through the use of multiple adversaries (RAP), and demonstrate that this provides robustness across a variety of robotics benchmarks. We also compare RAP with domain randomization and demonstrate that while DR can lead to a more robust policy, it requires careful parametrization of the domain we sample from to ensure robustness. RAP does not require this tuning, allowing for use in domains where appropriate tuning requires extensive prior knowledge or expertise.
There are several open questions stemming from this work. While we empirically demonstrate the effects of RAP, we do not have a compelling theoretical understanding of why multiple adversaries are helping. Perhaps RAP helps approximate a mixed Nash equilibrium as discussed in Sec. 1 or
perhaps population based training increases the likelihood that one of the adversaries is strong? Would the benefits of RAP disappear if a single adversary had the ability to represent mixed Nash?
There are some extensions of this work that we would like to pursue. We have looked at the robustness of our approach in simulated settings; future work will examine whether this robustness transfers to real-world settings. Additionally, our agents are currently memory-less and therefore cannot perform adversary identification; perhaps memory leads to a system-identification procedure that improves transfer performance. Our adversaries can also be viewed as forming a task distribution, allowing them to be used in continual learning approaches like MAML (Nagabandi et al., 2018) where domain randomization is frequently used to construct task distributions.
A FULL DESCRIPTION OF THE CONTINUOUS CONTROL MDPS
We use the Mujoco ant, cheetah, and hopper environments as a test of the efficacy of our strategy versus the 0 adversary, 1 adversary, and domain randomization baselines. We use the Noisy Action Robust MDP formulation Tessler et al. (2019) for our adversary parametrization. If the normal system dynamics are
$s_{k+1} = s_k + f(s_k, a_k)\Delta t$
the system dynamics under the adversary are
$s_{k+1} = s_k + f(s_k, a_k + a^{adv}_k)\Delta t$
where $a^{adv}_k$ is the adversary action at time $k$.
The notion here is that the adversary action is passed through the dynamics function and represents some additional set of dynamics. It is standard to clip actions within some boundary but for the above reason, we clip the agent and adversary actions separately. Otherwise, an agent would be able to limit the effect of the adversary by always taking actions at the bounds of its clipping range. The agent is clipped between [−1, 1] in the Hopper environment and the adversary is clipped between [−.25, .25]. The MDP through which we train the agent policy is characterized by the following states, actions, and rewards:
• $s^{agent}_t = [o_t, a_t]$ where $o_t$ is an observation returned by the environment, and $a_t$ is the action taken by the agent.
• We use the standard rewards provided by the OpenAI Gym Mujoco environments at https://github.com/openai/gym/tree/master/gym/envs/mujoco. For the exact functions, please refer to the code at ANONYMIZED.
• $a^{agent}_t \in [a_{min}, a_{max}]^n$.
The MDP for adversary i is the following:
• $s_t = s^{agent}_t$. The adversary sees the same states as the agent.
• The adversary reward is the negative of the agent reward.
• $a^{adv}_t \in [a^{adv}_{min}, a^{adv}_{max}]^n$.
For our domain randomization Hopper baseline, we use the following randomization: at each rollout, we scale the friction of all joints by a single value uniformly sampled from [0.7, 1.3]. We also randomly scale the mass of the ’torso’ link by a single value sampled from [0.7, 1.3]. For Half-Cheetah and Ant the range for friction is [0.1, 0.9] and for mass the range is [0.5, 1.5].
B INCREASING ADVERSARY POOL SIZE
We investigate whether RAP is robust to the number of adversaries, as this would be a useful property for minimizing hyperparameter search. Here we hypothesize that while having more adversaries can represent a wider range of dynamics to learn to be robust to, we expect there to be diminishing returns due to the decreased batch size that each adversary receives (the total number of environment steps is held constant across all training variations). We expect decreasing batch size to lead to worse agent policies, since the batch will contain under-trained adversary policies. We cap the number of adversaries at eleven as our machines ran out of memory at this value. We run ten seeds for every adversary value and Fig. 5 shows the results for Hopper. Agent robustness on the test set increases monotonically up to three adversaries and roughly begins to decrease after that point. This suggests that a trade-off between adversary number and performance exists, although we do not definitively show that diminishing batch sizes are the source of this trade-off. However, we observe in Fig. 6 that both three and five adversaries perform well across all studied Mujoco domains.
C HOLDOUT TESTS
In this section we describe in detail all of the holdout tests used.
C.1 HOPPER
The Mujoco geom properties that we modified are attached to a particular body and determine its appearance and collision properties. For the Mujoco holdout transfer tests we pick a subset of the hopper 'geom' elements and scale the contact friction values by the maximum friction coefficient, 1.3. Likewise, for the rest of the 'geom' elements, we scale the contact friction by the minimum value of 0.7. The body geoms and their names are visible in Fig. 7.
The exact combinations and the corresponding test name are indicated in Table 2 for Hopper.
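A minimal sketch of how one such holdout combination could be constructed is shown below; the geom-name lookup and the function signature are illustrative assumptions following mujoco-py conventions.

```python
def apply_friction_holdout(model, nominal_friction, geoms_at_max,
                           max_scale=1.3, min_scale=0.7):
    """Sketch of one Hopper holdout test: the named geoms get the maximum
    friction scale and every other geom gets the minimum scale."""
    for gid, name in enumerate(model.geom_names):
        scale = max_scale if name in geoms_at_max else min_scale
        model.geom_friction[gid] = nominal_friction[gid] * scale
```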
C.2 CHEETAH
The Mujoco geom properties that we modified are attached to a particular body and determine its appearance and collision properties. For the Mujoco holdout transfer tests we pick a subset of the cheetah 'geom' elements and scale the contact friction values by the maximum friction coefficient, 0.9. Likewise, for the rest of the 'geom' elements, we scale the contact friction by the minimum value of 0.1. The body geoms and their names are visible in Fig. 8.
The exact combinations and the corresponding test names for Half-Cheetah are indicated in Table 4.
C.3 ANT
We will use 'torso' to indicate the head piece, 'leg' to refer to one of the four legs that contact the ground, and 'aux' to indicate the geom that connects the leg to the torso. Since the ant is symmetric, we adopt the convention that two of the legs are front-left and front-right and two legs are back-left and back-right. Fig. 9 depicts the convention. For the Mujoco holdout transfer tests we pick a subset of the ant 'geom' elements and scale the contact friction values by the maximum friction coefficient, 0.9. Likewise, for the rest of the 'geom' elements, we scale the contact friction by the minimum value of 0.1.
The exact combinations and the corresponding test names for Ant are indicated in Table 4.
D RESULTS
Here we recompute the values of all the results and display them with appropriate standard deviations in tabular form.
There was not space in the main text for the ant validation set results, so they are reproduced here.
E CHALLENGES OF DOMAIN RANDOMIZATION
In our experiments, we find that naive parametrization of domain randomization can result in a brittle policy, even when evaluated on the same distribution it was trained on.
Effect of Domain Randomization Parametrization
From Fig. 6, we see that in the Ant and Hopper domains, the DR oracle achieves the highest transfer reward in the validation set as expected since the DR oracle is trained directly on the validation set. Interestingly, we found that the domain randomization policy performed much worse on the Half Cheetah environment, despite having access to the mass and friction coefficients during training. Looking at the performance for each mass and friction combination in Fig. 11, we found that the DR agent was able to perform much better at the low friction coefficients and learned to prioritize those values at the cost of significantly worse performance on average. This highlights a potential issue with domain randomization: while training across a wide variety of dynamics parameters can increase robustness, naive parametrizations can cause the policy to exploit subsets of the randomized domain and lead to a brittle policy. This is a problem inherent to the expectation across domains that is used in domain randomization; if some subset of randomizations have sufficiently high reward the agent will prioritize performance on those at the expense of robustness.
We hypothesize that this is due to the DR objective in Eq. 2 optimizing in expectation over the sampling range. To test this, we created a separate range of 'good' friction parameters [0.5, 1.5] and compared the robustness of a DR policy trained with the 'good' range against a DR policy trained with the 'bad' range [0.1, 0.9] in Fig. 11. Here we see that a 'good' parametrization leads to the expected result where domain randomization is the most robust. We observe that domain randomization underperforms adversarial training on the validation set despite the validation set literally constituting the training set for domain randomization. This suggests that underlying optimization difficulties caused by significant variations in reward scaling are partially to blame for the poor performance of domain randomization. Notably, the adversary-based methods are not susceptible to the same parametrization issues.
Alternative DR policy architecture
As discussed above and also identified in Rajeswaran et al. (2016), the expectation across randomizations that is used in domain randomization causes it to prioritize a policy that performs well in a high-reward subset of the randomization domains. This is harmless when domain randomization is used for randomizations of state, such as color, where all the randomization environments have the same expected reward, but it has more pernicious effects in dynamics randomizations. Consider a set of N randomization environments, N − 1 of which have reward $R_{low}$ and one of which has reward $R_{high}$, where $R_{high} \gg R_{low}$. If the agent cannot identify which of the randomization environments it is in, the intuitively optimal solution is to pick the policy that optimizes the high-reward environment. One possible way out of the quandary is to use an agent that has some memory, such as an LSTM-based policy, thus giving the possibility of identifying which environment the agent is in and deploying the appropriate response. However, if $R_{high}$ is sufficiently large and there is some reduction in reward associated with performing the system identification necessary to identify the randomization, then the agent will not perform the system identification and will prioritize achieving $R_{high}$. As an illustration of this challenge, Fig. 12 compares the results of domain randomization on the half-cheetah environment with and without memory. In the memory case, we use a 64-unit LSTM. As can be seen, there is an improvement in the ability of the domain-randomized policy to perform well on the full range of low-friction / high-mass values, but the improved performance does not extend to higher friction values. In fact, the performance contrast is enhanced even further, as the policy does a good deal worse on the high friction values than in the case without memory.
F ADDITIONAL EXPERIMENTS
Here we outline a few more experiments we ran that demonstrate the value of additional adversaries. We run the following tasks:
F.1 DEEPMIND CONTROL CATCH
This task uses the same Markov Decision Process described in Sec. A. The challenge (Tassa et al., 2020), pictured in Fig. 13, is to get the ball to fall inside the cup. As in the other continuous control
tasks, we apply the adversary to the actions of the agents (which is controlling the cup). We then test on variations of the mass of both the ball and the cup. The heatmaps for this task are presented in Fig. 14 where the 3 adversary case provides a slight improvement in the robustness region relative to the 1 adversary case.
F.2 MULTI-ARMED BERNOULLI BANDITS
As an illustrative example, we examine a multi-armed stochastic bandit, a problem widely studied in the reinforcement learning literature. Generally, successful strategies for multi-armed bandit problems involve balancing exploration across arms with exploiting the 'best' arm. A "robust" strategy should have vanishing regret as the time horizon goes to infinity. We construct a 10-armed bandit where each arm i is parametrized by a value p, the probability of that arm returning 1. The goal of the agent is to minimize the total cumulative regret $R_n$ over a horizon of n steps:

$R_n = n\max_i \mu_i - \mathbb{E}\Big[\sum_{t=0}^{n} \mu_{a_t}\Big],$

where $a_t$ corresponds to picking a particular arm. At each step, the agent is given an observation buffer of stacked frames consisting of all previous (action, reward) pairs, padded with zeros to keep the length fixed. The adversary has a horizon of 1: at time step zero it receives an observation of 0 and outputs the probability for each arm. At the termination of the horizon, the adversary receives the negative of the cumulative agent reward. For our domain randomization baseline we use uniform sampling of the p value for each arm. We chose a horizon length of T = 100 steps. The MDP of the agent is characterized as follows:
• $s_t = [0_{n(T-t)\times 1}, r_t, a_t, r_{t-1}, a_{t-1}, \ldots, r_0, a_0]$
• $r_t = X(a_t) - \max_i \mu_i$
• $a^{agent}_t \in \{0, \ldots, 9\}$
At each step, the agent is given an observation buffer of stacked frames consisting of all previous (action, reward) pairs. The buffer is padded with zeros to match the horizon length. For each training step, the agent receives a reward of the negative expected regret. We set up the adversary problem as an MDP with a horizon of 1.
• $s_t = [0.0]$
• $r = -\sum_{t=1}^{T} r_t$
• $a^{adv} \in [0, 1]^{10}$
During adversarial training, we sample a random adversary at the beginning of each rollout and allow it to pick 10 p values, which are then shuffled randomly and assigned to the arms (this is to prevent the agent from deterministically knowing which arm has which p value). The adversary is always given an observation of a vector of zeros and is rewarded once at the end of the rollout. We also construct a holdout test of two bandit examples which we colloquially refer to as "evenly spread" and "one good arm." In "evenly spread", the arms, going from 1 to 10, have evenly spaced probabilities in steps of 0.1: 0, 0.1, 0.2, 0.3, ..., 0.8, 0.9. In "one good arm", 9 arms have probability 0.1 and one arm has probability 0.9. As our policy for the agent, we use a Gated Recurrent Unit network with hidden size 256.
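For concreteness, the following sketch instantiates the two holdout bandits and the regret bookkeeping; the `pick_arm` callable stands in for the trained agent, and which arm is the "good" one in `one_good_arm` is arbitrary—both are assumptions for illustration.

```python
import numpy as np

# the two holdout instances described above
evenly_spread = np.arange(10) / 10.0      # p = 0.0, 0.1, ..., 0.9
one_good_arm = np.full(10, 0.1)
one_good_arm[3] = 0.9                      # position of the good arm is arbitrary

def run_bandit(arm_probs, pick_arm, horizon=100, seed=0):
    """Roll out one Bernoulli-bandit episode and return the cumulative regret.
    `pick_arm(history)` stands in for the trained GRU agent."""
    rng = np.random.default_rng(seed)
    best_mean = arm_probs.max()
    history, regret = [], 0.0
    for _ in range(horizon):
        arm = pick_arm(history)
        reward = float(rng.random() < arm_probs[arm])   # Bernoulli draw
        regret += best_mean - arm_probs[arm]             # expected per-step regret
        history.append((arm, reward))
    return regret
```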
An interesting feature of the bandit task is that it makes clear that the single adversary approach corresponds to training on a single, adversarially constructed bandit instance. Surprisingly, as indicated in Fig. 15, this does not perform terribly on our two holdout tasks. However, there is a clear improvement on both tasks in the four adversary case. All adversarial approaches outperform an Upper Confidence Bound-based expert (shown in red). Interestingly, domain randomization, which had superficially good reward at training time, completely fails on the "one good arm" holdout task. This suggests another possible failure mode of domain randomization where in high dimensions uniform sampling may just fail to yield interesting training tasks. Finally, we note that since the upper confidence approach only tries to minimize regret asymptotically, our outperforming it may simply be due to our relatively short horizon; we simply provide it as a baseline.
G COST AND HYPERPARAMETERS
Here we reproduce the hyperparameters we used in each experiment and compute the expected runtime and cost of each experiment. Numbers indicated in {} were each used for one run. Otherwise the parameter was kept fixed at the indicated value.
G.1 HYPERPARAMETERS
For Mujoco the hyperparameters are:
• Learning rate:
  – {.0003, .0005} for half cheetah
  – {.0005, .00005} for hopper and ant
• Generalized Advantage Estimation λ:
  – {0.9, 0.95, 1.0} for half cheetah
  – {0.5, 0.9, 1.0} for hopper and ant
• Discount factor γ = 0.995
• Training batch size: 100000
• SGD minibatch size: 640
• Number of SGD steps per iteration: 10
• Number of iterations: 700
• We set the seed to 0 for all hyperparameter runs.
• The maximum horizon is 1000 steps.
For the validation across seeds we used 10 seeds ranging from 0 to 9. All other hyperparameters are the default values in RLlib (Liang et al., 2017) version 0.8.0.
G.2 COST
For all of our experiments we used AWS EC2 c4.8xlarge instances which come with 36 virtual CPUs. For the Mujoco experiments, we use 2 nodes and 11 CPUs per hyper-parameter, leading to one full hyper-parameter sweep fitting onto the 72 CPUs. We run the following set of experiments and ablations, each of which takes 8 hours.
• 0 adversaries
• 1 adversary
• 3 adversaries
• 5 adversaries
• Domain randomization
for a total of 5 experiments for each of Hopper, Cheetah, and Ant. For the best hyperparameters and each experiment listed above we run a seed search with 6 CPUs per seed, a process which takes about 12 hours. This leads to a total of 2 × 8 × 5 × 3 + 2 × 12 × 3 × 5 = 600 node-hours and 36 × 600 ≈ 22000 CPU-hours. At a cost of ≈ 0.3 dollars per node per hour for EC2 spot instances, this gives ≈ 180 dollars to fully reproduce our results for this experiment. If the chosen hyperparameters are used and only the seeds are swept, this is ≈ 100 dollars.
G.3 RUN TIME AND SAMPLE COMPLEXITY
Here we briefly analyze the expected run-time of our algorithms. While there is an additional cost for adding a single adversary, equal to the sum of the cost of computing gradients at train time and actions at run-time for an additional agent, there is no additional cost for adding further adversaries. Since we divide the total set of samples per iteration amongst the adversaries, we compute approximately the same number of gradients and actions in the many-adversary case as we do in the single-adversary case. In Fig. 16, the plot of reward vs. wall-clock time supports this argument: the 0-adversary case runs the fastest, but all the different adversary numbers complete 700 iterations of training in approximately the same amount of time. Additionally, Fig. 17 demonstrates that there is some variation in sample complexity, but the trend is not consistent across adversary number.
G.4 CODE
Our code is available at ANONYMIZED. For our reinforcement learning code-base we used RLlib (Liang et al., 2017) version 0.8.0 and did not make any custom modifications to the library.
H PURE NASH EQUILIBRIA DO NOT NECESSARILY EXIST
While there are canonical examples of games in which pure Nash equilibria do not exist, such as rock-paper-scissors, we are not aware of one for sequential games with continuous actions. Tessler et al. (2019) contains an example of a simple, horizon-1 MDP where duality is not satisfied: the pure minimax solution does not equal the value of the pure maximin solution, and a greater value can be achieved by randomizing one of the policies, showing that there is no pure equilibrium. | 1. What is the main contribution of the paper in improving robustness in reinforcement learning?
2. What are the strengths of the proposed approach, particularly in comparison to previous works?
3. What are the weaknesses of the paper, especially regarding experimental evaluation and computational overhead?
4. How does the reviewer assess the clarity and novelty of the paper's content?
5. Are there any suggestions for improving the method, such as building it upon off-policy algorithms or comparing it with a naive extension of the single adversary case? | Review | Review
Summary: This paper proposes to improve robustness in reinforcement learning via a population of diverse adversaries, where previous works mainly focus on the use of a single adversary to mitigate the problem that the trained policy could be highly exploitable by the adversary. Specifically, at each iteration, it randomly selects an adversary from the population for rollouts, and it is trained by PPO. Experiments are conducted on 3 MuJoCo environments in comparison with vanilla PPO and domain randomization.
Strong points: Using a population of adversaries to improve robustness in RL is interesting. The idea is simple, and the writing is clear.
Concerns: My major concern is with the experimental evaluation. a. Results are shown using final performance. I am curious about the learning curves – how does the method compare against other baselines in terms of sample efficiency? A side-effect of using a population is that RAP needs to update n adversaries at each training iteration compared with using a single adversary, and will incur more computation overhead. Could the authors fairly compare with other baselines in terms of this and show the learning curves?
b. How MUCH LONGER does it take to run RAP compared with other baselines? How much more memory does it take to use n adversaries compared with a single adversary?
c. Could the authors compare with a naive extension of the single-adversary case in which the single adversary samples n actions? Is that baseline comparable with RAP using n adversaries?
d. I am confused why RAP is built upon an on-policy algorithm. A number of works using population-based methods are built upon off-policy algorithms, as agents in the population can share samples, which could be beneficial. Could the authors build the method upon off-policy algorithms to further improve the applicability of RAP?
e. For Figure 3, the performance gain over using a single adversary is not significant on HalfCheetah and Ant, and the results are not convincing enough to support the claim.
As the paper uses population-based methods, it is also worth discussing its relation to Khadka et al. (2018), etc. |
ICLR | Title
Intention Propagation for Multi-agent Reinforcement Learning
Abstract
A hallmark of an AI agent is to mimic human beings in understanding and interacting with others. In this paper, we propose a collaborative multi-agent reinforcement learning algorithm to learn a joint policy through interactions over agents. To make a joint decision over the group, each agent makes an initial decision and tells its policy to its neighbors. Then each agent modifies its own policy properly based on the received messages and spreads out its plan. As this intention propagation procedure goes on, we prove that it converges to a mean-field approximation of the joint policy within the framework of neural embedded probabilistic inference. We evaluate our algorithm on several large-scale challenging tasks and demonstrate that it outperforms previous state-of-the-art methods.
1 INTRODUCTION
Collaborative multi-agent reinforcement learning is an important sub-field of multi-agent reinforcement learning (MARL), where the agents learn to coordinate to achieve joint success. It has wide applications in traffic control (Kuyer et al., 2008), autonomous driving (Shalev-Shwartz et al., 2016) and the smart grid (Yang et al., 2018). To learn coordination, the interactions between agents are indispensable. For instance, humans can reason about others' behaviors or infer other people's intentions through communication and then determine an effective coordination plan. However, how to design a mechanism for such interaction in a principled way, while at the same time scaling to large real-world applications, is still a challenging problem.
Recently, there has been a surge of interest in solving the collaborative MARL problem (Foerster et al., 2018; Qu et al., 2019; Lowe et al., 2017). Among them, joint-policy approaches have demonstrated their superiority (Rashid et al., 2018; Sunehag et al., 2018; Oliehoek et al., 2016). A straightforward approach is to replace the action in single-agent reinforcement learning by the joint action $\mathbf{a} = (a_1, a_2, ..., a_N)$, but this obviously suffers from an exponentially large action space. Thus several approaches have been proposed to factorize the joint action space to mitigate this issue, which can be roughly grouped into two categories:
• Factorization on the policy. This approach explicitly assumes that $\pi(\mathbf{a}|s) := \prod_{i=1}^{N}\pi_i(a_i|s)$, i.e., policies are independent (Foerster et al., 2018; Zhang et al., 2018). To mitigate the instability issue caused by the independent learners, it generally needs a centralized critic.
• Factorization on the value function. This approach has a similar spirit but factorizes the joint value function into several utility functions, each just involving the actions of one agent (Rashid et al., 2018; Sunehag et al., 2018).
However, these two approaches lack interactions between agents, since in their algorithms agent $i$ does not care about the plan of agent $j$. Indeed, they may suffer from a phenomenon called relative over-generalization in game theory, observed by Wei & Luke (2016); Castellini et al. (2019); Palmer et al. (2018). Approaches based on the coordination graph would effectively prevent such cases, where the value function is factorized as a summation of utility functions on pairwise or local joint actions (Guestrin et al., 2002; Böhmer et al., 2020). However, they can only be applied to small-scale games with discrete actions.
Furthermore, despite the empirical success of the aforementioned work in certain scenarios, it still lacks theoretical insight. In this work, we only make a simple yet realistic assumption: the reward function $r_i$ of each agent $i$ depends only on its individual action and the actions of its neighbors (and the state), i.e.,

$r_i(s, \mathbf{a}) = r_i(s, a_i, a_{\mathcal{N}_i}), \quad (1)$

where we use $\mathcal{N}_i$ to denote the neighbors of agent $i$ and $s$ to denote the global state. It says the goal or decision of an agent is explicitly influenced by a small subset $\mathcal{N}_i$ of other agents. Note that such an assumption is reasonable in lots of real scenarios. For instance,
• The traffic light at an intersection makes its decision on the phase change mainly relying on the traffic flow around it and the policies of its neighboring traffic lights.
• The main goal of a defender in a soccer game is to tackle the opponent's attacker, while the defender rarely needs to pay attention to the opponent goalkeeper's strategy.
Based on the assumption in equation 1, we propose a principled multi-agent reinforcement learning algorithm in the framework of probabilistic inference, where the objective is to maximize the long-term reward of the group, i.e., $\sum_{t=0}^{\infty}\sum_{i=1}^{N}\gamma^t r_i^t$ (see details in Section 4).
Note that since each agent's reward depends on its neighbors, we still need a joint policy to maximize the global reward through interactions. In this paper, we derive an iterative procedure for such interaction to learn the joint policy in collaborative MARL and name it intention propagation. Particularly,

• In the first round, each agent $i$ makes an independent decision and spreads out its plan $\tilde{\mu}_i$ (we name it the intention) to its neighbors.
• In the second round, agent $i$ modifies its initial intention properly based on its neighbors' intentions $\tilde{\mu}_j$, $j \in \mathcal{N}_i$, and propagates its intention $\tilde{\mu}_i$ again.
• In the third round, it revises the decision of the second round with a similar argument.
• As this procedure goes on, we show that the final output of the agents' policies converges to the mean-field approximation (the variational inference method from probabilistic graphical models (Bishop, 2006)) of the joint policy.
In addition, this joint policy has the form of a Markov Random Field induced by the locality of the reward function (Proposition 1). Therefore, such a procedure is computationally efficient when the underlying graph is sparse, since in each round each agent just needs to care about what its neighbors intend to do. Remark: (1) Our work is not related to the mean-field game (MFG) (Yang et al., 2018). The goal of the MFG is to find the Nash equilibrium, while our work aims at the optimal joint policy in the collaborative game. Furthermore, MFG generally assumes agents are identical and interchangeable. When the number of agents goes to infinity, MFG can view the state of other agents as a population state distribution. In our problem, we do not have such assumptions.
(2) Our analysis is not limited to the mean-field approximation. When we change the message-passing structure of intention propagation, we can show that it converges to other approximations of the joint policy, e.g., the one given by loopy belief propagation in variational inference (Yedidia et al., 2001) (see Appendix B.2).
Contributions: (1) We propose a principled method named intention propagation to solve the joint-policy collaborative MARL problem; (2) Our method is computationally efficient and can scale up to one thousand agents, thus meeting the requirement of real applications; (3) Empirically, it outperforms state-of-the-art baselines by a wide margin when the number of agents is large; (4) Our work builds a bridge between MARL and neural embedded probabilistic inference, which would lead to new algorithms beyond intention propagation.
Notation: $s_i^t$ and $a_i^t$ represent the state and action of agent $i$ at time step $t$. The neighbors of agent $i$ are represented as $\mathcal{N}_i$. We denote by $X$ a random variable with domain $\mathcal{X}$ and refer to instantiations of $X$ by the lower-case character $x$. We denote a density on $\mathcal{X}$ by $p(x)$ and denote the space of all such densities by $\mathcal{P}$.
2 RELATED WORK
We first discuss the work on factorized approaches to the joint policy. COMA designs a MARL algorithm based on the actor-critic framework with independent actors $\pi_i(a_i|s)$, where the joint policy is factorized as $\pi(\mathbf{a}|s) = \prod_{i=1}^{N}\pi_i(a_i|s)$ (Foerster et al., 2018). MADDPG considers MARL in the cooperative or competitive setting, where it creates a critic for each agent (Lowe et al., 2017). Other similar works include (de Witt et al., 2019; Wei et al., 2018). Another way is to factorize the value function into several utility functions. Sunehag et al. (2018) assume that the overall Q function can be factorized as $Q(s, a_1, a_2, .., a_N) = \sum_{i=1}^{N} Q_i(s_i, a_i)$. QMIX extends this work to include a richer class of functions, where it assumes the overall Q function is a monotonic function w.r.t. each $Q_i(s_i, a_i)$ (Rashid et al., 2018). Similarly, Son et al. (2019) further relax the structural constraint on the joint value function. However, these factorized methods suffer from the relative overgeneralization issue (Castellini et al., 2019; Palmer et al., 2018). Generally speaking, it pushes the agents to underestimate a certain action because of the low rewards they receive, while they could get a higher one by perfectly coordinating.
A middle ground between the (fully) joint policy and the factorized policy is the coordination graph (Guestrin et al., 2002), where the value function is factorized as a summation of utility functions on pairwise actions. Böhmer et al. (2020); Castellini et al. (2019) combine deep learning techniques with the coordination graph. This addresses the issue of relative overgeneralization but still has two limitations, especially in the large-scale MARL problem. (1) The max-sum algorithm can only be implemented in discrete action spaces, since it needs a max-sum operation over the actions of the Q function. (2) Even in the discrete-action case, each step of Q-learning has to run several loops of the max-sum operation over the whole graph if there is a cycle in the graph. Our algorithm can handle both discrete and continuous action spaces and alleviates the scalability issue by designing an intention propagation network.
Another category of MARL considers communication among agents. The attention mechanism is used to decide when and with whom to communicate (Das et al., 2018). Foerster et al. (2016) propose an end-to-end method to learn a communication protocol. In (Liu et al., 2019; Chu et al., 2020), each agent sends its action information to its neighbors. In addition, Chu et al. (2020) require a strong assumption that the MDP has the spatial-temporal Markov property. However, they utilize neighbors' action information in a heuristic way, and thus it is unclear what the agents are learning (e.g., do they learn the optimal joint policy to maximize the group reward?). Jiang et al. (2020) propose DGN, which uses a GNN to spread the state-embedding information to neighbors. However, each agent still uses independent Q-learning to learn its policy and neglects other agents' plans. In contrast, we propose a principled algorithm where each agent makes decisions considering other agents' plans. Such a procedure can be parameterized by a GNN and other neural networks (see Section 4.1 and Appendix B.2). We prove its convergence to the solution of variational inference methods.
3 BACKGROUNDS
Probabilistic Reinforcement Learning: Probabilistic reinforcement learning (PRL) (Levine, 2018) is our building block. PRL defines the trajectory $\tau$ up to time step $T$ as $\tau = [s^0, a^0, s^1, a^1, ..., s^T, a^T, s^{T+1}]$. The probability distribution of the trajectory $\tau$ induced by the optimal policy is defined as $p(\tau) = \big[p(s^0)\prod_{t=0}^{T} p(s^{t+1}|s^t, a^t)\big]\exp\big(\sum_{t=0}^{T} r(s^t, a^t)\big)$, while the probability of the trajectory $\tau$ under a policy $\pi(a|s)$ is defined as $\hat{p}(\tau) = p(s^0)\prod_{t=0}^{T} p(s^{t+1}|s^t, a^t)\pi(a^t|s^t)$. The objective is to minimize the KL divergence between $\hat{p}(\tau)$ and $p(\tau)$. It is equivalent to maximum entropy reinforcement learning

$\max_{\pi} J(\pi) = \sum_{t=0}^{T}\mathbb{E}\big[r(s^t, a^t) + H(\pi(\cdot|s^t))\big],$
where it omits the discount factor $\gamma$ and the regularization factor $\alpha$ of the entropy term, since it is easy to incorporate them into the transition and reward respectively. Notice that in this framework the max operator in the Bellman optimality equation is replaced by the softmax operator, and thus the optimal policy is a softmax function of the Q function (Haarnoja et al., 2017). Such a framework subsumes state-of-the-art algorithms such as soft actor-critic (SAC) (Haarnoja et al., 2018). In each iteration, SAC optimizes the following loss functions of $Q$, $\pi$, and $V$, respectively:
$\mathbb{E}_{(s^t,a^t)\sim\mathcal{D}}\big[Q(s^t, a^t) - r(s^t, a^t) - \gamma\mathbb{E}_{s^{t+1}\sim p}[V(s^{t+1})]\big]^2, \quad \mathbb{E}_{s^t\sim\mathcal{D}}\mathbb{E}_{a^t\sim\pi}\big[\log\pi(a^t|s^t) - Q(s^t, a^t)\big],$

$\mathbb{E}_{s^t\sim\mathcal{D}}\big[V(s^t) - \mathbb{E}_{a^t\sim\pi_\theta}[Q(s^t, a^t) - \log\pi(a^t|s^t)]\big]^2,$ where $\mathcal{D}$ is the replay buffer.
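For concreteness, a minimal PyTorch sketch of these three losses is given below; the network interfaces, the batch layout, and the `policy.sample` helper are assumptions for illustration, not the implementation used in this paper.

```python
import torch.nn.functional as F

def sac_losses(batch, q_net, v_net, v_target, policy, gamma=0.99):
    """One-step SAC losses corresponding to the background equations above (sketch).
    `batch` is assumed to hold tensors s, a, r, s_next; `policy.sample(s)` returns
    a reparameterized action and its log-probability."""
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]

    # Q loss: regress Q(s,a) toward r + gamma * V_target(s')
    q_target = r + gamma * v_target(s_next).detach()
    q_loss = F.mse_loss(q_net(s, a), q_target)

    # V loss: regress V(s) toward E_{a~pi}[Q(s,a) - log pi(a|s)]
    a_pi, log_pi = policy.sample(s)
    v_loss = F.mse_loss(v_net(s), (q_net(s, a_pi) - log_pi).detach())

    # policy loss: E[log pi(a|s) - Q(s,a)]
    pi_loss = (log_pi - q_net(s, a_pi)).mean()
    return q_loss, v_loss, pi_loss
```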
Function Space Embedding of Distribution: In our work, we use the tool of embedding in a Reproducing Kernel Hilbert Space (RKHS) to design the intention propagation procedure (Smola et al., 2007). We let $\phi(X)$ be an implicit feature mapping and $X$ be a random variable with distribution $p(x)$. The embedding of $p(x)$ is given by $\mu_X := \mathbb{E}_X[\phi(X)] = \int\phi(x)p(x)dx$, where the distribution is mapped to its expected feature map. By assuming that there exists a feature space such that the embeddings are injective, we can treat the embedding $\mu_X$ of the density $p(x)$ as a sufficient statistic of the density, i.e., any information we need from the density is preserved in $\mu_X$ (Smola et al., 2007). Such an injectivity assumption generally holds under mild conditions (Sriperumbudur et al., 2008). This property is important since we can reformulate a functional $f: \mathcal{P} \to \mathbb{R}$ of $p(\cdot)$ using the embedding only, i.e., $f(p(x)) = \tilde{f}(\mu_X)$. It can also be generalized to the operator case. In particular, applying an operator $T: \mathcal{P} \to \mathbb{R}^d$ to a density can be equivalently carried out using its embedding, $T \circ p(x) = \tilde{T} \circ \mu_X$, where $\tilde{T}: \mathcal{F} \to \mathbb{R}^d$ is the alternative operator working on the embedding. In practice, $\mu_X$, $\tilde{f}$ and $\tilde{T}$ have complicated dependence on $\phi$. As such, we approximate them by neural networks, which is known as the neural embedding approach of distribution (Dai et al., 2016).
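As a concrete illustration, the embedding is simply the expected feature map and can be estimated by averaging the feature map over samples; the random-Fourier-feature map below is only one possible choice of $\phi$ and is an assumption for illustration.

```python
import numpy as np

def empirical_embedding(samples, feature_map):
    """Estimate mu_X = E[phi(X)] by averaging the feature map over samples."""
    return np.mean([feature_map(x) for x in samples], axis=0)

# illustrative finite-dimensional feature map (random Fourier features of an RBF kernel)
rng = np.random.default_rng(0)
W, b = rng.normal(size=(16, 1)), rng.uniform(0, 2 * np.pi, size=16)
rff = lambda x: np.sqrt(2.0 / 16) * np.cos(W @ np.atleast_1d(x) + b)

mu = empirical_embedding(rng.normal(size=500), rff)   # embedding of a 1-D Gaussian
```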
4 OUR METHOD
In this section, we present our method, intention propagation, for collaborative multi-agent reinforcement learning. To begin with, we formally define the problem as a networked MDP. The network is characterized by a graph $G = (\mathcal{V}, \mathcal{E})$, where each vertex $i \in \mathcal{V}$ represents an agent and the edge $ij \in \mathcal{E}$ represents the communication link between agents $i$ and $j$. We say $i, j$ are neighbors if they are connected by this edge. The corresponding networked MDP is characterized by a tuple $(\{\mathcal{S}_i\}_{i=1}^N, \{\mathcal{A}_i\}_{i=1}^N, p, \{r_i\}_{i=1}^N, \gamma, G)$, where $N$ is the number of agents, $\mathcal{S}_i$ is the local state of agent $i$ and $\mathcal{A}_i$ denotes the set of actions available to agent $i$. We let $\mathcal{S} := \prod_{i=1}^N \mathcal{S}_i$ and $\mathcal{A} := \prod_{i=1}^N \mathcal{A}_i$ be the global state and joint action space respectively. At time step $t+1$, the global state $s^{t+1} \in \mathcal{S}$ is drawn from the transition $s^{t+1} \sim p(\cdot|s^t, \mathbf{a}^t)$, conditioned on the current state $s^t$ and the joint action $\mathbf{a}^t = (a_1^t, a_2^t, ..., a_N^t) \in \mathcal{A}$. Each transition yields a reward $r_i^t = r_i(s^t, \mathbf{a}^t)$ for agent $i$, and $\gamma$ is the discount factor. The aim of our algorithm is to learn a joint policy $\pi(\mathbf{a}^t|s^t)$ to maximize the overall long-term reward (with an entropy term $H(\cdot|s)$ on the joint action $\mathbf{a}$)

$\eta(\pi) = \mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^t\Big(\sum_{i=1}^{N} r_i^t + H(\cdot|s^t)\Big)\Big],$
where each agent $i$ can just observe its own state $s_i$ and the messages from the neighborhood communication. We denote the neighbors of agent $i$ as $\mathcal{N}_i$ and further assume that the reward $r_i$ depends on the state and the actions of itself and its neighbors, i.e., $r_i(s,\mathbf{a}) := r_i(s, a_i, a_{\mathcal{N}_i})$. Such an assumption is reasonable in many real scenarios, as discussed in the introduction. In the following, we start the derivation with the fully observable case and discuss how to handle partial observation later. The roadmap of the derivation is as follows. First, we prove that the optimal policy has a Markov Random Field (MRF) form, which reduces the exponentially large search space to a polynomial one. However, implementing an MRF policy is not trivial in the RL setting (e.g., sampling an action from the policy). Thus we resort to variational inference methods (we focus on the mean-field approximation in the main paper and leave other methods to the appendix), but this would introduce complicated computations. Finally, we apply the kernel embedding method introduced in Section 3 to solve this problem and learn the kernel embedding by neural networks. We also discuss how to handle the partially observable setting.
4.1 REDUCE POLICY SEARCHING SPACE
Recall that our aim is to maximize the long-term reward with the entropy term. Therefore, we follow the definition of the optimal policy in probabilistic reinforcement learning (Levine, 2018) and obtain Proposition 1. It says that under the assumption $r_i(s, \mathbf{a}) = r_i(s, a_i, a_{\mathcal{N}_i})$, the optimal policy is in the form of a Markov Random Field (MRF). We prove the following proposition in Appendix I.1.

Proposition 1 The optimal policy has the form $\pi^*(\mathbf{a}^t|s^t) = \frac{1}{Z}\exp\big(\sum_{i=1}^{N}\psi_i(s^t, a_i^t, a_{\mathcal{N}_i}^t)\big)$, where $Z$ is the normalization term.
This proposition is important since it suggests that we should construct the policy $\pi(\mathbf{a}^t|s^t)$ with this form, e.g., a parametric family, to contain the optimal policy. If agent $i$ and its neighbors compose a clique, the policy reduces to an MRF and $\psi$ is the potential function. One common example is when the reward is a function of pairwise actions, i.e., $r(s,\mathbf{a}) = \sum_{i\in\mathcal{V}} r(s, a_i) + \sum_{(i,j)\in\mathcal{E}} r(s, a_i, a_j)$. Then the policy has the form

$\pi(\mathbf{a}|s) = \frac{1}{Z}\exp\Big(\sum_{i\in\mathcal{V}}\tilde{\psi}_i(s, a_i) + \sum_{(i,j)\in\mathcal{E}}\tilde{\psi}_{i,j}(s, a_i, a_j)\Big),$

which is a pairwise MRF. For instance, in traffic light control, we can define a 2-D grid network and a pairwise reward function. The MRF formulation of the policy effectively reduces the policy space compared with the exponentially large one of the fully connected graph.
A straightforward way to leverage this observation is to define $\pi_\theta(\mathbf{a}^t|s^t)$ as an MRF and then apply a policy gradient algorithm, e.g., in the SAC style: $\nabla_\theta\mathbb{E}_{s^t\sim\mathcal{D}}\mathbb{E}_{\mathbf{a}^t\sim\pi_\theta}[\log\pi_\theta(\mathbf{a}^t|s^t) - Q_\kappa(s^t,\mathbf{a}^t)]$. However, it is still very hard to sample the joint action $\mathbf{a}^t$ from $\pi_\theta(\mathbf{a}^t|s^t)$. In the next section, we resort to embeddings to alleviate this problem.
Recall that the remaining problem is how to sample the joint action from an MRF policy. Classical ways include Markov Chain Monte Carlo methods and variational inference. The former provides the guarantee of producing exact samples from the target density but is computationally intensive; therefore it is not applicable in the multi-agent RL setting, since we need to sample actions in each interaction with the environment. As such, we advocate the second approach. Here we use the mean-field approximation for simplicity of presentation and defer other variational inference methods, e.g., loopy belief propagation, to Appendix B.2. We use an intention propagation network with the embedding of the distribution to represent the update rule of the mean-field approximation.
Mean-field approximation. We hope to approximate $\pi^*(\mathbf{a}|s)$ by the mean-field variational family $p_i$:

$\min_{(p_1,p_2,...,p_N)} KL\Big(\prod_{i=1}^{N} p_i(a_i|s)\,\Big\|\,\pi^*(\mathbf{a}|s)\Big),$

where we omit the superscript $t$ to simplify the notation. We denote the optimal solution of the above problem as $q_i$. Using coordinate ascent variational inference, the optimal solution $q_i$ should satisfy the following fixed-point equation (Bishop, 2006). Since the objective function is (generally) non-convex, such an update converges to a local optimum (Blei et al., 2017).
$q_i(a_i|s) \propto \exp\int\prod_{j\neq i} q_j(a_j|s)\log\pi^*(\mathbf{a}|s)\,d\mathbf{a}_{-i}. \quad (2)$
For simplicity of presentation, in the following discussion we assume that the policy is a pairwise MRF, but the methodology applies to the more general case with a more involved expression. Particularly, we assume $\pi^*(\mathbf{a}|s) = \frac{1}{Z}\exp\big(\sum_{i\in\mathcal{V}}\psi_i(s, a_i) + \sum_{(i,j)\in\mathcal{E}}\psi_{ij}(s, a_i, a_j)\big)$. We plug this into equation 2 and obtain the following fixed-point equation:

$\log q_i(a_i|s) = c_i + \psi_i(s, a_i) + \sum_{j\in\mathcal{N}_i}\int q_j(a_j|s)\psi_{ij}(s, a_i, a_j)\,da_j, \quad (3)$
where $c_i$ is some constant that does not depend on $a_i$.
We can understand this mean-field update rule from the perspective of intention propagation. Equation 3 basically says that each agent cannot make its decision independently. Instead, its policy $q_i$ should depend on the policies of others, particularly its neighbors in the equation. Clearly, if we can construct the intention propagation corresponding to equation 3, the final policy obtained from intention propagation will converge to the mean-field approximation of the joint policy. However, we cannot directly apply this update in our algorithm, since it includes a complicated integral. To this end, in the next section we resort to the embedding of the distribution $q_i$ (Smola et al., 2007), which maps the distributions into a reproducing kernel Hilbert space.
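To make equation 3 concrete in the discrete-action case, the following numpy sketch runs the coordinate-ascent updates for a pairwise MRF; the potentials and the graph are placeholders, and the integral in equation 3 becomes a weighted sum over actions. The embedding view developed next replaces these explicit sums with learned operators on $\tilde{\mu}_j$.

```python
import numpy as np

def mean_field_update(psi_unary, psi_pair, edges, n_iters=10):
    """Mean-field fixed-point iteration for a discrete pairwise MRF (sketch).
    psi_unary[i]     : (A,) unary potential psi_i(s, a_i) for a fixed state s
    psi_pair[(i, j)] : (A, A) pairwise potential psi_ij(s, a_i, a_j)
    edges            : list of undirected edges (i, j)
    Returns the marginals q[i] over actions for each agent i."""
    n, A = len(psi_unary), psi_unary[0].shape[0]
    q = [np.full(A, 1.0 / A) for _ in range(n)]
    neighbors = {i: [] for i in range(n)}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    for _ in range(n_iters):
        for i in range(n):
            logits = psi_unary[i].copy()
            for j in neighbors[i]:
                pair = psi_pair[(i, j)] if (i, j) in psi_pair else psi_pair[(j, i)].T
                logits += pair @ q[j]   # E_{a_j ~ q_j}[psi_ij(a_i, a_j)], cf. equation 3
            logits -= logits.max()       # numerical stability before the softmax
            q[i] = np.exp(logits) / np.exp(logits).sum()
    return q
```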
Embed the update rule. Observe that the fixed-point formulation in equation 3 says that $q_i(a_i|s)$ is a functional of the neighborhood marginal distributions $\{q_j(a_j|s)\}_{j\in\mathcal{N}_i}$, i.e., $q_i(a_i|s) = f(a_i, s, \{q_j\}_{j\in\mathcal{N}_i})$. Denote the $d$-dimensional embedding of $q_j(a_j|s)$ by $\tilde{\mu}_j = \int q_j(a_j|s)\phi(a_j|s)\,da_j$. Notice the form of the feature $\phi$ is not fixed at the moment and will be learned implicitly by the neural network. Following the assumption in Section 3 that there exists a feature space such that the embeddings are injective, we can replace the distribution by its embedding and obtain the fixed-point formulation

$q_i(a_i|s) = \tilde{f}(a_i, s, \{\tilde{\mu}_j\}_{j\in\mathcal{N}_i}). \quad (4)$

For more theoretical guarantees on the kernel embedding, e.g., the convergence rate of the empirical mean of the kernel embedding, please refer to (Smola et al., 2007). Roughly speaking, once there are enough data, we can believe the learned kernel embedding is close enough to the true kernel embedding. Therefore the updates of equation 4 and equation 5 below would converge to the fixed point of equation 2. Recall that, as in Section 3, we can integrate both sides against the feature map $\phi$, which yields $\tilde{\mu}_i = \int q_i(a_i|s)\phi(a_i|s)\,da_i = \int\tilde{f}(a_i, s, \{\tilde{\mu}_j\}_{j\in\mathcal{N}_i})\phi(a_i|s)\,da_i$. Thus we can rewrite it as a new operator on the embedding, which induces a fixed-point equation again: $\tilde{\mu}_i = \tilde{T} \circ (s, \{\tilde{\mu}_j\}_{j\in\mathcal{N}_i})$. In practice, we run this fixed-point update for $M$ iterations:
$\tilde{\mu}_i^m \leftarrow \tilde{T} \circ (s, \{\tilde{\mu}_j^{m-1}\}_{j\in\mathcal{N}_i}), \quad m = 1, ..., M. \quad (5)$
Finally, we output the distribution $q_i$ with $q_i(a_i|s) = \tilde{f}(a_i, s, \{\tilde{\mu}_j^M\}_{j\in\mathcal{N}_i})$. In the next section, we show how to represent these variables by neural networks.
Parameterization by Neural Networks. In general, $\tilde{f}$ and $\tilde{T}$ have a complicated dependency on $\psi$ and $\phi$. Instead of learning such dependency, we directly approximate $\tilde{f}$ and $\tilde{T}$ by neural networks. For instance, we can represent the operator $\tilde{T}$ in equation 5 by $\tilde{\mu}_i = \sigma(W_1 s + W_2\sum_{j\in\mathcal{N}_i}\tilde{\mu}_j)$, where $\sigma$ is a nonlinear activation function and $W_1$, $W_2$ are matrices whose number of rows equals $d$. Interestingly, this is indeed the message-passing form of a Graph Neural Network (GNN) (Hamilton et al., 2017). Thus we can use an $M$-hop (layer) GNN to represent the fixed-point update in equation 5. If the action space is discrete, the output $q_i(a_i|s)$ is a softmax function. In this case $\tilde{f}$ is a fully connected layer with a softmax output. When it is continuous, we can output a Gaussian distribution with the reparametrization trick (Kingma & Welling, 2019). We denote this intention propagation procedure as the intention propagation network $\Lambda_\theta(\mathbf{a}|s)$ with parameter $\theta$ in Figure 1(b). Figure 1(a) illustrates the graph and the message-passing procedure. Agent 1 receives the embeddings (intentions) $\tilde{\mu}_2^{m-1}$, $\tilde{\mu}_5^{m-1}$, $\tilde{\mu}_6^{m-1}$ from its neighbors, then updates its own embedding with the operator $\tilde{T}$ and spreads its new embedding $\tilde{\mu}_1^m$ at the next iteration. Figure 1(b) gives the details of the parameterization of the GNN. Here we use agent 1 as an example. To ease the exposition, we assume agent 1 has just one neighbor, agent 2. Each agent observes its own state $s_i$. After an MLP and softmax layer (we do not sample actions here, but just use the probabilities of the actions), we get an embedding $\tilde{\mu}_i^0$, which is the initial distribution of the policy. Then agent 1 receives the embedding $\tilde{\mu}_2^0$ of its neighbor (agent 2). After a GNN layer that combines the information, e.g., $\tilde{\mu}_1^1 = \mathrm{Relu}[W_1(s_1 + s_2) + W_2(\tilde{\mu}_1^0 + \tilde{\mu}_2^0)]$ ($W_1, W_2$ are shared across all agents as in a GNN), we obtain the new embedding $\tilde{\mu}_1^1$ of agent 1. Notice we also do message passing on the state, since in practice the global state is not available. The second layer proceeds similarly. We defer a detailed discussion and the extension to other neural networks to Appendix B due to the space constraint.
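A minimal PyTorch sketch of a 2-hop intention propagation network for discrete actions is given below; the dense adjacency representation, layer sizes, and the exact aggregation are illustrative assumptions of the construction described above. In our experiments we use the same two-hop structure with 128 hidden units per layer and one fully connected layer on top (Section 5.1).

```python
import torch
import torch.nn as nn

class IntentionPropagation(nn.Module):
    """Sketch of the M-hop intention propagation network (discrete actions).
    states: (N, d_s) per-agent observations; adj: (N, N) 0/1 adjacency matrix
    including self-loops, so the aggregation covers each agent and its neighbors."""

    def __init__(self, d_s, n_actions, d_mu=128, hops=2):
        super().__init__()
        self.init_policy = nn.Sequential(nn.Linear(d_s, d_mu), nn.ReLU(),
                                         nn.Linear(d_mu, n_actions))
        self.W1 = nn.ModuleList([nn.Linear(d_s, d_mu) for _ in range(hops)])
        self.W2 = nn.ModuleList([nn.Linear(n_actions if m == 0 else d_mu, d_mu)
                                 for m in range(hops)])
        self.out = nn.Linear(d_mu, n_actions)

    def forward(self, states, adj):
        mu = torch.softmax(self.init_policy(states), dim=-1)   # mu^0_i
        s_agg = adj @ states                                    # local state sharing
        for W1, W2 in zip(self.W1, self.W2):
            mu = torch.relu(W1(s_agg) + W2(adj @ mu))           # mu^m_i = T(s, {mu^{m-1}_j})
        return torch.softmax(self.out(mu), dim=-1)              # q_i(a_i | s)
```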
4.2 ALGORITHM
We are ready to give the overall algorithm by combining all the pieces. The detailed derivations of $V_i$, $Q_i$ for agent $i$ and the corresponding loss functions are given in Appendix I due to the space constraint. Recall that we have a mean-field approximation $q_i$ of the joint policy, obtained by $M$ iterations of intention propagation. We represent this procedure by an $M$-hop graph neural network with parameter $\theta$ as discussed above. Notice that this factorization is different from the case $\pi(\mathbf{a}|s) = \prod_{i=1}^{N}\pi(a_i|s)$ in (Zhang et al., 2018; Foerster et al., 2018), since $q_i(a_i|s)$ depends on the information of other agents' plans. Using the mean-field approximation $q_i$, we can further decompose $Q = \sum_{i=1}^{N} Q_i$ and $V = \sum_{i=1}^{N} V_i$; see Appendix I. We use neural networks to approximate the $V_i$ and $Q_i$ functions with parameters $\eta_i$ and $\kappa_i$ respectively. As in TD3 (Fujimoto et al., 2018), for each agent $i$ we have a target value network $V_{\bar\eta_i}$ and two $Q_{\kappa_i}$ functions to mitigate overestimation, training them simultaneously with the same data but only selecting the minimum of them as the target in the value update. In the following we denote $q_i(a_i|s)$ as $q_{i,\theta}(a_i|s)$ to explicitly indicate its dependence on the intention propagation network $\Lambda_\theta$. We use $\mathcal{D}$ to denote the replay buffer. The whole algorithm is presented in Algorithm 1.
Loss Functions. The loss of the value function $V_i$:

$J(\eta_i) = \mathbb{E}_{s^t\sim\mathcal{D}}\Big[\tfrac{1}{2}\big(V_{\eta_i}(s^t) - \mathbb{E}_{(a_i^t, a_{\mathcal{N}_i}^t)\sim(q_i, q_{\mathcal{N}_i})}[Q_{\kappa_i}(s^t, a_i^t, a_{\mathcal{N}_i}^t) - \log q_{i,\theta}(a_i^t|s^t)]\big)^2\Big].$

The loss of $Q_i$:

$J(\kappa_i) = \mathbb{E}_{(s^t, a_i^t, a_{\mathcal{N}_i}^t)\sim\mathcal{D}}\Big[\tfrac{1}{2}\big(Q_{\kappa_i}(s^t, a_i^t, a_{\mathcal{N}_i}^t) - \hat{Q}_i(s^t, a_i^t, a_{\mathcal{N}_i}^t)\big)^2\Big],$

where $\hat{Q}_i(s^t, a_i^t, a_{\mathcal{N}_i}^t) = r_i + \gamma\mathbb{E}_{s^{t+1}\sim p(\cdot|s^t,\mathbf{a}^t)}[V_{\bar\eta_i}(s^{t+1})].$

The loss of the policy:

$J(\theta) = \mathbb{E}_{s^t\sim\mathcal{D},\,\mathbf{a}^t\sim\prod_{i=1}^{N} q_i}\Big[\sum_{i=1}^{N}\log q_{i,\theta}(a_i^t|s^t) - \sum_{i=1}^{N} Q_{\kappa_i}(s^t, a_i^t, a_{\mathcal{N}_i}^t)\Big].$
It is interesting to compare these losses with their counterparts in single-agent SAC in Section 3.

• $q_{i,\theta}(a_i|s)$ is the output of the intention propagation network $\Lambda_\theta(\mathbf{a}|s)$ parameterized by a graph neural network. Thus it depends on the policies of other agents.
• $Q_{\kappa_i}$ depends on the actions of agent $i$ and its neighbors, which can also be accomplished by a graph neural network in practice.
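For concreteness, the following sketch computes the per-agent losses above in PyTorch; the batch layout and the `sample_actions` helper, which draws $(a_i, a_{\mathcal{N}_i}, \log q_{i,\theta})$ from the intention propagation network, are assumptions for illustration.

```python
import torch.nn.functional as F

def agent_losses(batch, q_i, v_i, v_i_target, sample_actions, gamma=0.99):
    """Per-agent losses of Algorithm 1 (sketch). Networks and batch layout are
    placeholders: a_i / a_Ni denote the actions of agent i and of its neighbors."""
    s, a_i_buf, a_Ni_buf, r_i, s_next = (batch["s"], batch["a_i"], batch["a_Ni"],
                                         batch["r_i"], batch["s_next"])
    # Q_i regression target: r_i + gamma * V_target_i(s')
    q_target = r_i + gamma * v_i_target(s_next).detach()
    q_loss = F.mse_loss(q_i(s, a_i_buf, a_Ni_buf), q_target)

    # V_i regression target uses fresh actions sampled from q_{i,theta}, q_{N_i,theta}
    a_i, a_Ni, log_qi = sample_actions(s)
    v_loss = F.mse_loss(v_i(s), (q_i(s, a_i, a_Ni) - log_qi).detach())

    # agent i's contribution to the policy loss J(theta) (summed over agents)
    pi_loss = (log_qi - q_i(s, a_i, a_Ni)).mean()
    return q_loss, v_loss, pi_loss
```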
Algorithm 1 Intention Propagation
Inputs: Replay buffer $\mathcal{D}$; $V_i$, $Q_i$ for each agent $i$; intention propagation network $\Lambda_\theta(\mathbf{a}^t|s)$ with outputs $\{q_{i,\theta}\}_{i=1}^N$; learning rates $l_\eta$, $l_\kappa$, $l_\theta$; moving-average parameter $\tau$ for the target network.
for each iteration do
  for each environment step do
    sample $\mathbf{a}^t \sim \prod_i q_{i,\theta}(a_i^t|s^t)$ from the intention propagation network; $s^{t+1} \sim p(s^{t+1}|s^t, \mathbf{a}^t)$
    $\mathcal{D} \leftarrow \mathcal{D} \cup (s_i^t, a_i^t, r_i^t, s_i^{t+1})_{i=1}^N$
  end for
  for each gradient step do
    update $\eta_i$, $\kappa_i$, $\theta$, $\bar\eta_i$:
    $\eta_i \leftarrow \eta_i - l_\eta\nabla J(\eta_i)$, $\kappa_i \leftarrow \kappa_i - l_\kappa\nabla J(\kappa_i)$
    $\theta \leftarrow \theta - l_\theta\nabla J(\theta)$, $\bar\eta_i \leftarrow \tau\eta_i + (1-\tau)\bar\eta_i$
  end for
end for
Handle the Partial Observation: So far, we assumed that agents can observe the global state, while in practice each agent just observes its own state $s_i$. Thus, besides the communication of intentions, we also perform message passing on the state embedding with the graph neural network. The idea of this local state sharing is similar to (Jiang et al., 2020), while the overall structure of our work is quite different; see the discussion in the related work.
5 EXPERIMENT
In this section, we evaluate our method and eight state-of-the-art baselines on more than ten different scenarios from three popular MARL platforms: (1) CityFlow, a traffic signal control environment
(Tang et al., 2019). It is an advanced version of SUMO (Lopez et al., 2018), widely used in the MARL community; (2) the multiple particle environment (MPE) (Mordatch & Abbeel, 2017); and (3) the grid-world platform MAgent (Zheng et al., 2018). Our intention propagation (IP) empirically outperforms all baselines on all scenarios, especially on large-scale problems.
5.1 SETTINGS
We give a brief introduction to the settings of the experiments and defer details such as the hyperparameter tuning of intention propagation and the baselines to Appendix D. Notice all algorithms are tested in the partially observable setting, i.e., each agent can only observe its own state $s_i$.
In the traffic signal control problem (left panel in Figure 2), each traffic light at an intersection is an agent. The goal is to learn policies of traffic lights to reduce the average waiting time and alleviate traffic jams. Graph for CityFlow: the graph is a 2-D grid induced by the map (e.g., Figure 2). The roads are the edges connecting the agents. We can define the cost $-r_i$ as the traveling time of vehicles around intersection $i$; thus the total cost indicates the average traveling time. Obviously, $r_i$ has a close relationship with the actions of the neighbors of agent $i$ but has little dependence on traffic lights far away. Therefore our assumption on the reward function holds. We evaluate different methods on both real-world and synthetic traffic data under different numbers of intersections.
MPE (Mordatch & Abbeel, 2017) and MAgent (Zheng et al., 2018) (Figure 2) are popular particle environments for MARL (Lowe et al., 2017; Jiang et al., 2020). Graph for particle environments: each agent has connections (i.e., edges of the graph) with its k nearest neighbors. Since the graph is dynamic, we update the adjacency matrix every n steps, e.g., n = 5. This is just a small overhead compared with the training of the neural networks. The reward functions also have a local property, since they are explicitly or implicitly affected by the distance between agents. For instance, in heterogeneous navigation, if small agents collide with big agents, they will obtain a large negative reward. Thus their reward depends on the actions of nearby agents. Similarly, in the jungle environment, an agent can attack the agents nearby to obtain a high reward.
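A small sketch of the k-nearest-neighbor adjacency matrix, recomputed every n environment steps, is given below; the agent positions and the value of k are placeholders. In our experiments we pick k = 8 in large-scale problems and k = 4 in small-scale ones (Appendix F).

```python
import numpy as np

def knn_adjacency(positions, k=8, self_loops=True):
    """Build a 0/1 adjacency matrix where each agent is connected to its
    k nearest neighbors by Euclidean distance (sketch)."""
    n = positions.shape[0]
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude self from the neighbor search
    adj = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(dists[i])[:k]:
            adj[i, j] = adj[j, i] = 1.0       # keep the graph symmetric
    if self_loops:
        np.fill_diagonal(adj, 1.0)            # self-loops for the state aggregation
    return adj
```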
Baselines. We compare our method against eight different baselines mentioned in the introduction and related work sections: QMIX (Rashid et al., 2018); MADDPG (Lowe et al., 2017); permutation invariant critic (PIC) (Liu et al., 2019); graph convolutional reinforcement learning (DGN) (Jiang et al., 2020); independent Q-learning (IQL) (Tan, 1993); permutation invariant MADDPG with the data-shuffling mechanism (MADDPGS); COMA (Foerster et al., 2018); and MFQ (Yang et al., 2018). These baselines are reported as the leading algorithms for solving tasks in CityFlow, MPE and MAgent. Among them, DGN and MFQ need communication with neighbors during training and execution. Also notice that PIC assumes the actor can observe the global state; thus in the partially observable setting, each agent in PIC also needs to communicate to get the global state information during training and execution. Further details on the baselines are given in Appendix E.1.
Neural Network and Parameters. Recall the intention propagation network is represented by a GNN. In our experiments, the graph neural network has hop = 2 (2 GNN layers, i.e., M = 2) and 1 fully-connected layer at the top. Each layer contains 128 hidden units. Other hyperparameters are listed in Appendix H.
5.2 COMPARISON TO STATE-OF-THE-ART
In this section, we compare intention propagation (IP) with the other baselines. The experiments are evaluated by average episode reward (Lowe et al., 2017). For CityFlow tasks, the average reward refers to the negative average travel time. All experiments are repeated for 5 runs with different random seeds. We report the mean and standard deviation in the curves. We report the results of six experiments and defer all the others to Appendix G due to the space limit.
CityFlow. We first evaluate our algorithm on the traffic control problem. In particular, we gradually increase the number of intersections (agents) to increase the difficulty of the tasks. Figure 3 presents the performance of different methods on both real-world and synthetic CityFlow data with different numbers of intersections. On the Manhattan City task, our intention propagation (IP) method and the baselines PIC and DGN achieve better reward than the other methods, while our method reaches a higher reward within fewer steps. On the larger task (N=100), both PIC and DGN have large variance and obtain poor performance. The experiment with N=1225 agents is an extremely challenging task. Our algorithm outperforms all baselines by a wide margin. The runner-up is MADDPG with the data-shuffling mechanism; its final performance is around −4646 and suffers from large variance. In contrast, the performance of our method is around −569 (much higher than the baselines). It is clear that, in both real-world and synthetic CityFlow scenarios, the proposed IP method obtains the best performance. We defer further experimental results to Appendix G.
MPE and MAgent. Figure 4 demonstrates the performance of different methods on three other representative scenarios: a small task, cooperative navigation (N=30), and two large-scale tasks, heterogeneous navigation (N=100) and prey and predator (N=100). We run all algorithms long enough (more than 1e6 steps). In all experiments, our algorithm performs best. For cooperative navigation, MADDPGS performs better than MADDPG; the potential improvement comes from the data-shuffling mechanism, which makes MADDPGS more robust to the manually specified order of agents. QMIX performs much better than MADDPG, MADDPGS and IQL. However, its performance is not stable even in the small setting (N=30). DGN is better and more stable than QMIX. However, in the large-scale settings, its performance is much worse than PIC and our intention propagation (IP). Although PIC can solve large-scale tasks, our IP method is still much better. In prey and predator, there are two groups of agents: good agents and adversaries. To make a fair comparison of rewards across methods, we fix the good agents' policies and use all the methods to learn the adversaries' policies. Such a setting is commonly used in many articles (Lowe et al., 2017; Liu et al., 2019).
Stability. Stability is a key criterion for evaluating MARL. In all experiments, our method is quite stable with small variance. For instance, as shown in Figure 3(b), DGN reaches −1210 ± 419 on the CityFlow scenario with N=100 intersections, while our method reaches −465 ± 20 after $1.6\times 10^6$ steps (much better and more stable). The reason is that, to make the joint decision, the agent in our algorithm can adjust its own policy properly by considering other agents' plans.
Ablation Study: We conduct a set of ablation studies related to the effect of the joint policy, the graph, the hop size, the number of neighbors, and the assumption on the reward function. In particular, we find the joint policy is essential for good performance. In CityFlow, the performance of the traffic graph (2-D grid induced by the roadmap) is better than that of the fully connected graph. In MPE and MAgent, we define the adjacency matrix based on the k nearest neighbors and pick k = 8 in large-scale problems and k = 4 in small-scale problems. In all of our experiments, we choose the 2-hop GNN. Because of the space limitation, we just summarize our conclusions here and place the details in Appendix F.
A ORGANIZATION OF THE APPENDIX
In Appendix B, we give the details of the intention propagation network and the parameterization of the GNN. We explain intention propagation from the view of MARL. Finally, we extend intention propagation to other approximations, which converge to other solutions of variational inference. Notice such extensions of the algorithm can also be easily parameterized by neural networks.

In Appendix C, we give the details of the algorithm deferred from the main paper. Appendix D summarizes the configuration of the experiments and the MARL environments. Appendix E gives more details on the baselines and the hyperparameters of the GNN used in our model. Appendix F conducts the ablation study deferred from the main paper. Appendices G and H give more experimental results and the hyperparameters used in the algorithms. In Appendix I, we derive the algorithm and prove Proposition 1.
B INTENTION PROPAGATION NETWORK
B.1 DETAILS ON THE INTENTION PROPAGATION NETWORK
In this section, we give the details of the intention propagation network deferred from the main paper. We first illustrate the message passing of intention propagation derived in Section 4.1. Then we give details on how to construct the graph neural network.
Message passing and explanation from the view of MARL: $\tilde{\mu}_i$ is the embedding of the policy of agent $i$, which represents the intention of agent $i$. At iteration 0, every agent makes an independent decision. The policy of agent $i$ is mapped into its embedding $\tilde{\mu}_i^0$; we call it the intention of agent $i$ at iteration 0. Then agent $i$ sends its plan to its neighbors. In Figure 5, $\tilde{\mu}_i^m$ is the $d$-dimensional ($d = 3$ in this figure) embedding of $q_i$ at the $m$-th iteration of intention propagation. We draw the update of $\tilde{\mu}_1^{(m)}$ as an example. Agent 1 receives the embeddings (intentions) $\tilde{\mu}_2^{m-1}$, $\tilde{\mu}_5^{m-1}$, $\tilde{\mu}_6^{m-1}$ from its neighbors, and then updates its own embedding with the operator $\tilde{T}$. After $M$ iterations, we obtain $\tilde{\mu}_1^M$ and output the policy distribution $q_1$ using equation 4. A similar procedure holds for other agents. At each RL step $t$, we run this procedure (with $M$ iterations) once to generate the joint policy. $M$ is in general small, e.g., $M = 2$ or 3, so the procedure is efficient.
Parameterization of the GNN: We then illustrate the parameterization of the graph neural network in Figure 6. If the action space is discrete, the output $q_i(a_i|s)$ is a softmax function. When it is continuous, we can output a Gaussian distribution (mean and variance) with the reparametrization trick (Kingma & Welling, 2019). Here, we draw a 2-hop (layer) GNN to parameterize it for discrete-action intention propagation. In Figure 6(b), each agent observes its own state $s_i$. After an MLP and softmax layer (we do not sample here, and just use the output probabilities of the actions), we get an embedding $\tilde{\mu}_i^0$, which is the initial distribution of the policy. In the following, we use agent 1 as an example. To ease the exposition, we assume agent 1 has just one neighbor, agent 2. Agent 1 receives the embedding $\tilde{\mu}_2^0$ of its neighbor. After a GNN layer that combines the information, e.g., $\mathrm{Relu}[W_1(s_1 + s_2) + W_2(\tilde{\mu}_1^0 + \tilde{\mu}_2^0)]$, we obtain the new embedding $\tilde{\mu}_1^1$ of agent 1. Notice we also do message passing on the state, since in practice the global state is not available. The second layer proceeds similarly: agent 1 receives the embedding $\tilde{\mu}_2^1$ from its neighbor and gets a new embedding $\tilde{\mu}_1^2$. Then this embedding passes through an MLP+softmax layer and outputs the probability of each action, i.e., $q_1(a_1|s)$.
B.2 EXTENSION TO OTHER VARIATIONAL INFERENCE METHODS AND NEURAL NETWORKS
In this section, we show how to approximate the joint policy with Loopy Belief Propagation in variational inference (Yedidia et al., 2001). This leads to a new form of neural network beyond the vanilla GNN illustrated above.

The objective function in Loopy Belief Propagation is the Bethe free energy (Yedidia et al., 2001). Different from the mean-field approximation, it introduces another variational variable $q_{ij}$, which brings more flexibility to the approximation. The following is the objective function in our case.
min_{q_i, q_{ij}, ij∈E}  − ∑_i (|N_i| − 1) ∫ q_i(a_i|s) log [ q_i(a_i|s) / ψ_i(s, a_i) ] da_i
  + ∑_{ij} ∫ q_{ij}(a_i, a_j|s) log [ q_{ij}(a_i, a_j|s) / ( ψ_{ij}(s, a_i, a_j) ψ_i(s, a_i) ψ_j(s, a_j) ) ] da_i da_j
s.t.  ∫ q_{ij}(a_i, a_j|s) da_j = q_i(a_i|s),  ∫ q_{ij}(a_i, a_j|s) da_i = q_j(a_j|s).   (6)
Solving the above problem, we have the fixed-point algorithm
m_{ij}(a_j|s) ← ∫ ∏_{k∈N_i\j} m_{ki}(a_i|s) ψ_i(s, a_i) ψ_{ij}(s, a_i, a_j) da_i,
q_i(a_i|s) ← ψ_i(s, a_i) ∏_{j∈N_i} m_{ji}(a_i|s).
Similar to the mean-field approximation case, we have
m_{ij}(a_j|s) = f(a_j, s, {m_{ki}}_{k∈N_i\j}),  q_i(a_i|s) = g(a_i, s, {m_{ki}}_{k∈N_i}).
It says that the messages m_{ij} and the marginals q_i are functionals of the messages from neighbors. Denoting the embeddings ν̃_{ij} = ∫ ψ_j(s, a_j) m_{ij}(a_j|s) da_j and µ̃_i = ∫ ψ_i(s, a_i) q_i(a_i|s) da_i, we have
ν̃_{ij} = T̃ ◦ ( s, {ν̃_{ki}}_{k∈N_i\j} ),  µ̃_i = T̃ ◦ ( s, {ν̃_{ki}}_{k∈N_i} ).
Again, we can parameterize the above equations by a (graph) neural network, e.g., ν̃_{ij} = σ( W_1 s + W_2 ∑_{k∈N_i\j} ν̃_{ki} ),  µ̃_i = σ( W_3 s + W_4 ∑_{k∈N_i} ν̃_{ki} ).
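As a sketch of how this alternative message-passing structure could be parameterized, the following PyTorch-style Python snippet implements the edge-message update ν̃_ij = σ(W_1 s + W_2 ∑_{k∈N_i\j} ν̃_ki) and the node readout µ̃_i = σ(W_3 s + W_4 ∑_{k∈N_i} ν̃_ki). All names are illustrative assumptions; this is a schematic of the update rule, not the authors' code.

import torch
import torch.nn as nn

class LoopyBPPropagation(nn.Module):
    # Sketch of the loopy-BP-style embedding update; names are illustrative.
    def __init__(self, state_dim, embed_dim):
        super().__init__()
        self.W1 = nn.Linear(state_dim, embed_dim)
        self.W2 = nn.Linear(embed_dim, embed_dim)
        self.W3 = nn.Linear(state_dim, embed_dim)
        self.W4 = nn.Linear(embed_dim, embed_dim)

    def forward(self, s, edges, iters=2):
        # s: (state_dim,) state vector; edges: list of directed (i, j) pairs,
        # assumed to contain both directions of every undirected edge.
        d = self.W2.out_features
        nu = {e: torch.zeros(d) for e in edges}            # edge messages nu_{i->j}
        for _ in range(iters):
            new_nu = {}
            for (i, j) in edges:
                # aggregate incoming messages nu_{k->i} for k in N_i \ {j}
                incoming = [nu[(k, ii)] for (k, ii) in edges if ii == i and k != j]
                agg = torch.stack(incoming).sum(0) if incoming else torch.zeros(d)
                new_nu[(i, j)] = torch.relu(self.W1(s) + self.W2(agg))
            nu = new_nu
        # node readout mu_i from all incoming messages
        mu = {}
        for i in {i for (i, _) in edges}:
            incoming = [nu[(k, ii)] for (k, ii) in edges if ii == i]
            mu[i] = torch.relu(self.W3(s) + self.W4(torch.stack(incoming).sum(0)))
        return mu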
In a similar way, we can derive different intention propagation algorithms by changing the objective function, corresponding to, e.g., double-loop belief propagation (Yuille, 2002), tree-reweighted belief propagation (Wainwright et al., 2003) and many others.
C ALGORITHM
We present some remarks on the intention propagation algorithm (Algorithm 1) deferred from the main paper.
Remark: To calculate the loss function J(η_i), each agent needs to sample the global state and (a_i, a_{N_i}). Thus we first sample a global state from the replay buffer and then sample all actions a at once using the intention propagation network.
D FURTHER DETAILS ABOUT ENVIRONMENTS AND EXPERIMENTAL SETTINGS
Table 1 summarizes the setting of the tasks in our experiment.
D.1 CITYFLOW
CityFlow (Tang et al., 2019) is an open-source MARL environment for large-scale city traffic signal control 1. After the traffic road map and flow data are fed into the simulator, each vehicle moves from its origin location to its destination. The traffic data contain bidirectional and dynamic flows with turning traffic. We evaluate different methods on both real-world and synthetic traffic data. For the real-world data, we select traffic flow data from the Gudang sub-district, Hangzhou, China and from Manhattan, USA 2. For the synthetic data, we simulate several different road networks: a 7 × 7 grid network (N = 49) and large-scale grid networks with N = 10 × 10 = 100, 15 × 15 = 225, 35 × 35 = 1225. Each traffic light at an intersection is an agent. In the real-world setting (Hangzhou, Manhattan), the graph is a 2-D grid induced by the roadmap. Particularly, the roads are edges which connect the nodes (agents) of the graph. For the synthetic data, the map is an n × n 2-D grid (similar to Figure 7), where edges represent roads and nodes are the traffic lights. We present the experimental results deferred from the main paper in Figure 10.
D.2 MPE
In MPE (Mordatch & Abbeel, 2017) 3, the observation of each agent contains the relative locations and velocities of neighboring agents and landmarks. The number of visible neighbors in an agent's observation is equal to or less than 10.
1 https://github.com/cityflow-project/CityFlow  2 We download the maps from https://github.com/traffic-signal-control/sample-code.  3 To make the environment more computation-efficient, Liu et al. (2019) provided an improved version of MPE. The code is released at https://github.com/IouJenLiu/PIC.
We consider four scenarios in MPE. (1) Cooperative navigation: N agents work together and move to cover L landmarks. If the agents get closer to the landmarks, they obtain a larger reward. In this scenario, the agent observes its own location and velocity, and the relative locations of the nearest 5 landmarks and N agents. The observation dimension is 26. (2) Prey and predator: N slower cooperating agents must chase the faster adversaries around a randomly generated environment with L large landmarks. Note that the landmarks impede the way of all agents and adversaries, which makes the scenario much more challenging. In this scenario, the agent observes its own location and velocity, and the relative locations of the nearest 5 landmarks and 5 preys. The observation dimension is 34. (3) Cooperative push: N cooperating agents are rewarded for pushing a large ball to a landmark. In this scenario, each agent can observe the 10 nearest agents and 5 nearest landmarks. The observation dimension is 28. (4) Heterogeneous navigation: this scenario is similar to cooperative navigation, except that the N agents are divided into N/2 big, slow agents and N/2 small, fast agents. If small agents collide with big agents, they obtain a large negative reward. In this scenario, each agent can observe the 10 nearest agents and 5 nearest landmarks. The observation dimension is 26.
Further details about this environment can be found at https://github.com/IouJenLiu/PIC.
D.3 MAGENT
MAgent (Zheng et al., 2018) is a grid-world platform and serves as another popular environment for evaluating MARL algorithms. Jiang et al. (2020) tested their method on two scenarios: jungle and battle. In jungle, there are N agents and F foods. The agents receive a positive reward if they eat food, but get a higher reward if they attack other agents. This is an interesting scenario, which is called a moral dilemma. In battle, N agents learn to fight against several enemies, which is very similar to the prey and predator scenario in MPE. In our experiment, we evaluate our method on jungle.
In our experiment, the size of the grid-world environment is 30 × 30. Each agent occupies one grid cell and can observe the 11 × 11 grid cells centered at the agent as well as its own coordinates. The actions include moving and attacking along the coordinates. Further details about this environment can be found at https://github.com/geek-ai/MAgent and https://github.com/PKU-AI-Edge/DGN.
E FURTHER DETAILS ON SETTINGS
E.1 DESCRIPTION OF OUR BASELINES
We compare our method with multi-agent deep deterministic policy gradient (MADDPG) (Lowe et al., 2017), a strong actor-critic algorithm based on the framework of centralized training with decentralized execution; QMIX (Rashid et al., 2018), a Q-learning based monotonic value function factorization algorithm; permutation invariant critic (PIC) (Liu et al., 2019), a leading algorithm on MPE yielding identical output irrespective of the agent permutation; graph convolutional reinforcement learning (DGN) (Jiang et al., 2020), a deep Q-learning algorithm based on a deep convolutional graph neural network with multi-head attention, which is a leading algorithm on MAgent; and independent Q-learning (IQL) (Tan, 1993), which decomposes a multi-agent problem into a collection of simultaneous single-agent problems sharing the same environment and usually serves as a surprisingly strong benchmark in mixed and competitive games (Tampuu et al., 2017). In homogeneous settings, the input to the centralized critic in MADDPG is the concatenation of all agents' observations and actions along a specified agent order, which does not hold the property of permutation invariance. We follow a similar setting to (Liu et al., 2019) and shuffle the agents' observations and actions in the training batch 4. COMA (Foerster et al., 2018) directly assumes that the policy is factorized. It calculates a counterfactual baseline to address the credit assignment problem in MARL. In our experiment, since we can observe each reward function, each agent can directly approximate the Q function without the counterfactual baseline. MFQ derives the algorithm from the view of the mean-field game (Yang et al., 2018). Notice that the aim of the mean-field game is to find the Nash equilibrium rather
4This operation doesn’t change the state of the actions.
than the maximization of the total reward of the group. Furthermore, it needs the assumption that agents are identical.
E.2 NEURAL NETWORKS ARCHITECTURE
To learn features from the structural graph built from the spatial distances between agents, we design our graph neural network based on the idea of structure2vec (Dai et al., 2016), a strong graph embedding tool which is an effective and scalable approach for structured data representation through embedding latent variable models into feature spaces. Structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. After using M graph neural network layers, each node can receive information from its M-hop neighbors by message passing. Recently, the attention mechanism has empirically led to more powerful representations on graph data (Veličković et al., 2017; Jiang et al., 2020). We incorporate this idea into our graph neural network. In some settings, such as the heterogeneous navigation scenario from MPE, the observations of different groups of agents are heterogeneous. To handle this issue, we use different nonlinear functions to extract the features from heterogeneous observations and map the observations into a latent layer, then use the same graph neural network to learn the policy for all types of agents. In our experiment, our graph neural network has M = 2 layers and 1 fully-connected layer at the top. Each layer contains 128 hidden units. A sketch of these two components is given below.
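A minimal sketch of the two ideas mentioned above, type-specific encoders for heterogeneous observations and attention-weighted aggregation over neighbors, might look as follows (PyTorch-style Python). The class and parameter names are our own illustration and do not reproduce the exact architecture used in the experiments.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroAttentionLayer(nn.Module):
    # Sketch: type-specific encoders + single-head attention over neighbors (illustrative).
    def __init__(self, obs_dims, hidden=128):
        super().__init__()
        # one encoder per agent type, e.g. obs_dims = {"big": 26, "small": 26}
        self.encoders = nn.ModuleDict({t: nn.Linear(d, hidden) for t, d in obs_dims.items()})
        self.query = nn.Linear(hidden, hidden)
        self.key = nn.Linear(hidden, hidden)
        self.value = nn.Linear(hidden, hidden)

    def forward(self, obs, types, adj):
        # obs: list of per-agent observation vectors (dims may differ by type)
        # types: list of agent-type names; adj: (N, N) 0/1 mask WITH self-loops (adj[i, i] = 1)
        h = torch.stack([F.relu(self.encoders[t](o)) for o, t in zip(obs, types)])
        q, k, v = self.query(h), self.key(h), self.value(h)
        scores = q @ k.t() / h.shape[-1] ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))   # attend only to neighbors
        attn = F.softmax(scores, dim=-1)
        return h + attn @ v                                    # residual neighbor aggregation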
F ABLATION STUDIES
F.1 INDEPENDENT POLICY VS INTENTION PROPAGATION.
We first give a toy example where the independent policy (without communication) fails. To implement such an algorithm, we just replace the intention propagation network by an independent policy network and keep the other parts the same. Think about the 3 × 3 2-D grid in Figure 7, where the global state (which can be observed by all agents) is a constant scalar (thus carries no information). Each agent chooses an action a_i = 0 or 1. The aim is to maximize the reward −(a_1−a_2)^2 − (a_1−a_4)^2 − (a_2−a_3)^2 − ... − (a_8−a_9)^2, i.e., the summation of the reward function over the edges. Obviously the optimal value is 0, and the optimal policy is a_1 = a_2 = ... = a_9 = 0 or a_1 = a_2 = ... = a_9 = 1. However, the independent policy fails, since each agent does not know how its allies pick their actions; thus the learned policy is random. We show the result of this toy example in Figure 7, where intention propagation learns the optimal policy. A small numerical illustration is given below.
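As a quick illustration, the small Python snippet below evaluates the grid reward for a coordinated assignment versus independently random actions; the explicit edge list is our assumption of the 3 × 3 lattice wiring.

import random

# 3x3 lattice: agents 1..9, edges between horizontal/vertical neighbors
edges = [(1, 2), (2, 3), (4, 5), (5, 6), (7, 8), (8, 9),   # rows
         (1, 4), (4, 7), (2, 5), (5, 8), (3, 6), (6, 9)]   # columns

def reward(a):
    # a: dict agent_id -> action in {0, 1}; reward is summed over edges
    return -sum((a[i] - a[j]) ** 2 for i, j in edges)

coordinated = {i: 0 for i in range(1, 10)}
print(reward(coordinated))                      # 0, the optimal value

random.seed(0)
independent = {i: random.randint(0, 1) for i in range(1, 10)}
print(reward(independent))                      # typically negative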
F.2 GRAPH TYPES, NUMBER OF NEIGHBORS, AND HOP SIZE
We conduct a set of ablation studies related to graph types, the number of neighbors, and hop size. Figure 8(a) and Figure 8(b) demonstrate the performance of our method on the traffic graph and on a fully-connected graph for the scenarios (N=49 and N=100) of CityFlow. In the experiment, each agent can only get information from its neighbors through message passing (state embedding and policy embedding). The result makes sense, since the traffic graph represents the structure of the map. Although an agent in the fully-connected graph would obtain global information, it may introduce irrelevant information from agents far away.
Figure 8(c) and Figure 8(d) demonstrate the performance under different numbers of neighbors and hop sizes on cooperative navigation (N=30), respectively. The algorithm with neighbors=8 has the best performance. Again, the fully connected graph (neighbors=30) may introduce irrelevant information from agents far away. Thus its performance is worse than the algorithm whose graph is constructed by K-nearest neighbors. In addition, the fully connected graph introduces more computation in training. In Figure 8(d), we increase the hop size from 1 to 3. The performance of IP with hop=2 is much better than that with hop=1, while IP with hop=3 is just slightly better than that with hop=2. It means the graph neural network with hop size 2 has aggregated enough information.
In Figure 8(e), we test the importance of the K-nearest-neighbor structure. IP(neighbors=3)+random means that we pick 3 agents uniformly at random as the neighbors. Obviously, IP with K-nearest neighbors outperforms IP with the random graph by a large margin. In Figure 8(f), we update the adjacency matrix every 1, 5, or 10 steps. IP(neighbors=8) denotes that we update the adjacency matrix every step, while IP(neighbors=8)+reset(5) and IP(neighbors=8)+reset(10) denote that we update the adjacency matrix every 5 and 10 steps respectively. Obviously, IP(neighbors=8) has the best result, and IP(neighbors=8)+reset(5) is better than IP(neighbors=8)+reset(10). The result makes sense, since the adjacency matrix is more accurate if the update interval is smaller.
F.3 ASSUMPTION VIOLATION
The aforementioned experimental evaluations are based on the mild assumption that the actions of agents that are far away do not affect the learner because of their physical distance. It would be interesting to see the performance when this assumption is violated. As such, we modify the reward in the experiment of cooperative navigation. In particular, the reward is defined by r = r_1 + r_2, where r_1 encourages the agents to cover (get close to) landmarks and r_2 is a log function of the distances between agents (farther agents have larger impact). To make a violation, we let r_2 dominate the reward. We conduct the experiments with hop = 1, 2, 3. Figure 9 shows that the rewards obtained by our method are 4115 ± 21, 4564 ± 22, and 4586 ± 25 respectively. This is expected in this scenario, since we should use a large hop size to collect information from the far-away agents.
G FURTHER EXPERIMENTAL RESULTS
For most of the experiments, we run them long enough, for 1 million to 1.5 million steps, and then stop (even though in some cases our algorithm has not converged to its asymptotic result), since every experiment in MARL may cost several days. We present the results on CityFlow in Figure 10. Figure 11 provides the experimental results on the cooperative navigation instances with N = 15, N = 30 and N = 200 agents. Note that the instance with N = 200 is a large-scale and challenging multi-agent reinforcement learning setting (Chen et al., 2018; Liu et al., 2019), which typically needs several days to run millions of steps. It is clear that IQL, MADDPG, and MADDPGS perform well in the small setting (N=15); however, they fail in the large-scale instances (N = 30 and N = 200). In the instance with N = 30, MADDPGS performs better than MADDPG. The potential reason is that, with the help of shuffling, MADDPGS is more robust to the manually specified order of agents. Although QMIX performs well in the instances with N = 15 and N = 30, it has large variance in both settings. DGN, using a graph convolutional network, holds the property of permutation invariance and obtains much better performance than QMIX in these two settings. However, it also fails to solve the large-scale setting with N = 200 agents. Empirically, after 1.5 × 10^6 steps, PIC obtains a large reward (−425085 ± 31259) on this large-scale setting. Despite all this, the proposed intention propagation (IP) approaches −329229 ± 14730 and is much better than PIC. Furthermore, Figure 11 shows the results of different methods on (d) jungle (N=20, F=12) and (e) prey and predator (N=100). The experimental results show that our method beats all baselines on these two tasks. On the scenario of cooperative push (N=100), as shown in Figure 11(f), it is clear that DGN, QMIX, IQL, MADDPG and MADDPGS all fail to converge to good rewards after 1.5 × 10^6 environment steps. In contrast, PIC and the proposed IP method obtain much better rewards than these baselines. Limited by computational resources, we only show the long-term performance of the best two methods. Figure 11(f) shows that IP is slightly better than PIC in this setting.
G.1 POLICY INTERPRETATION
Explicitly analyzing the policy learned by a deep multi-agent reinforcement learning algorithm is a challenging task, especially for large-scale problems. We follow similar ideas from (Zheng et al., 2019) and analyze the learned policy on CityFlow in the following way: we select the same period of environment steps within [210000, 1600000] and group these steps into 69 intervals (each interval contains about 20000 steps). We compute the ratio of vehicle volume on each movement and the sampled action volume from the learned policy (each movement can be assigned to one action according to the internal function in CityFlow). We define the ratio of vehicle volume over all movements as the vehicle volume distribution, and the ratio of sampled action volume from the learned policy over all movements as the sampled action distribution. A good MARL algorithm is expected to have the property that these two distributions are very similar over a period of time. Figure 12 reports their KL divergence by interval. It is clear that the proposed intention propagation method (IP) obtains the lowest KL divergence (much better than the state-of-the-art baselines). Because the KL divergence is not a symmetric metric, we also calculate their Euclidean distances. Specifically, the distance of our method is 0.0271 while DGN is 0.0938 and PIC is 0.0933. A small sketch of this evaluation step is given below.
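A sketch of this evaluation step in Python: for each interval we form the two empirical distributions over movements and compute their KL divergence and Euclidean distance. The helper names and the toy counts are illustrative only.

import numpy as np

def normalized(counts, eps=1e-8):
    p = np.asarray(counts, dtype=float) + eps
    return p / p.sum()

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

# vehicle_counts[k] and action_counts[k]: per-movement counts in interval k (illustrative data)
vehicle_counts = [[120, 80, 40, 60], [100, 90, 50, 70]]
action_counts  = [[110, 85, 45, 55], [95, 95, 45, 80]]

for k, (v, a) in enumerate(zip(vehicle_counts, action_counts)):
    p, q = normalized(v), normalized(a)
    print(k, kl_divergence(p, q), np.linalg.norm(p - q))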
H HYPERPARAMETERS
Parameters of the environments. For the max episode length, we follow settings similar to those in the baselines (Lowe et al., 2017). Particularly, we set it to 25 for MPE and to 100 for CityFlow. For MAgent, we find that setting the max episode length to 25 is better than 100. All the methods share the same setting.
We list the ranges of the hyperparameters that we tune in all baselines and in intention propagation: γ: {0.95, 0.98, 0.99, 0.999}; learning rate: {1, 5, 10, 100} × 1e-4; activation function: {relu, gelu, tanh}; batch size: {128, 256, 512, 1024}; gradient steps: {1, 2, 4, 8}; number of hidden units in the MLP: {32, 64, 128, 256, 512}; number of layers in the MLP: {1, 2, 3} in all experiments. In QMIX, the GRU hidden units are {64, 128}, with a fully connected layer before and after the GRU. The hypernetwork and mixing network are both single-layer networks (64 hidden units with ReLU activation, following the QMIX paper). The parameters of intention propagation are reported in Table 2.
I DERIVATION
I.1 PROOF OF PROPOSITION 1
We prove the result by induction using the backward view. To see that, plug r(s^t, a^t) = ∑_{i=1}^N r_i(s^t, a^t_i, a^t_{N_i}) into the distribution of the optimal policy defined in Section 3:
p(τ) = [ p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) ] exp( ∑_{t=0}^T ∑_{i=1}^N r_i(s^t, a^t_i, a^t_{N_i}) ).
Recall that the goal is to find the best approximation π(a^t|s^t) such that the trajectory distribution p̂(τ) induced by this policy matches the optimal trajectory probability p(τ). Thus we minimize the KL divergence between them, min_π D_KL(p̂(τ)||p(τ)), where p̂(τ) = p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) π(a^t|s^t). We can do the optimization w.r.t. π(a^t|s^t) as in (Levine, 2018) and obtain a backward algorithm on the policy π^*(a^t|s^t) (see equation 13 in Appendix I.2):
π^*(a^t|s^t) = (1/Z) exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − ∑_{t'=t+1}^T log π(a^{t'}|s^{t'}) ] ).   (7)
Using equation 7, when t = T, the optimal policy is
π^*(a^T|s^T) = (1/Z) exp( ∑_{i=1}^N r_i(s^T, a^T_i, a^T_{N_i}) ).
Obviously, it satisfies the form π^*(a^T|s^T) = (1/Z) exp( ∑_{i=1}^N ψ_i(s^T, a^T_i, a^T_{N_i}) ).
Now suppose that from step t+1 to T we have
π^*(a^{t'}|s^{t'}) = (1/Z) exp( ∑_{i=1}^N ψ_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) )   (8)
for t' = t+1, ..., T.
Recall that we have the result
π^*(a^t|s^t) = (1/Z) exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − ∑_{t'=t+1}^T log π^*(a^{t'}|s^{t'}) ] ).   (9)
Now plugging equation 8 into equation 9, we have
π^*(a^t|s^t) = (1/Z) exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − ∑_{t'=t+1}^T ∑_{i=1}^N ψ_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) + C ] ),   (10)
where C is some constant related to the normalization term. Thus, we define a new term
ψ̃_i(s^t, a^t_i, a^t_{N_i}) = E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − ∑_{t'=t+1}^T ψ_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) ].   (11)
Then obviously π^*(a^t|s^t) satisfies the required form by absorbing the constant C into the normalization term. Thus we have the result.
I.2 DERIVATION OF THE ALGORITHM
We start the derivation with the minimization of the KL divergence KL(p̂(τ)||p(τ)), where
p(τ) = [ p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) ] exp( ∑_{t=0}^T ∑_{i=1}^N r_i(s^t, a^t_i, a^t_{N_i}) ),  p̂(τ) = p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) π(a^t|s^t).
KL(p̂(τ)||p(τ)) = −E_{τ∼p̂(τ)} ∑_{t=0}^T ( ∑_{i=1}^N r_i(s^t, a^t_i, a^t_{N_i}) − log π(a^t|s^t) )
  = −∑_τ [ p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) π(a^t|s^t) ] ∑_{t=0}^T ( ∑_{i=1}^N r_i(s^t, a^t_i, a^t_{N_i}) − log π(a^t|s^t) ).   (12)
Now we optimize the KL divergence w.r.t. π(·|s^t). Considering the constraint ∑_j π(j|s^t) = 1, we introduce a Lagrange multiplier λ( ∑_{j=1}^{|A|} π(j|s^t) − 1 ) (rigorously speaking, we also need the constraint that each element of π is nonnegative, but we will see that the optimal value satisfies this constraint automatically). We take the gradient of KL(p̂(τ)||p(τ)) + λ( ∑_{j=1}^{|A|} π(j|s^t) − 1 ) w.r.t. π(·|s^t), set it to zero, and obtain
log π^*(a^t|s^t) = E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − ∑_{t'=t+1}^T log π(a^{t'}|s^{t'}) ] − 1 + λ.
Therefore
π^*(a^t|s^t) ∝ exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − ∑_{t'=t+1}^T log π(a^{t'}|s^{t'}) ] ).
Since ∑_j π(j|s^t) = 1, we have
π^*(a^t|s^t) = (1/Z) exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − ∑_{t'=t+1}^T log π(a^{t'}|s^{t'}) ] ).   (13)
For convenience, we define the soft V function and Q function as in (Levine, 2018), and will show later how to decompose them into V_i and Q_i.
V(s^{t+1}) := E[ ∑_{t'=t+1}^T ∑_{i=1}^N r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − log π(a^{t'}|s^{t'}) | s^{t+1} ],
Q(s^t, a^t) := ∑_{i=1}^N r_i(s^t, a^t_i, a^t_{N_i}) + E_{p(s^{t+1}|s^t, a^t)}[ V(s^{t+1}) ].   (14)
Thus V(s^t) = E_π[ Q(s^t, a^t) − log π(a^t|s^t) ]. The optimal policy is π^*(a^t|s^t) = exp(Q(s^t, a^t)) / ∫ exp(Q(s^t, a^t)) da^t, obtained by plugging the definition of Q into equation 13.
Recall that in Section 4.1 we approximated the optimal joint policy by the mean-field approximation ∏_{i=1}^N q_i(a_i|s). We now plug this into the definitions in equation 14 and consider the discount factor. Notice that it is easy to incorporate the discount factor by defining an absorbing state, where each transition has probability (1 − γ) of going to that state. Thus we have
V(s^{t+1}) := E[ ∑_{t'=t+1}^T ( ∑_{i=1}^N r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − ∑_{i=1}^N log q_i(a^{t'}_i|s^{t'}) ) | s^{t+1} ],
Q(s^t, a^t) := ∑_{i=1}^N r_i(s^t, a^t_i, a^t_{N_i}) + γ E_{p(s^{t+1}|s^t, a^t)}[ V(s^{t+1}) ].   (15)
Thus we can further decompose V and Q into Vi and Qi. We define Vi and Qi in the following way.
V_i(s^{t+1}) = E[ ∑_{t'=t+1}^T ( r_i(s^{t'}, a^{t'}_i, a^{t'}_{N_i}) − log q_i(a^{t'}_i|s^{t'}) ) | s^{t+1} ],
Q_i(s^t, a^t_i, a^t_{N_i}) = r_i(s^t, a^t_i, a^t_{N_i}) + γ E_{p(s^{t+1}|s^t, a^t)}[ V_i(s^{t+1}) ].
Obviously we have V = ∑_{i=1}^N V_i and Q = ∑_{i=1}^N Q_i.
For V_i, according to our definition, we obtain
V_i(s^t) = E_{a^t∼∏_{i=1}^N q_i}[ r_i(s^t, a^t_i, a^t_{N_i}) − log q_i(a^t_i|s^t) + E_{p(s^{t+1}|s^t, a^t)} V_i(s^{t+1}) ].   (16)
Now we relate it to Q_i, and have
V_i(s^t) = E_{a^t∼∏_{i=1}^N q_i}[ Q_i(s^t, a^t_i, a^t_{N_i}) − log q_i(a^t_i|s^t) ] = E_{(a_i, a_{N_i})∼(q_i, q_{N_i})} Q_i(s^t, a^t_i, a^t_{N_i}) − E_{a_i∼q_i} log q_i(a^t_i|s^t).
This suggests that we should construct the loss functions on V_i and Q_i in the following way. In the following, we use parametric families (e.g., neural networks) characterized by η_i and κ_i to approximate V_i and Q_i respectively.
J(η_i) = E_{s^t∼D}[ (1/2) ( V_{η_i}(s^t) − E_{(a^t_i, a^t_{N_i})∼(q_i, q_{N_i})}[ Q_{κ_i}(s^t, a^t_i, a^t_{N_i}) − log q_i(a^t_i|s^t) ] )^2 ],
J(κ_i) = E_{(s^t, a^t_i, a^t_{N_i})∼D}[ (1/2) ( Q_{κ_i}(s^t, a^t_i, a^t_{N_i}) − Q̂_i(s^t, a^t_i, a^t_{N_i}) )^2 ],   (17)
where Q̂_i(s^t, a^t_i, a^t_{N_i}) = r_i + γ E_{s^{t+1}∼p(s^{t+1}|s^t, a^t)}[ V_{η_i}(s^{t+1}) ].
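These two losses translate almost directly into code. The following PyTorch-style sketch computes J(η_i) and J(κ_i) for one agent from a sampled batch; the function signatures and the way neighbor actions are batched are our own assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def value_loss(V_i, Q_i, log_q_i, s, a_i, a_Ni):
    # J(eta_i): V_i(s) should match E[Q_i(s, a_i, a_Ni) - log q_i(a_i|s)],
    # with a_i, a_Ni sampled from the current mean-field policies q_i, q_Ni.
    target = (Q_i(s, a_i, a_Ni) - log_q_i).detach()
    return 0.5 * F.mse_loss(V_i(s), target)

def q_loss(Q_i, V_i_target, r_i, s, a_i, a_Ni, s_next, gamma=0.99):
    # J(kappa_i): soft Bellman backup using the (target) value network.
    with torch.no_grad():
        q_hat = r_i + gamma * V_i_target(s_next)
    return 0.5 * F.mse_loss(Q_i(s, a_i, a_Ni), q_hat)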
Now we are ready to derive the update rule of the policy, i.e., the intention propagation network.
Recall that the intention propagation network is actually a mean-field approximation of the joint policy,
min_{p_1, p_2, ..., p_N} KL( ∏_{i=1}^N p_i(a_i|s) || π^*(a|s) ).
This is an optimization over the functions p_i rather than over certain parameters. We have proved that after M iterations of intention propagation, we output a nearly optimal solution q_i.
In the following, we demonstrate how to update the parameters θ of the propagation network Λ_θ(a^t|s^t) when we use a neural network to approximate it. Again we minimize the KL divergence
min_θ E_{s^t} KL( ∏_{i=1}^N q_{i,θ}(a^t_i|s^t) || π^*(a^t|s^t) ).
Plugging π^*(a^t|s^t) = exp(Q(s^t, a^t)) / ∫ exp(Q(s^t, a^t)) da^t into the KL divergence, it is easy to see that, by the definition of the KL divergence, this is equivalent to the following optimization problem:
max_θ E_{s^t}[ E_{a^t∼∏ q_{i,θ}(a^t_i|s^t)}[ ∑_{i=1}^N Q_{κ_i}(s^t, a^t_i, a^t_{N_i}) − ∑_{i=1}^N log q_{i,θ}(a^t_i|s^t) ] ].
Thus we sample states from the replay buffer and obtain the loss of the policy
J(θ) = E_{s^t∼D, a^t∼∏_{i=1}^N q_{i,θ}(a^t_i|s^t)}[ ∑_{i=1}^N log q_{i,θ}(a^t_i|s^t) − ∑_{i=1}^N Q_{κ_i}(s^t, a^t_i, a^t_{N_i}) ]. | 1. What is the main contribution of the paper in multi-agent reinforcement learning?
2. What are the strengths and weaknesses of the proposed multi-hop communication method?
3. How does the reviewer assess the novelty and similarity of the proposed method compared to prior works?
4. What are the concerns regarding the method's ability to handle complex multi-agent settings and partial observability?
5. Are there any questions or suggestions for improvement regarding the empirical results and experimental design? | Review | Review
The paper proposes a multi-hop communication method for multi-agent reinforcement learning. This method is based on loosely coupled reward structures among agents, which, as far as I am concerned, generally hold in complex multi-agent settings. The authors use experiments on CityFlow, MPE, and MAgent to demonstrate that their method can outperform the SoTA methods and is scalable. The empirical results are impressive. However, there are some concerns regarding the method that lead to my overall negative rating.
Firstly, and most importantly: although the authors emphasize that they are communicating the intentions of agents, I think their method is quite similar to those communicating local observations, like NDQ (https://arxiv.org/abs/1910.05366), DGN, or CollaQ (https://arxiv.org/abs/2010.08531). One way to interpret the proposed communicating network structure is as a normal multi-hop communication mechanism, but only with a softmax activation function.
Compared to previous works studying communication of local observations, the proposed work (1) needs to address the problems induced by the joint policy, like sampling from it. The authors use a variational inference approach to conduct sampling. However, this approach may hurt the scalability. And it (2) requires agents to have access to the global state. For partially observable environments, the proposed method needs to rely on DGN.
Some other points: (1) I was expecting ablation studies where DGN is ablated on partially observable environments. (2) Some parts in the method section are hard to follow. |
ICLR | Title
Intention Propagation for Multi-agent Reinforcement Learning
Abstract
A hallmark of an AI agent is to mimic human beings to understand and interact with others. In this paper, we propose a collaborative multi-agent reinforcement learning algorithm to learn a joint policy through the interactions over agents. To make a joint decision over the group, each agent makes an initial decision and tells its policy to its neighbors. Then each agent modifies its own policy properly based on the received messages and spreads out its plan. As this intention propagation procedure goes on, we prove that it converges to a mean-field approximation of the joint policy within the framework of neural-embedded probabilistic inference. We evaluate our algorithm on several large-scale challenging tasks and demonstrate that it outperforms previous state-of-the-art methods.
1 INTRODUCTION
Collaborative multi-agent reinforcement learning is an important sub-field of multi-agent reinforcement learning (MARL), where the agents learn to coordinate to achieve joint success. It has wide applications in traffic control (Kuyer et al., 2008), autonomous driving (Shalev-Shwartz et al., 2016) and smart grids (Yang et al., 2018). To learn coordination, interactions between agents are indispensable. For instance, humans can reason about others' behaviors or learn other people's intentions through communication and then determine an effective coordination plan. However, how to design such an interaction mechanism in a principled way, and at the same time solve large-scale real-world applications, is still a challenging problem.
Recently, there is a surge of interest in solving the collaborative MARL problem (Foerster et al., 2018; Qu et al., 2019; Lowe et al., 2017). Among them, joint policy approaches have demonstrated their superiority (Rashid et al., 2018; Sunehag et al., 2018; Oliehoek et al., 2016). A straightforward approach is to replace the action in single-agent reinforcement learning by the joint action a = (a_1, a_2, ..., a_N), but it obviously suffers from the exponentially large action space. Thus several approaches have been proposed to factorize the joint action space to mitigate this issue, which can be roughly grouped into two categories:
• Factorization on the policy. This approach explicitly assumes that π(a|s) := ∏_{i=1}^N π_i(a_i|s), i.e., the policies are independent (Foerster et al., 2018; Zhang et al., 2018). To mitigate the instability issue caused by the independent learners, it generally needs a centralized critic. • Factorization on the value function. This approach has a similar spirit but factorizes the joint value function into several utility functions, each just involving the actions of one agent (Rashid et al., 2018; Sunehag et al., 2018).
However, these two approaches lack interaction between agents, since in their algorithms agent i does not care about the plan of agent j. Indeed, they may suffer from a phenomenon called relative over-generalization in game theory, observed by Wei & Luke (2016); Castellini et al. (2019); Palmer et al. (2018). Approaches based on the coordination graph would effectively prevent such cases, where the value function is factorized as a summation of utility functions on pairwise or local joint actions (Guestrin et al., 2002; Böhmer et al., 2020). However, they can only be applied to discrete-action, small-scale games.
Furthermore, despite the empirical success of the aforementioned work in certain scenarios, it still lacks theoretical insight. In this work, we only make a simple yet realistic assumption: the reward function r_i of each agent i just depends on its individual action and the actions of its neighbors (and the state), i.e.,
r_i(s, a) = r_i(s, a_i, a_{N_i}),   (1)
where we use N_i to denote the neighbors of agent i and s to denote the global state. It says that the goal or decision of an agent is explicitly influenced by a small subset N_i of other agents. Note that such an assumption is reasonable in lots of real scenarios. For instance,
• The traffic light at an intersection makes the decision on phase changing mainly relying on the traffic flow around it and the policies of its neighboring traffic lights. • The main goal of a defender in a soccer game is to tackle the opponent's attacker, while he rarely needs to pay attention to the opponent goalkeeper's strategy.
Based on the assumption in equation 1, we propose a principled multi-agent reinforcement learning algorithm in the framework of probabilistic inference, where the objective is to maximize the long term reward of the group, i.e., ∑_{t=0}^∞ ∑_{i=1}^N γ^t r^t_i (see details in Section 4).
Note that since each agent's reward depends on its neighbors, we still need a joint policy to maximize the global reward through interactions. In this paper, we derive an iterative procedure for such interaction to learn the joint policy in collaborative MARL and name it intention propagation. Particularly,
• In the first round, each agent i makes an independent decision and spreads out its plan µ̃_i (we name it the intention) to its neighbors. • In the second round, agent i changes its initial intention properly based on its neighbors' intentions µ̃_j, j ∈ N_i, and propagates its intention µ̃_i again. • In the third round, it changes the decision made in the second round with a similar argument. • As this procedure goes on, we show that the final output of the agents' policies converges to the mean field
approximation (the variational inference method from the probabilistic graphical model (Bishop, 2006)) of the joint policy.
In addition, this joint policy has the form of a Markov Random Field induced by the locality of the reward function (Proposition 1). Therefore, such a procedure is computationally efficient when the underlying graph is sparse, since in each round each agent just needs to care about what its neighbors intend to do. Remark: (1) Our work is not related to the mean-field game (MFG) (Yang et al., 2018). The goal of the MFG is to find a Nash equilibrium, while our work aims at the optimal joint policy in the collaborative game. Furthermore, MFG generally assumes agents are identical and interchangeable. When the number of agents goes to infinity, MFG can view the state of the other agents as a population state distribution. In our problem, we do not have such assumptions.
(2) Our analysis is not limited to the mean-field approximation. When we change the message passing structure of intention propagation, we can show that it converges to other approximations of the joint policy, e.g., loopy belief propagation in variational inference (Yedidia et al., 2001) (see Appendix B.2).
Contributions: (1) We propose a principled method named intention propagation to solve the joint policy collaborative MARL problem; (2) Our method is computationally efficient, which can scale up to one thousand agents and thus meets the requirement of real applications; (3) Empirically, it outperforms state-of-the-art baselines with a wide margin when the number of agents is large; (4) Our work builds a bridge between MARL and neural embedded probabilistic inference, which would lead to new algorithms beyond intention propagation.
Notation: s^t_i and a^t_i represent the state and action of agent i at time step t. The neighbors of agent i are represented as N_i. We denote X as a random variable with domain X and refer to instantiations of X by the lower case character x. We denote a density on X by p(x) and denote the space of all such densities by P.
2 RELATED WORK
We first discuss the work on factorized approaches to the joint policy. COMA designs a MARL algorithm based on the actor-critic framework with independent actors π_i(a_i|s), where the joint policy is factorized as π(a|s) = ∏_{i=1}^N π_i(a_i|s) (Foerster et al., 2018). MADDPG considers MARL in cooperative or competitive settings, where it creates a critic for each agent (Lowe et al., 2017). Other similar works include (de Witt et al., 2019; Wei et al., 2018). Another way is to factorize the value function into several utility functions. Sunehag et al. (2018) assume that the overall Q function can be factorized as Q(s, a_1, a_2, ..., a_N) = ∑_{i=1}^N Q_i(s_i, a_i). QMIX extends this work to include a richer class of functions, where it assumes the overall Q function is a monotonic function w.r.t. each Q_i(s_i, a_i) (Rashid et al., 2018). Similarly, Son et al. (2019) further relax the structural constraint on the joint value function. However, these factorized methods suffer from the relative overgeneralization issue (Castellini et al., 2019; Palmer et al., 2018). Generally speaking, it pushes the agents to underestimate a certain action because of the low rewards they receive, while they could get a higher one by perfectly coordinating.
A middle ground between the (fully) joint policy and the factorized policy is the coordination graph (Guestrin et al., 2002), where the value function is factorized as a summation of utility functions on pairwise actions. Böhmer et al. (2020); Castellini et al. (2019) combine deep learning techniques with the coordination graph. This addresses the issue of relative overgeneralization, but still has two limitations, especially in the large-scale MARL problem. (1) The max-sum algorithm can only be implemented in a discrete action space, since it needs a max-sum operation on the actions of the Q function. (2) Even in the discrete action case, each step of the Q-learning has to run several loops of the max-sum operation over the whole graph if there is a cycle in the graph. Our algorithm can handle both discrete and continuous action spaces and alleviates the scalability issue by designing an intention propagation network.
Another category of MARL considers communication among agents. The attention mechanism is used to decide when and with whom to communicate (Das et al., 2018). Foerster et al. (2016) propose an end-to-end method to learn a communication protocol. In (Liu et al., 2019; Chu et al., 2020), each agent sends its action information to its neighbors. In addition, Chu et al. (2020) require a strong assumption that the MDP has the spatial-temporal Markov property. However, they utilize the neighbors' action information in a heuristic way, and thus it is unclear what the agents are learning (e.g., do they learn the optimal joint policy to maximize the group reward?). Jiang et al. (2020) propose DGN, which uses a GNN to spread the state embedding information to neighbors. However, each agent still uses independent Q-learning to learn its policy and neglects other agents' plans. In contrast, we propose a principled algorithm where each agent makes its decision considering other agents' plans. Such a procedure can be parameterized by a GNN and other neural networks (see Section 4.1 and Appendix B.2). We prove its convergence to the solution of variational inference methods.
3 BACKGROUNDS
Probabilistic Reinforcement Learning: Probabilistic reinforcement learning (PRL) (Levine, 2018) is our building block. PRL defines the trajectory τ up to time step T as τ = [s^0, a^0, s^1, a^1, ..., s^T, a^T, s^{T+1}]. The probability distribution of the trajectory τ induced by the optimal policy is defined as p(τ) = [ p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) ] exp( ∑_{t=0}^T r(s^t, a^t) ), while the probability of the trajectory τ under a policy π(a|s) is defined as p̂(τ) = p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) π(a^t|s^t). The objective is to minimize the KL divergence between p̂(τ) and p(τ). It is equivalent to the maximum entropy reinforcement learning objective
max_π J(π) = ∑_{t=0}^T E[ r(s^t, a^t) + H(π(a^t|s^t)) ],
where we omit the discount factor γ and the regularization factor α of the entropy term, since it is easy to incorporate them into the transition and reward respectively. Notice that in this framework the max operator in the Bellman optimality equation is replaced by the softmax operator, and thus the optimal policy is a softmax function of the Q function (Haarnoja et al., 2017). This framework subsumes state-of-the-art algorithms such as soft actor-critic (SAC) (Haarnoja et al., 2018). In each iteration, SAC optimizes the following loss functions of Q, π, and V respectively:
E_{(s^t, a^t)∼D}[ Q(s^t, a^t) − r(s^t, a^t) − γ E_{s^{t+1}∼p}[V(s^{t+1})] ]^2,  E_{s^t∼D} E_{a^t∼π}[ log π(a^t|s^t) − Q(s^t, a^t) ],
E_{s^t∼D}[ V(s^t) − E_{a^t∼π_θ}[ Q(s^t, a^t) − log π(a^t|s^t) ] ]^2,  where D is the replay buffer.
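These three objectives can be written compactly in code. The sketch below (PyTorch-style, with our own variable names) shows the three SAC losses as stated above, assuming Q, V, a target V, and a policy that returns sampled actions together with their log-probabilities.

import torch
import torch.nn.functional as F

def sac_losses(Q, V, V_target, policy, batch, gamma=0.99):
    s, a, r, s_next = batch                          # tensors sampled from the replay buffer D
    # Q loss: (Q(s, a) - r - gamma * V_target(s'))^2
    q_target = (r + gamma * V_target(s_next)).detach()
    loss_q = F.mse_loss(Q(s, a), q_target)
    # policy loss: E[log pi(a|s) - Q(s, a)] with freshly sampled actions
    a_new, log_pi = policy(s)
    loss_pi = (log_pi - Q(s, a_new)).mean()
    # V loss: (V(s) - E[Q(s, a) - log pi(a|s)])^2
    v_target = (Q(s, a_new) - log_pi).detach()
    loss_v = F.mse_loss(V(s), v_target)
    return loss_q, loss_pi, loss_v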
Function Space Embedding of Distributions: In our work, we use the tool of embeddings in a Reproducing Kernel Hilbert Space (RKHS) to design the intention propagation procedure (Smola et al., 2007). We let φ(X) be an implicit feature map and X be a random variable with distribution p(x). The embedding of p(x) is given by µ_X := E_X[φ(X)] = ∫ φ(x)p(x)dx, where the distribution is mapped to its expected feature map. By assuming that there exists a feature space such that the embeddings are injective, we can treat the embedding µ_X of the density p(x) as a sufficient statistic of the density, i.e., any information we need from the density is preserved in µ_X (Smola et al., 2007). Such an injectivity assumption generally holds under mild conditions (Sriperumbudur et al., 2008). This property is important since we can reformulate a functional f : P → R of p(·) using the embedding only, i.e., f(p(x)) = f̃(µ_X). It can also be generalized to the operator case. In particular, applying an operator T : P → R^d to a density can be equivalently carried out using its embedding, T ◦ p(x) = T̃ ◦ µ_X, where T̃ : F → R^d is the alternative operator working on the embedding. In practice, µ_X, f̃ and T̃ have complicated dependence on φ. As such, we approximate them by neural networks, which is known as the neural embedding approach of distributions (Dai et al., 2016).
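In practice the embedding is estimated from samples. The tiny numpy sketch below shows the empirical version µ̂_X = (1/n) ∑_k φ(x_k) for an explicit, hand-picked feature map φ; in the paper φ is implicit and learned, so this is only an illustration.

import numpy as np

def phi(x):
    # an explicit, purely illustrative feature map; in the paper phi is implicit
    return np.array([x, x ** 2, np.sin(x)])

rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.5, size=1000)

mu_hat = np.mean([phi(x) for x in samples], axis=0)   # empirical embedding of p(x)
print(mu_hat)                                         # approximates [E[X], E[X^2], E[sin X]]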
4 OUR METHOD
In this section, we present our method, intention propagation, for collaborative multi-agent reinforcement learning. To begin with, we formally define the problem as a networked MDP. The network is characterized by a graph G = (V, E), where each vertex i ∈ V represents an agent and the edge ij ∈ E means a communication link between agents i and j. We say i, j are neighbors if they are connected by this edge. The corresponding networked MDP is characterized by a tuple ({S_i}_{i=1}^N, {A_i}_{i=1}^N, p, {r_i}_{i=1}^N, γ, G), where N is the number of agents, S_i is the local state space of agent i and A_i denotes the set of actions available to agent i. We let S := ∏_{i=1}^N S_i and A := ∏_{i=1}^N A_i be the global state and joint action spaces respectively. At time step t+1, the global state s^{t+1} ∈ S is drawn from the transition s^{t+1} ∼ p(·|s^t, a^t), conditioned on the current state s^t and the joint action a^t = (a^t_1, a^t_2, ..., a^t_N) ∈ A. Each transition yields a reward r^t_i = r_i(s^t, a^t) for agent i, and γ is the discount factor. The aim of our algorithm is to learn a joint policy π(a^t|s^t) to maximize the overall long term reward (with an entropy term H(·|s) on the joint action a)
η(π) = E[ ∑_{t=0}^∞ γ^t ( ∑_{i=1}^N r^t_i + H(·|s^t) ) ],
where each agent i can just observe its own state s_i and the messages from the neighborhood communication. We denote the neighbors of agent i as N_i and further assume that the reward r_i depends on the state and the actions of itself and its neighbors, i.e., r_i(s, a) := r_i(s, a_i, a_{N_i}). Such an assumption is reasonable in many real scenarios, as discussed in the introduction. In the following, we start the derivation with the fully observable case, and discuss how to handle partial observability later. The roadmap of the following derivation: at the beginning, we prove that the optimal policy has a Markov Random Field (MRF) form, which reduces the exponentially large search space to a polynomial one. However, implementing an MRF policy is not trivial in the RL setting (e.g., sampling an action from the policy). Thus we resort to variational inference methods (we focus on the mean-field approximation in the main paper and leave other methods to the appendix). But this would introduce complicated computations. Finally, we apply the kernel embedding method introduced in Section 3 to solve this problem and learn the kernel embedding with neural networks. We also discuss how to handle the partially observable setting.
4.1 REDUCE POLICY SEARCHING SPACE
Recall that our aim is to maximize the long term reward with the entropy term. Therefore, we follow the definition of the optimal policy in probabilistic reinforcement learning (Levine, 2018) and obtain Proposition 1. It says that under the assumption r_i(s, a) = r_i(s, a_i, a_{N_i}), the optimal policy is in the form of a Markov Random Field (MRF). We prove the following proposition in Appendix I.1.
Proposition 1 The optimal policy has the form π^*(a^t|s^t) = (1/Z) exp( ∑_{i=1}^N ψ_i(s^t, a^t_i, a^t_{N_i}) ), where Z is the normalization term.
This proposition is important since it suggests that we should construct the policy π(a^t|s^t) with this form, e.g., a parametric family, to contain the optimal policy. If agent i and its neighbors compose a clique, the policy reduces to an MRF and ψ is the potential function. One common example is that the reward is a function of pairwise actions, i.e., r(s, a) = ∑_{i∈V} r(s, a_i) + ∑_{(i,j)∈E} r(s, a_i, a_j). Then the policy has the form
π(a|s) = (1/Z) exp( ∑_{i∈V} ψ̃_i(s, a_i) + ∑_{(i,j)∈E} ψ̃_{i,j}(s, a_i, a_j) ),
which is a pairwise MRF. For instance, in traffic light control, we can define a 2-D grid network and a pairwise reward function. The MRF formulation of the policy effectively reduces the policy space compared with the exponentially large one of the fully connected graph.
A straightforward way to leverage this observation is to define π_θ(a^t|s^t) as an MRF and then apply the policy gradient algorithm, e.g., in the way used by SAC: ∇_θ E_{s^t∼D} E_{a^t∼π_θ}[ log π_θ(a^t|s^t) − Q_κ(s^t, a^t) ]. However, it is still very hard to sample the joint action a^t from π_θ(a^t|s^t). In the next section, we resort to embeddings to alleviate this problem.
Recall that the remaining problem is how to sample the joint action from an MRF policy. Classical ways include Markov Chain Monte Carlo methods and variational inference. The former provides the guarantee of producing exact samples from the target density but is computationally intensive. Therefore it is not applicable in the multi-agent RL setting, since we need to sample actions once in each interaction with the environment. As such, we advocate the second approach. Here we use the mean-field approximation for simplicity of presentation and defer other variational inference methods, e.g., loopy belief propagation, to Appendix B.2. We use an intention propagation network with embeddings of the distributions to represent the update rule of the mean-field approximation.
Mean field approximation. We hope to approximate π^*(a|s) by the mean-field variational family p_i:
min_{(p_1, p_2, ..., p_N)} KL( ∏_{i=1}^N p_i(a_i|s) || π^*(a|s) ),
where we omit the superscript t to simplify the notation. We denote the optimal solution of the above problem by q_i. Using coordinate ascent variational inference, the optimal solution q_i should satisfy the following fixed-point equation (Bishop, 2006). Since the objective function is (generally) non-convex, such an update converges to a local optimum (Blei et al., 2017).
q_i(a_i|s) ∝ exp ∫ ∏_{j≠i} q_j(a_j|s) log π^*(a|s) da_{−i}.   (2)
For simplicity of presentation, in the following discussion we assume that the policy is a pairwise MRF, but the methodology applies to more general cases with more involved expressions. Particularly, we assume π^*(a|s) = (1/Z) exp( ∑_{i∈V} ψ_i(s, a_i) + ∑_{(i,j)∈E} ψ_{ij}(s, a_i, a_j) ). We plug this into equation 2 and obtain the following fixed-point equation:
log q_i(a_i|s) = c_i + ψ_i(s, a_i) + ∑_{j∈N_i} ∫ q_j(a_j|s) ψ_{ij}(s, a_i, a_j) da_j,   (3)
where c_i is some constant that does not depend on a_i. A small numerical sketch of this update for discrete actions is given below.
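For discrete actions, the fixed-point update in equation 3 can be run directly on tabular potentials. The numpy sketch below iterates it on a small pairwise MRF; the graph and potentials are random and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
N, A = 4, 3                                    # 4 agents, 3 actions each
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # a ring graph
psi_i = rng.normal(size=(N, A))                # unary potentials psi_i(s, a_i) for a fixed s
psi_ij = {e: rng.normal(size=(A, A)) for e in edges}   # pairwise potentials psi_ij(s, a_i, a_j)

q = np.full((N, A), 1.0 / A)                   # initialize q_i uniformly
for _ in range(50):                            # coordinate-ascent mean-field updates
    for i in range(N):
        log_qi = psi_i[i].copy()
        for (u, v) in edges:
            if u == i:
                log_qi += psi_ij[(u, v)] @ q[v]        # sum_j q_j(a_j) psi_ij(a_i, a_j)
            elif v == i:
                log_qi += psi_ij[(u, v)].T @ q[u]
        q[i] = np.exp(log_qi - log_qi.max())
        q[i] /= q[i].sum()
print(q)                                        # mean-field marginals q_i(a_i | s)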
We can understand this mean-field update rule from the perspective of intention propagation. Equation 3 basically says that each agent cannot make its decision independently. Instead, its policy q_i should depend on the policies of others, particularly those of its neighbors in the equation. Clearly, if we can construct an intention propagation corresponding to equation 3, the final policy obtained from intention propagation will converge to the mean-field approximation of the joint policy. However, we cannot directly apply this update in our algorithm, since it includes a complicated integral. To this end, in the next section we resort to the embedding of the distribution q_i (Smola et al., 2007), which maps distributions into a reproducing kernel Hilbert space.
Embed the update rule. Observe that the fixed-point formulation in equation 3 says that q_i(a_i|s) is a functional of the neighborhood marginal distributions {q_j(a_j|s)}_{j∈N_i}, i.e., q_i(a_i|s) = f(a_i, s, {q_j}_{j∈N_i}). Denote the d-dimensional embedding of q_j(a_j|s) by µ̃_j = ∫ q_j(a_j|s) φ(a_j|s) da_j. Notice that the form of the feature map φ is not fixed at the moment and will be learned implicitly by the neural network. Following the assumption in Section 3 that there exists a feature space such that the embeddings are injective, we can replace the distribution by its embedding and obtain the fixed-point formulation
q_i(a_i|s) = f̃(a_i, s, {µ̃_j}_{j∈N_i}).   (4)
For more theoretical guarantees on the kernel embedding, e.g., the convergence rate of the empirical mean of the kernel embedding, please refer to (Smola et al., 2007). Roughly speaking, once there are enough data, we can believe that the learned kernel embedding is close enough to the true kernel embedding. Therefore the updates of equation 4 and of equation 5 below would converge to the fixed point of equation 2. Recall from Section 3 that on both sides we can integrate w.r.t. the feature map φ, which yields µ̃_i = ∫ q_i(a_i|s) φ(a_i|s) da_i = ∫ f̃(a_i, s, {µ̃_j}_{j∈N_i}) φ(a_i|s) da_i. Thus we can rewrite it as a new operator on the embedding, which induces a fixed-point equation again: µ̃_i = T̃ ◦ (s, {µ̃_j}_{j∈N_i}). In practice, we run this fixed-point update for M iterations:
µ̃^m_i ← T̃ ◦ (s, {µ̃^{m−1}_j}_{j∈N_i}),  m = 1, ..., M.   (5)
Finally, we output the distribution q_i with q_i(a_i|s) = f̃(a_i, s, {µ̃^M_j}_{j∈N_i}). In the next section, we show how to represent these quantities by neural networks.
Parameterization by Neural Networks. In general, f̃ and T̃ have a complicated dependency on ψ and φ. Instead of learning such a dependency, we directly approximate f̃ and T̃ by neural networks. For instance, we can represent the operator T̃ in equation 5 by µ̃_i = σ(W_1 s + W_2 ∑_{j∈N_i} µ̃_j), where σ is a nonlinear activation function and W_1, W_2 are matrices whose number of rows equals d. Interestingly, this is indeed the message passing form of a Graph Neural Network (GNN) (Hamilton et al., 2017). Thus we can use an M-hop (layer) GNN to represent the fixed-point update in equation 5. If the action space is discrete, the output q_i(a_i|s) is a softmax function; in this case f̃ is a fully connected layer with a softmax output. When it is continuous, we can output a Gaussian distribution with the reparametrization trick (Kingma & Welling, 2019). We denote this intention propagation procedure as the intention propagation network Λ_θ(a|s) with parameter θ in Figure 1(b). Figure 1(a) illustrates the graph and the message passing procedure. Agent 1 receives the embeddings (intentions) µ̃^{m−1}_2, µ̃^{m−1}_5, µ̃^{m−1}_6 from its neighbors, then updates its own embedding with the operator T̃ and spreads its new embedding µ̃^m_1 at the next iteration. Figure 1(b) gives the details of the GNN parameterization. Here we use agent 1 as an example. To ease the exposition, we assume agent 1 has just one neighbor, agent 2. Each agent observes its own state s_i. After an MLP and a softmax layer (we do not sample actions here, but just use the probabilities of the actions), we get an embedding µ̃^0_i, which is the initial distribution of the policy. Then agent 1 receives the embedding µ̃^0_2 of its neighbor (agent 2). After a GNN layer that combines the information, e.g., µ̃^1_1 = Relu[W_1(s_1 + s_2) + W_2(µ̃^0_1 + µ̃^0_2)] (W_1, W_2 are shared across all agents as in a GNN), we obtain the new embedding µ̃^1_1 of agent 1. Notice that we also do message passing on the state, since in practice the global state is not available. In the second layer, we do similar things. We defer the detailed discussion and the extension to other neural networks to Appendix B due to the space constraint.
4.2 ALGORITHM
We are ready to give the overall algorithm by combining all the pieces together. The detailed derivations of V_i, Q_i for agent i and the corresponding loss functions are given in Appendix I, due to the space constraint. Recall that we have a mean-field approximation q_i of the joint policy, which is obtained by M iterations of intention propagation. We represent this procedure by an M-hop graph neural network with parameter θ as discussed above. Notice that this factorization is different from the case π(a|s) = ∏_{i=1}^N π(a_i|s) in (Zhang et al., 2018; Foerster et al., 2018), since q_i(a_i|s) depends on the information of other agents' plans. Using the mean-field approximation q_i, we can further decompose Q = ∑_{i=1}^N Q_i and V = ∑_{i=1}^N V_i, see Appendix I. We use neural networks to approximate the V_i and Q_i functions with parameters η_i and κ_i respectively. As in TD3 (Fujimoto et al., 2018), for each agent i we have a target value network V_{η̄_i} and two Q_{κ_i} functions to mitigate overestimation, training them simultaneously with the same data but only selecting the minimum of them as the target in the value update. In the following we denote q_i(a_i|s) as q_{i,θ}(a_i|s) to explicitly indicate its dependence on the intention propagation network Λ_θ. We use D to denote the replay buffer. The whole algorithm is presented in Algorithm 1.
Loss Functions. The loss of the value function V_i:
J(η_i) = E_{s^t∼D}[ (1/2) ( V_{η_i}(s^t) − E_{(a^t_i, a^t_{N_i})∼(q_i, q_{N_i})}[ Q_{κ_i}(s^t, a^t_i, a^t_{N_i}) − log q_{i,θ}(a^t_i|s^t) ] )^2 ].
The loss of Q_i:
J(κ_i) = E_{(s^t, a^t_i, a^t_{N_i})∼D}[ (1/2) ( Q_{κ_i}(s^t, a^t_i, a^t_{N_i}) − Q̂_i(s^t, a^t_i, a^t_{N_i}) )^2 ],
where Q̂_i(s^t, a^t_i, a^t_{N_i}) = r_i + γ E_{s^{t+1}∼p(·|s^t, a^t)}[ V_{η̄_i}(s^{t+1}) ].
The loss of the policy:
J(θ) = E_{s^t∼D, a^t∼∏_{i=1}^N q_i}[ ∑_{i=1}^N log q_{i,θ}(a^t_i|s^t) − ∑_{i=1}^N Q_{κ_i}(s^t, a^t_i, a^t_{N_i}) ].
It is interesting to compare the loss with the counterpart in the single agent SAC in section 3.
• q_{i,θ}(a_i|s) is the output of the intention propagation network Λ_θ(a|s), parameterized by a graph neural network. Thus it depends on the policies of the other agents. • Q_{κ_i} depends on the agent's own action and those of its neighbors, which can also be accomplished by the graph neural network in practice.
Algorithm 1 Intention Propagation
Inputs: Replay buffer D; V_i, Q_i for each agent i; intention propagation network Λ_θ(a^t|s^t) with outputs {q_{i,θ}}_{i=1}^N; learning rates l_η, l_κ, l_θ; moving average parameter τ for the target network.
for each iteration do
  for each environment step do
    sample a^t ∼ ∏_i q_{i,θ}(a^t_i|s^t) from the intention propagation network
    s^{t+1} ∼ p(s^{t+1}|s^t, a^t)
    D ← D ∪ ( s^t_i, a^t_i, r^t_i, s^{t+1}_i )_{i=1}^N
  end for
  for each gradient step do
    update η_i, κ_i, θ, η̄_i:
    η_i ← η_i − l_η ∇J(η_i),  κ_i ← κ_i − l_κ ∇J(κ_i)
    θ ← θ − l_θ ∇J(θ),  η̄_i ← τ η_i + (1 − τ) η̄_i
  end for
end for
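The pseudo-code above maps to a fairly standard off-policy training loop. The Python sketch below is a schematic of one iteration (one environment step plus one gradient step); the environment, replay buffer, networks, optimizers, and loss functions are assumed callables and the gradient bookkeeping is simplified.

def train_step(env, buffer, prop_net, V, V_target, optimizers, losses, tau=0.005):
    # environment step: sample the joint action from the intention propagation network
    s = env.current_state()
    a = prop_net.sample(s)                        # a ~ prod_i q_{i,theta}(a_i | s)
    s_next, r = env.step(a)                       # r = (r_1, ..., r_N)
    buffer.add(s, a, r, s_next)

    # gradient step on a sampled batch (schematic: per-agent V and Q updates, then the policy)
    batch = buffer.sample()
    for i, (opt_eta, opt_kappa) in enumerate(zip(optimizers["eta"], optimizers["kappa"])):
        opt_eta.zero_grad();   losses.value(i, batch).backward();   opt_eta.step()
        opt_kappa.zero_grad(); losses.q(i, batch).backward();       opt_kappa.step()
    optimizers["theta"].zero_grad()
    losses.policy(batch).backward()
    optimizers["theta"].step()

    # Polyak averaging of the target value networks: eta_bar <- tau * eta + (1 - tau) * eta_bar
    for v, v_t in zip(V, V_target):
        for p, p_t in zip(v.parameters(), v_t.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)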
Handle the Partial Observation: So far, we have assumed that agents can observe the global state, while in practice each agent just observes its own state s_i. Thus, besides the communication in the intention propagation, we also do message passing on the state embeddings with the graph neural network. The idea of this local state sharing is similar to (Jiang et al., 2020), while the whole structure of our work is quite different from (Jiang et al., 2020). See the discussion in the related work.
5 EXPERIMENT
In this section, we evaluate our method and eight state-of-the-art baselines on more than ten different scenarios from three popular MARL platforms: (1) CityFlow, a traffic signal control environment
(Tang et al., 2019), an advanced version of SUMO (Lopez et al., 2018) widely used in the MARL community; (2) the multiple particle environment (MPE) (Mordatch & Abbeel, 2017); and (3) the grid-world platform MAgent (Zheng et al., 2018). Our intention propagation (IP) empirically outperforms all baselines on all scenarios, especially on the large-scale problems.
5.1 SETTINGS
We give a brief introduction to the settings of the experiments and defer the details, such as the hyperparameter tuning of intention propagation and the baselines, to Appendix D. Notice that all algorithms are tested in the partially observable setting, i.e., each agent can just observe its own state s_i.
In the traffic signal control problem (left panel of Figure 2), each traffic light at an intersection is an agent. The goal is to learn policies of the traffic lights that reduce the average waiting time and alleviate the traffic jam. Graph for CityFlow: the graph is a 2-D grid induced by the map (e.g., Figure 2). The roads are the edges which connect the agents. We can define the cost −r_i as the traveling time of vehicles around intersection i; thus the total cost indicates the average traveling time. Obviously, r_i has a close relationship with the actions of the neighbors of agent i but has little dependence on the traffic lights far away. Therefore our assumption on the reward function holds. We evaluate different methods on both real-world and synthetic traffic data under different numbers of intersections.
MPE (Mordatch & Abbeel, 2017) and MAgent (Zheng et al., 2018) (Figure 2) are popular particle environments for MARL (Lowe et al., 2017; Jiang et al., 2020). Graph for particle environments: each agent has connections (i.e., edges of the graph) with its k nearest neighbors. Since the graph is dynamic, we update the adjacency matrix of the graph every n steps, e.g., n = 5 steps. This is just a small overhead compared with the training of the neural networks. The reward functions also have the local property, since they are explicitly or implicitly affected by the distances between agents. For instance, in heterogeneous navigation, if small agents collide with big agents, they obtain a large negative reward. Thus their reward depends on the actions of the nearby agents. Similarly, in the jungle environment, an agent can attack the agents nearby to obtain a high reward. A small sketch of the k-nearest-neighbor graph construction is given below.
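The k-nearest-neighbor graph and its periodic refresh are cheap to compute. A small numpy sketch is given below; the value of k and the refresh interval are illustrative.

import numpy as np

def knn_adjacency(positions, k):
    # positions: (N, 2) agent coordinates; returns a 0/1 adjacency matrix (no self loops)
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    neighbors = np.argsort(dists, axis=1)[:, :k]
    adj = np.zeros((len(positions), len(positions)))
    for i, nbrs in enumerate(neighbors):
        adj[i, nbrs] = 1.0
    return adj

adj = knn_adjacency(np.random.rand(30, 2), k=8)
# refreshed every n (e.g., 5) environment steps, as described above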
Baselines. We compare our method against the eight different baselines mentioned in the introduction and related work sections: QMIX (Rashid et al., 2018); MADDPG (Lowe et al., 2017); permutation invariant critic (PIC) (Liu et al., 2019); graph convolutional reinforcement learning (DGN) (Jiang et al., 2020); independent Q-learning (IQL) (Tan, 1993); permutation invariant MADDPG with a data shuffling mechanism (MADDPGS); COMA (Foerster et al., 2018); and MFQ (Yang et al., 2018). These baselines are reported as leading algorithms for the tasks in CityFlow, MPE and MAgent. Among them, DGN and MFQ need communication with neighbors during training and execution. Also notice that PIC assumes the actor can observe the global state; thus, in the partially observable setting, each agent in PIC also needs to communicate to get the global state information during training and execution. Further details on the baselines are given in Appendix E.1.
Neural Network and Parameters. Recall the intention propagation network is represented by GNN. In our experiment, our graph neural network has hop = 2 (2 GNN layers, i.e., M = 2) and 1 fully-connected layer at the top. Each layer contains 128 hidden units. Other hyperparameters are listed in appendix H.
5.2 COMPARISON TO STATE-OF-THE-ART
In this section, we compare intention propagation (IP) with other baselines. The experiments are evaluated by average episode reward (Lowe et al., 2017). For CityFlow tasks, average reward refers
to negative average travel time. All experiments are repeated for 5 runs with different random seeds. We report the mean and standard deviation in the curves. We report the results on six experiments and defer all the others to appendix G due to the limit of space.
CityFlow. We first evaluate our algorithm on the traffic signal control problem. Particularly, we increase the number of intersections (agents) gradually to increase the difficulty of the tasks. Figure 3 presents the performance of different methods on both real-world and synthetic CityFlow data with different numbers of intersections. On the Manhattan City task, our intention propagation (IP) and the baselines PIC and DGN achieve better reward than the other methods, while our method reaches a higher reward within fewer steps. On the larger task (N=100), both PIC and DGN have large variance and obtain poor performance. The experiment with N=1225 agents is an extremely challenging task. Our algorithm outperforms all baselines by a wide margin. The runner-up is MADDPG with the data shuffling mechanism; its final performance is around −4646 and suffers from large variance. In contrast, the performance of our method is around −569 (much higher than the baselines). It is clear that, in both real-world and synthetic CityFlow scenarios, the proposed IP method obtains the best performance. We defer further experimental results to appendix G.
MPE and MAgent. Figure 4 demonstrates the performance of different methods on three other representative scenarios: a small task, cooperative navigation (N=30), and two large-scale tasks, heterogeneous navigation (N=100) and prey and predator (N=100). We run all algorithms long enough (more than 1e6 steps). In all experiments, our algorithm performs best. For cooperative navigation, MADDPGS performs better than MADDPG. The potential improvement comes from the data-shuffling mechanism, which makes MADDPGS more robust to the manually specified order of agents. QMIX performs much better than MADDPG, MADDPGS and IQL. However, its performance is not stable even in the small setting (N=30). DGN is better and more stable than QMIX. However, when solving large-scale settings, its performance is much worse than PIC and our intention propagation (IP). Although PIC can solve large-scale tasks, our IP method is still much better. In prey and predator, there are two groups of agents: good agents and adversaries. To make a fair comparison of rewards across different methods, we fix the good agents' policies and use all the methods to learn the adversaries' policies. Such a setting is commonly used in many articles (Lowe et al., 2017; Liu et al., 2019).
Stability. Stability is a key criterion to evaluate MARL. In all experiments, our method is quite stable with small variance. For instance, as shown in Figure 3 (b), DGN approaches −1210 ± 419 on the CityFlow scenario with N=100 intersections, while our method approaches −465 ± 20 after 1.6 × 10^6 steps (much better and more stable). The reason is that, to make the joint decision, the agent in our algorithm can adjust its own policy properly by considering other agents' plans.
Ablation Study: We conduct a set of ablation studies related to the effect of the joint policy, the graph, the hop size, the number of neighbors and the assumption on the reward function. Particularly, we find the joint policy is essential for good performance. In CityFlow, the performance with the traffic graph (the 2-D grid induced by the road map) is better than with the fully connected graph. In MPE and MAgent, we define the adjacency matrix based on the k nearest neighbors and pick k = 8 in large-scale problems and k = 4 in small-scale problems. In all of our experiments, we choose the 2-hop GNN. Due to space limitations, we only summarize our conclusions here and place the details in appendix F.
A ORGANIZATION OF THE APPENDIX
In appendix B, we give the details on the intention propagation network and the parameterization of the GNN. We explain intention propagation from the view of MARL. At last, we extend intention propagation to other approximations which converge to other solutions of the variational inference. Notice that such extensions of the algorithm can also be easily parameterized by neural networks.
In Appendix C, we give the details of the algorithm deferred from the main paper. Appendix D summarizes the configuration of the experiments and the MARL environments. Appendix E gives more details on the baselines and the hyperparameters of the GNN used in our model. Appendix F conducts the ablation study deferred from the main paper. Appendices G and H give more experimental results and the hyperparameters used in the algorithms. In appendix I, we derive the algorithm and prove proposition 1.
B INTENTION PROPAGATION NETWORK
B.1 DETAILS ON THE INTENTION PROPAGATION NETWORK
In this section, we give the details on the intention propagation network deferred from the main paper. We first illustrate the message passing of intention propagation derived in section 4.1. Then we give details on how to construct the graph neural network.
Message passing and explanation from the view of MARL: µ̃_i is the embedding of the policy of agent i, which represents the intention of agent i. At iteration 0, every agent makes an independent decision. The policy of agent i is mapped into its embedding µ̃_i^0; we call it the intention of agent i at iteration 0. Then agent i sends its plan to its neighbors. In Figure 5, µ̃_i^m is the d-dimensional (d = 3 in this figure) embedding of q_i at the m-th iteration of intention propagation. We draw the update of µ̃_1^(m) as an example. Agent 1 receives the embeddings (intentions) µ̃_2^(m−1), µ̃_5^(m−1), µ̃_6^(m−1) from its neighbors, and then updates its own embedding with the operator T̃. After M iterations, we obtain µ̃_1^M and output the policy distribution q_1 using equation 4. A similar procedure holds for the other agents. At each RL step t, we run this procedure (with M iterations) once to generate the joint policy. M is small in general, e.g., M = 2 or 3, so it is efficient.
Parameterization on GNN: We then illustrate the parameterization of the graph neural network in Figure 6. If the action space is discrete, the output q_i(a_i|s) is a softmax function. When it is continuous, we output a Gaussian distribution (mean and variance) with the reparametrization trick (Kingma & Welling, 2019). Here, we draw a 2-hop (layer) GNN to parameterize it for discrete-action intention propagation. In Figure 6 (b), each agent observes its own state s_i. After an MLP and softmax layer (we do not sample here, and just use the output probabilities of the actions), we get an embedding µ̃_i^0, which is the initial distribution of the policy. In the following, we use agent 1 as an example. To ease the exposition, we assume agent 1 has just one neighbor, agent 2. Agent 1 receives the embedding µ̃_2^0 of its neighbor. After a GNN layer combining the information, e.g., Relu[W_1(s_1 + s_2) + W_2(µ̃_1^0 + µ̃_2^0)], we obtain the new embedding µ̃_1^1 of agent 1. Notice that we also do message passing on the state, since in practice the global state is not available. In the second layer, we do similar things: agent 1 receives the embedding µ̃_2^1 from its neighbor and gets a new embedding µ̃_1^2. This embedding then passes through an MLP+softmax layer, which outputs the probability of each action, i.e., q_1(a_1|s).
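To make the two-layer parameterization above concrete, here is a minimal PyTorch sketch of the propagation step Relu[W1(aggregated states) + W2(aggregated intentions)] for discrete actions; the class name, layer sizes, and the dense adjacency representation are illustrative assumptions, not the exact architecture released with the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntentionPropagation(nn.Module):
    """Sketch of an M-hop (default M=2) intention propagation network, discrete actions."""
    def __init__(self, state_dim, n_actions, hidden=128, hops=2):
        super().__init__()
        self.init_policy = nn.Linear(state_dim, n_actions)       # produces mu^0_i logits
        in_dims = [n_actions] + [hidden] * (hops - 1)
        self.w_state = nn.ModuleList([nn.Linear(state_dim, hidden) for _ in range(hops)])
        self.w_embed = nn.ModuleList([nn.Linear(d, hidden) for d in in_dims])
        self.head = nn.Linear(hidden, n_actions)                  # final policy head

    def forward(self, states, adj):
        """states: (N, state_dim); adj: (N, N) 0/1 adjacency including self-loops."""
        mu = F.softmax(self.init_policy(states), dim=-1)          # initial intentions mu^0
        for ws, we in zip(self.w_state, self.w_embed):
            agg_s = adj @ states                                  # sum of own + neighbor states
            agg_m = adj @ mu                                      # sum of own + neighbor intentions
            mu = F.relu(ws(agg_s) + we(agg_m))                    # Relu[W1(...) + W2(...)]
        return F.softmax(self.head(mu), dim=-1)                   # q_i(a_i | s) for every agent
```

For example, `IntentionPropagation(state_dim=26, n_actions=5)(states, adj)` returns a row-stochastic matrix whose i-th row is agent i's policy q_i(a_i|s).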
B.2 EXTENSION TO OTHER VARIATIONAL INFERENCE METHODS AND NEURAL NETWORKS
In this section, we show how to approximate the joint policy with loopy belief propagation in variational inference (Yedidia et al., 2001). This leads to a new form of neural network beyond the vanilla GNN illustrated above.
The objective function in loopy belief propagation is the Bethe free energy (Yedidia et al., 2001). Different from the mean-field approximation, it introduces another variational variable q_ij, which brings more flexibility to the approximation. The following is the objective function in our case:
\[
\min_{q_i,\,q_{ij},\,ij\in E}\; -\sum_i (|N_i|-1)\int q_i(a_i|s)\log\frac{q_i(a_i|s)}{\psi_i(s,a_i)}\,da_i
+ \sum_{ij}\int q_{ij}(a_i,a_j|s)\log\frac{q_{ij}(a_i,a_j|s)}{\psi_{ij}(s,a_i,a_j)\psi_i(s,a_i)\psi_j(s,a_j)}\,da_i\,da_j
\]
\[
\text{s.t.}\quad \int q_{ij}(a_i,a_j|s)\,da_j = q_i(a_i|s),\qquad \int q_{ij}(a_i,a_j|s)\,da_i = q_j(a_j|s). \tag{6}
\]
Solving the above problem, we obtain the fixed-point algorithm
\[
m_{ij}(a_j|s) \leftarrow \int \prod_{k\in N_i\setminus j} m_{ki}(a_i|s)\,\psi_i(s,a_i)\,\psi_{ij}(s,a_i,a_j)\,da_i,
\qquad
q_i(a_i|s) \leftarrow \psi_i(s,a_i)\prod_{j\in N_i} m_{ji}(a_i|s).
\]
Similar to the mean-field approximation case, we have
\[
m_{ij}(a_j|s) = f\big(a_j, s, \{m_{ki}\}_{k\in N_i\setminus j}\big),\qquad q_i(a_i|s) = g\big(a_i, s, \{m_{ki}\}_{k\in N_i}\big),
\]
which says that the messages m_ij and the marginals q_i are functionals of the messages from neighbors. Denoting the embeddings ν̃_ij = ∫ ψ_j(s, a_j) m_ij(a_j|s) da_j and µ̃_i = ∫ ψ_i(s, a_i) q_i(a_i|s) da_i, we have
\[
\tilde\nu_{ij} = \tilde T \circ \big(s, \{\tilde\nu_{ki}\}_{k\in N_i\setminus j}\big),\qquad
\tilde\mu_i = \tilde T \circ \big(s, \{\tilde\nu_{ki}\}_{k\in N_i}\big).
\]
Again, we can parameterize the above equations by a (graph) neural network:
\[
\tilde\nu_{ij} = \sigma\Big(W_1 s + W_2\sum_{k\in N_i\setminus j}\tilde\nu_{ki}\Big),\qquad
\tilde\mu_i = \sigma\Big(W_3 s + W_4\sum_{k\in N_i}\tilde\nu_{ki}\Big).
\]
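A minimal sketch of one such embedded loopy-BP update is given below, assuming edge messages stored in a dictionary and dense weight matrices; the function signature and the sigmoid nonlinearity are our own assumptions for illustration.

```python
import torch

def loopy_bp_step(s, nu, neighbors, W1, W2, W3, W4):
    """One embedded loopy-BP update (a sketch; shapes and names are assumptions).

    s: (d_s,) state embedding; nu: dict {(k, i): tensor(d,)} of edge messages;
    neighbors: dict {i: list of neighbor indices} for a symmetric graph.
    """
    new_nu = {}
    for i, Ni in neighbors.items():
        for j in Ni:
            incoming = [nu[(k, i)] for k in Ni if k != j]
            agg = torch.stack(incoming).sum(0) if incoming else torch.zeros(W2.shape[1])
            new_nu[(i, j)] = torch.sigmoid(W1 @ s + W2 @ agg)     # edge update nu_ij
    mu = {}
    for i, Ni in neighbors.items():
        agg = torch.stack([new_nu[(k, i)] for k in Ni]).sum(0)
        mu[i] = torch.sigmoid(W3 @ s + W4 @ agg)                  # node readout mu_i
    return new_nu, mu
```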
Following a similar procedure, we can derive different intention propagation algorithms by changing the objective function, which corresponds to, e.g., double-loop belief propagation (Yuille, 2002), tree-reweighted belief propagation (Wainwright et al., 2003) and many others.
C ALGORITHM
We present some remarks on the intention propagation algorithm (Algorithm 1) deferred from the main paper.
Remark: To calculate the loss function J(η_i), each agent needs to sample the global state and (a_i, a_{N_i}). Thus we first sample a global state from the replay buffer and then sample all actions a at once using the intention propagation network.
D FURTHER DETAILS ABOUT ENVIRONMENTS AND EXPERIMENTAL SETTING
Table 1 summarizes the setting of the tasks in our experiment.
D.1 CITYFLOW
CityFlow (Tang et al., 2019) is an open-source MARL environment for large-scale city traffic signal control 1. After the road map and traffic flow data are fed into the simulator, each vehicle moves from its origin location to its destination. The traffic data contain bidirectional and dynamic flows with turning traffic. We evaluate different methods on both real-world and synthetic traffic data. For real-world data, we select traffic flow data from the Gudang sub-district, Hangzhou, China and from Manhattan, USA 2. For synthetic data, we simulate several different road networks: a 7 × 7 grid network (N = 49) and large-scale grid networks with N = 10 × 10 = 100, 15 × 15 = 225, 35 × 35 = 1225. Each traffic light at an intersection is an agent. In the real-world setting (Hangzhou, Manhattan), the graph is a 2-D grid induced by the road map. Particularly, the roads are edges which connect the nodes (agents) of the graph. For the synthetic data, the map is an n × n 2-D grid (similar to Figure 7), where edges represent roads and nodes are the traffic lights. We present the experimental results deferred from the main paper in Figure 10.
D.2 MPE
In MPE (Mordatch & Abbeel, 2017) 3, the observation of each agent contains the relative locations and velocities of neighboring agents and landmarks. The number of visible neighbors in an agent's observation is equal to or less than 10.
1https://github.com/cityflow-project/CityFlow 2We download the maps from https://github.com/traffic-signal-control/sample-code. 3To make the environment more computation-efficient, Liu et al. (2019) provided an improved version of MPE. The code is released in https://github.com/IouJenLiu/PIC.
We consider four scenarios in MPE. (1) Cooperative navigation: N agents work together and move to cover L landmarks. The closer the agents get to the landmarks, the larger the reward they obtain. In this scenario, the agent observes its location and velocity, and the relative locations of the nearest 5 landmarks and N agents. The observation dimension is 26. (2) Prey and predator: N slower cooperating agents must chase the faster adversaries around a randomly generated environment with L large landmarks. Note that the landmarks impede the way of all agents and adversaries, which makes the scenario much more challenging. In this scenario, the agent observes its location and velocity, and the relative locations of the nearest 5 landmarks and 5 preys. The observation dimension is 34. (3) Cooperative push: N cooperating agents are rewarded for pushing a large ball to a landmark. In this scenario, each agent can observe the 10 nearest agents and 5 nearest landmarks. The observation dimension is 28. (4) Heterogeneous navigation: this scenario is similar to cooperative navigation except that the N agents are divided into N/2 big, slow agents and N/2 small, fast agents. If small agents collide with big agents, they obtain a large negative reward. In this scenario, each agent can observe the 10 nearest agents and 5 nearest landmarks. The observation dimension is 26.
Further details about this environment can be found at https://github.com/IouJenLiu/PIC.
D.3 MAGENT
MAgent (Zheng et al., 2018) is a grid-world platform and serves as another popular environment for evaluating MARL algorithms. Jiang et al. (2020) tested their method on two scenarios: jungle and battle. In jungle, there are N agents and F foods. The agents receive a positive reward if they eat food, but get a higher reward if they attack other agents. This is an interesting scenario, which is called a moral dilemma. In battle, N agents learn to fight against several enemies, which is very similar to the prey and predator scenario in MPE. In our experiment, we evaluate our method on jungle.
In our experiment, the size of the grid-world environment is 30 × 30. Each agent occupies one grid cell and can observe the 11 × 11 grid cells centered at the agent together with its own coordinates. The actions include moving and attacking along the coordinates. Further details about this environment can be found at https://github.com/geek-ai/MAgent and https://github.com/PKU-AI-Edge/DGN.
E FURTHER DETAILS ON SETTINGS
E.1 DESCRIPTION OF OUR BASELINES
We compare our method with multi-agent deep deterministic policy gradient (MADDPG) (Lowe et al., 2017), a strong actor-critic algorithm based on the framework of centralized training with decentralized execution; QMIX (Rashid et al., 2018), a Q-learning based monotonic value function factorization algorithm; permutation invariant critic (PIC) (Liu et al., 2019), a leading algorithm on MPE yielding identical output irrespective of the agent permutation; graph convolutional reinforcement learning (DGN) (Jiang et al., 2020), a deep Q-learning algorithm based on a deep convolutional graph neural network with multi-head attention, which is a leading algorithm on MAgent; and independent Q-learning (IQL) (Tan, 1993), which decomposes a multi-agent problem into a collection of simultaneous single-agent problems that share the same environment and usually serves as a surprisingly strong benchmark in mixed and competitive games (Tampuu et al., 2017). In homogeneous settings, the input to the centralized critic in MADDPG is the concatenation of all agents' observations and actions along the specified agent order, which does not hold the property of permutation invariance. We follow the similar setting in (Liu et al., 2019) and shuffle the agents' observations and actions in the training batch 4. COMA (Foerster et al., 2018) directly assumes the policy is factorized and calculates a counterfactual baseline to address the credit assignment problem in MARL. In our experiment, since we can observe each reward function, each agent can directly approximate its Q function without the counterfactual baseline. MFQ derives the algorithm from the view of the mean-field game (Yang et al., 2018). Notice that the aim of the mean-field game is to find the Nash equilibrium rather
4This operation doesn’t change the state of the actions.
than maximization of the total reward of the group. Furthermore, it needs the assumption that agents are identical.
E.2 NEURAL NETWORKS ARCHITECTURE
To learn features from the structural graph built from the spatial distances between agents, we design our graph neural network based on the idea of structure2vec (Dai et al., 2016), a strong graph embedding tool, which is an effective and scalable approach for structured data representation through embedding latent variable models into feature spaces. Structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. After using M graph neural network layers, each node can receive information from M-hop neighbors by message passing. Recently, the attention mechanism has empirically led to more powerful representations on graph data (Veličković et al., 2017; Jiang et al., 2020). We employ this idea in our graph neural network. In some settings, such as the heterogeneous navigation scenario from MPE, the observations of different groups of agents are heterogeneous. To handle this issue, we use different nonlinear functions to extract the features from heterogeneous observations and map the observations into a latent layer, then use the same graph neural network to learn the policy for all types of agents (see the sketch below). In our experiment, our graph neural network has M = 2 layers and 1 fully-connected layer at the top. Each layer contains 128 hidden units.
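The following is a minimal sketch of the per-type encoders feeding a shared latent space that the common GNN layers then consume; the module and type names are our own assumptions for illustration.

```python
import torch.nn as nn

class HeteroEncoder(nn.Module):
    """Sketch: one MLP encoder per agent type, mapping heterogeneous
    observations into a shared latent space used by the common GNN."""
    def __init__(self, obs_dims, latent=128):
        super().__init__()
        # e.g. obs_dims = {"big": 26, "small": 26}
        self.encoders = nn.ModuleDict({
            t: nn.Sequential(nn.Linear(d, latent), nn.ReLU())
            for t, d in obs_dims.items()
        })

    def forward(self, obs, agent_type):
        return self.encoders[agent_type](obs)   # shared GNN layers run on these latents
```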
F ABLATION STUDIES
F.1 INDEPENDENT POLICY VS INTENTION PROPAGATION.
We first give a toy example where the independent policy (without communication) fails. To implement such an algorithm, we just replace the intention propagation network by an independent policy network and keep the other parts the same. Think about a 3 × 3 2-D grid in Figure 7 where the global state (observed by all agents) is a constant scalar (thus carrying no information). Each agent chooses an action a_i = 0 or 1. The aim is to maximize the reward −(a_1−a_2)^2 −(a_1−a_4)^2 −(a_2−a_3)^2 − ... −(a_8−a_9)^2, i.e., the summation of the reward function over the edges; see the sketch below. Obviously the optimal value is 0. The optimal policy for the agents is a_1 = a_2 = ... = a_9 = 0 or a_1 = a_2 = ... = a_9 = 1. However, the independent policy fails, since each agent does not know how its allies pick their actions; thus the learned policy is random. We show the result of this toy example in Figure 7, where intention propagation learns the optimal policy.
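The short sketch below, assuming agents are indexed 1..9 row by row on the grid, enumerates all 2^9 joint actions of this toy problem and confirms that the optimal group reward is 0.

```python
import itertools

def grid_edges(n=3):
    """Edges of an n x n grid, agents indexed 1..n*n row by row."""
    edges = []
    for r in range(n):
        for c in range(n):
            i = r * n + c + 1
            if c + 1 < n: edges.append((i, i + 1))   # right neighbor
            if r + 1 < n: edges.append((i, i + n))   # bottom neighbor
    return edges

def toy_reward(a, edges):
    """a: dict agent -> action in {0, 1}; reward = -sum over edges of (a_i - a_j)^2."""
    return -sum((a[i] - a[j]) ** 2 for i, j in edges)

edges = grid_edges()
best = max(toy_reward(dict(zip(range(1, 10), bits)), edges)
           for bits in itertools.product([0, 1], repeat=9))
print(best)   # 0, attained by the all-0 or all-1 joint action
```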
F.2 GRAPH TYPES, NUMBER OF NEIGHBORS, AND HOP SIZE
We conduct a set of ablation studies related to graph types, the number of neighbors, and the hop size. Figure 8(a) and Figure 8(b) demonstrate the performance of our method on the traffic graph and the fully-connected graph on the scenarios (N=49 and N=100) of CityFlow. In the experiment, each agent can only get information from its neighbors through message passing (state embedding and policy embedding). The result makes sense, since the traffic graph represents the structure of the map. Although an agent in the fully connected graph would obtain global information, it may introduce irrelevant information from agents far away.
Figure 8(c) and Figure 8(d) demonstrate the performance under different numbers of neighbors and hop sizes on cooperative navigation (N=30), respectively. The algorithm with neighbors=8 has the best performance. Again, the fully connected graph (neighbors=30) may introduce irrelevant information from agents far away; thus its performance is worse than the algorithm whose graph is constructed by the k-nearest neighbors. In addition, the fully connected graph introduces more computation during training. In Figure 8(d), we increase the hop size from 1 to 3. The performance of IP with hop=2 is much better than that with hop=1, while IP with hop=3 is just slightly better than that with hop=2. It means the graph neural network with hop size = 2 has aggregated enough information.
In Figure 8(e), we test the importance of the k-nearest-neighbor structure. IP(neighbors=3)+random means that we pick 3 agents uniformly at random as the neighbors. Obviously, IP with k-nearest neighbors outperforms IP with the random graph by a large margin. In Figure 8(f), we update the adjacency matrix every 1, 5, and 10 steps. IP(neighbors=8) denotes that we update the adjacency matrix every step, while IP(neighbors=8)+reset(5) and IP(neighbors=8)+reset(10) denote that we update the adjacency matrix every 5 and 10 steps, respectively. Obviously, IP(neighbors=8) has the best result, and IP(neighbors=8)+reset(5) is better than IP(neighbors=8)+reset(10). The result makes sense, since the adjacency matrix is more accurate if the update interval is smaller.
F.3 ASSUMPTION VIOLATION
The aforementioned experimental evaluations are based on the mild assumption that the actions of agents that are far away do not affect the learner because of their physical distance. It would be interesting to see the performance when the assumption is violated. As such, we modify the reward in the experiment of cooperative navigation. In particular, the reward is defined by r = r_1 + r_2, where r_1 encourages the agents to cover (get close to) landmarks and r_2 is a log function of the distances between agents (farther agents have a larger impact). To make a violation, we let r_2 dominate the reward. We conduct the experiments with hop = 1, 2, 3. Figure 9 shows that the rewards obtained by our method are 4115 ± 21, 4564 ± 22, and 4586 ± 25, respectively. This is expected in this scenario, since we should use a large hop size to collect information from the far-away agents.
G FURTHER EXPERIMENTAL RESULTS
For most of the experiments, we run them long enough (1 million to 1.5 million steps) and then stop (even if in some cases our algorithm has not converged to its asymptotic result), since every experiment in MARL may cost several days. We present the results on CityFlow in Figure 10. Figure 11 provides the experimental results on the cooperative navigation instances with N = 15, N = 30 and N = 200 agents. Note that the instance with N = 200 is a large-scale and challenging multi-agent reinforcement learning setting (Chen et al., 2018; Liu et al., 2019), which typically needs several days to run millions of steps. It is clear that IQL, MADDPG and MADDPGS perform well in the small setting (N=15); however, they fail in the large-scale instances (N = 30 and N = 200). In the instance with N = 30, MADDPGS performs better than MADDPG. The potential reason is that, with the help of shuffling, MADDPGS is more robust to the manually specified order of agents. Although QMIX performs well in the instances with N = 15 and N = 30, it has large variance in both settings. DGN, using a graph convolutional network, holds the property of permutation invariance and obtains much better performance than QMIX on these two settings. However, it also fails to solve the large-scale setting with N = 200 agents. Empirically, after 1.5 × 10^6 steps, PIC obtains a large reward (−425085 ± 31259) on this large-scale setting. Despite all these, the proposed intention propagation (IP) approaches −329229 ± 14730 and is much better than PIC. Furthermore, Figure 11 shows the results of different methods on (d) jungle (N=20, F=12) and (e) prey and predator (N=100). The experimental results show that our method beats all baselines on these two tasks. On the scenario of cooperative push (N=100), shown in Figure 11(f), it is clear that DGN, QMIX, IQL, MADDPG and MADDPGS all fail to converge to good rewards after 1.5 × 10^6 environmental steps. In contrast, PIC and the proposed IP method obtain much better rewards than these baselines. Limited by the computational resources, we only show the long-term performance of the best two methods. Figure 11(f) shows that IP is slightly better than PIC in this setting.
G.1 POLICY INTERPRETATION
Explicitly analyzing the policy learned by a deep multi-agent reinforcement learning algorithm is a challenging task, especially for large-scale problems. We follow similar ideas from (Zheng et al., 2019) and analyze the learned policy on CityFlow in the following way: we select the same period of environmental steps within [210000, 1600000] and group these steps into 69 intervals (each interval contains about 20000 steps). We compute the ratio of vehicle volume on each movement and the sampled action volume from the learned policy (each movement can be assigned to one action according to the internal function in CityFlow). We define the ratio of vehicle volume over all movements as the vehicle volume distribution and the ratio of the sampled action volume from the learned policy over all movements as the sampled action distribution. It is expected that a good MARL algorithm holds the property that these two distributions are very similar over a period of time. Figure 12 reports their KL divergence by intervals. It is clear that the proposed intention propagation method (IP) obtains the lowest KL divergence (much better than the state-of-the-art baselines). Because the KL divergence is not a symmetric metric, we also calculate the Euclidean distances: the distance of our method is 0.0271, while DGN is 0.0938 and PIC is 0.0933.
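For completeness, a minimal sketch of the KL computation between the two empirical distributions described above; the smoothing constant and the toy counts are our own assumptions.

```python
import numpy as np

def kl_divergence(p_counts, q_counts, eps=1e-8):
    """KL(p || q) between two empirical distributions given as raw counts
    over the same set of movements; eps avoids division by zero."""
    p = np.asarray(p_counts, dtype=float) + eps
    q = np.asarray(q_counts, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Example: vehicle volume vs. sampled action volume over 8 movements.
vehicle = [120, 80, 60, 40, 90, 70, 30, 10]
actions = [115, 85, 55, 45, 95, 65, 25, 15]
print(kl_divergence(vehicle, actions))
```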
H HYPERPARAMETERS
The parameters of the environments. For the max episode length, we follow settings similar to those in the baselines (Lowe et al., 2017). Particularly, we set it to 25 for MPE and to 100 for CityFlow. For MAgent, we find that setting the max episode length to 25 is better than 100. All methods share the same setting.
We list the range of hyperparameters that we tune in all baselines and intention propagation. γ: {0.95, 0.98, 0.99, 0.999}; learning rate: {1, 5, 10, 100} × 1e-4; activation function: {relu, gelu, tanh}; batch size: {128, 256, 512, 1024}; gradient steps: {1, 2, 4, 8}; number of hidden units in the MLP: {32, 64, 128, 256, 512}; number of layers in the MLP: {1, 2, 3} in all experiments. In QMIX, the GRU hidden units are {64, 128}; a fully connected layer is placed before and after the GRU; the hypernetwork and the mixing network are both single-layer networks (64 hidden units with ReLU activation, following the QMIX paper). The parameters of intention propagation are reported in Table 2.
I DERIVATION
I.1 PROOF OF PROPOSITION 1
We prove the result by induction using the backward view. To see this, plug r(s^t, a^t) = \sum_{i=1}^N r_i(s^t, a^t_i, a^t_{N_i}) into the distribution of the optimal policy defined in section 3:
\[
p(\tau) = \Big[p(s^0)\prod_{t=0}^{T} p(s^{t+1}|s^t,a^t)\Big]\exp\Big(\sum_{t=0}^{T}\sum_{i=1}^{N} r_i(s^t,a^t_i,a^t_{N_i})\Big).
\]
Recall that the goal is to find the best approximation of π(a^t|s^t) such that the trajectory distribution p̂(τ) induced by this policy matches the optimal trajectory probability p(τ). Thus we minimize the KL divergence between them, \min_\pi D_{KL}(\hat p(\tau)\|p(\tau)), where \hat p(\tau) = p(s^0)\prod_{t=0}^T p(s^{t+1}|s^t,a^t)\,\pi(a^t|s^t). We can optimize w.r.t. π(a^t|s^t) as in (Levine, 2018) and obtain a backward recursion for the policy π*(a^t|s^t) (see equation 13 in I.2):
\[
\pi^*(a^t|s^t) = \frac{1}{Z}\exp\Big(\mathbb{E}_{p(s^{t+1:T},a^{t+1:T}|s^t,a^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \sum_{t'=t+1}^{T}\log\pi(a^{t'}|s^{t'})\Big]\Big). \tag{7}
\]
Using the result of equation 7, when t = T the optimal policy is
\[
\pi^*(a^T|s^T) = \frac{1}{Z}\exp\Big(\sum_{i=1}^{N} r_i(s^T,a^T_i,a^T_{N_i})\Big).
\]
Obviously, it satisfies the form \pi^*(a^T|s^T) = \frac{1}{Z}\exp\big(\sum_{i=1}^{N}\psi_i(s^T,a^T_i,a^T_{N_i})\big).
Now suppose that from step t + 1 to T we have
\[
\pi^*(a^{t'}|s^{t'}) = \frac{1}{Z}\exp\Big(\sum_{i=1}^{N}\psi_i(s^{t'},a^{t'}_i,a^{t'}_{N_i})\Big) \tag{8}
\]
for t' = t + 1, ..., T.
Recall that we have the result
\[
\pi^*(a^t|s^t) = \frac{1}{Z}\exp\Big(\mathbb{E}_{p(s^{t+1:T},a^{t+1:T}|s^t,a^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \sum_{t'=t+1}^{T}\log\pi^*(a^{t'}|s^{t'})\Big]\Big). \tag{9}
\]
Now plug equation 8 into equation 9; we have
\[
\pi^*(a^t|s^t) = \frac{1}{Z}\exp\Big(\mathbb{E}_{p(s^{t+1:T},a^{t+1:T}|s^t,a^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \sum_{t'=t+1}^{T}\sum_{i=1}^{N}\psi_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) + C\Big]\Big), \tag{10}
\]
where C is some constant related to the normalization term. Thus, we define a new term
\[
\tilde\psi_i(s^t,a^t_i,a^t_{N_i}) = \mathbb{E}_{p(s^{t+1:T},a^{t+1:T}|s^t,a^t)}\Big[\sum_{t'=t}^{T} r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \sum_{t'=t+1}^{T}\psi_i(s^{t'},a^{t'}_i,a^{t'}_{N_i})\Big]. \tag{11}
\]
Then, absorbing the constant C into the normalization term, π*(a^t|s^t) obviously satisfies the required form, which gives the result.
I.2 DERIVATION OF THE ALGORITHM
We start the derivation with the minimization of the KL divergence KL(p̂(τ)||p(τ)), where
\[
p(\tau) = \Big[p(s^0)\prod_{t=0}^{T} p(s^{t+1}|s^t,a^t)\Big]\exp\Big(\sum_{t=0}^{T}\sum_{i=1}^{N} r_i(s^t,a^t_i,a^t_{N_i})\Big),\qquad
\hat p(\tau) = p(s^0)\prod_{t=0}^{T} p(s^{t+1}|s^t,a^t)\,\pi(a^t|s^t).
\]
\[
KL(\hat p(\tau)\|p(\tau)) = -\mathbb{E}_{\tau\sim\hat p(\tau)}\sum_{t=0}^{T}\Big(\sum_{i=1}^{N} r_i(s^t,a^t_i,a^t_{N_i}) - \log\pi(a^t|s^t)\Big)
= -\sum_{\tau}\Big[p(s^0)\prod_{t=0}^{T} p(s^{t+1}|s^t,a^t)\,\pi(a^t|s^t)\Big]\sum_{t=0}^{T}\Big(\sum_{i=1}^{N} r_i(s^t,a^t_i,a^t_{N_i}) - \log\pi(a^t|s^t)\Big). \tag{12}
\]
Now we optimize the KL divergence w.r.t. π(·|s^t). Considering the constraint \sum_j \pi(j|s^t) = 1, we introduce a Lagrange multiplier \lambda(\sum_{j=1}^{|A|}\pi(j|s^t) - 1) (rigorously speaking, we also need the constraint that each element of π is non-negative, but we will see later that the optimal value satisfies this constraint automatically). Taking the gradient of KL(\hat p(\tau)\|p(\tau)) + \lambda(\sum_{j=1}^{|A|}\pi(j|s^t) - 1) w.r.t. π(·|s^t) and setting it to zero, we obtain
\[
\log\pi^*(a^t|s^t) = \mathbb{E}_{p(s^{t+1:T},a^{t+1:T}|s^t,a^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \sum_{t'=t+1}^{T}\log\pi(a^{t'}|s^{t'})\Big] - 1 + \lambda.
\]
Therefore
\[
\pi^*(a^t|s^t) \propto \exp\Big(\mathbb{E}_{p(s^{t+1:T},a^{t+1:T}|s^t,a^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \sum_{t'=t+1}^{T}\log\pi(a^{t'}|s^{t'})\Big]\Big).
\]
Since \sum_j \pi(j|s^t) = 1, we have
\[
\pi^*(a^t|s^t) = \frac{1}{Z}\exp\Big(\mathbb{E}_{p(s^{t+1:T},a^{t+1:T}|s^t,a^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \sum_{t'=t+1}^{T}\log\pi(a^{t'}|s^{t'})\Big]\Big). \tag{13}
\]
For convenience, we define the soft V function and Q function as in (Levine, 2018), and will show how to decompose them into V_i and Q_i later:
\[
V(s^{t+1}) := \mathbb{E}\Big[\sum_{t'=t+1}^{T}\sum_{i=1}^{N} r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \log\pi(a^{t'}|s^{t'})\,\Big|\,s^{t+1}\Big],\qquad
Q(s^t,a^t) := \sum_{i=1}^{N} r_i(s^t,a^t_i,a^t_{N_i}) + \mathbb{E}_{p(s^{t+1}|s^t,a^t)}[V(s^{t+1})]. \tag{14}
\]
Thus V(s^t) = \mathbb{E}_\pi[Q(s^t,a^t) - \log\pi(a^t|s^t)]. Plugging the definition of Q into equation 13, the optimal policy is \pi^*(a^t|s^t) = \frac{\exp Q(s^t,a^t)}{\int \exp Q(s^t,a^t)\,da^t}.
Recall that in section 4.1 we approximated the optimal joint policy by the mean-field approximation \prod_{i=1}^N q_i(a_i|s). We now plug this into the definition in equation 14 and consider the discount factor. Notice that it is easy to incorporate the discount factor by defining an absorbing state to which each transition has probability (1 − γ) of going. Thus we have
\[
V(s^{t+1}) := \mathbb{E}\Big[\sum_{t'=t+1}^{T}\Big(\sum_{i=1}^{N} r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \sum_{i=1}^{N}\log q_i(a^{t'}_i|s^{t'})\Big)\,\Big|\,s^{t+1}\Big],\qquad
Q(s^t,a^t) := \sum_{i=1}^{N} r_i(s^t,a^t_i,a^t_{N_i}) + \gamma\,\mathbb{E}_{p(s^{t+1}|s^t,a^t)}[V(s^{t+1})]. \tag{15}
\]
Thus we can further decompose V and Q into V_i and Q_i. We define V_i and Q_i in the following way:
\[
V_i(s^{t+1}) = \mathbb{E}\Big[\sum_{t'=t+1}^{T}\Big(r_i(s^{t'},a^{t'}_i,a^{t'}_{N_i}) - \log q_i(a^{t'}_i|s^{t'})\Big)\,\Big|\,s^{t+1}\Big],\qquad
Q_i(s^t,a^t_i,a^t_{N_i}) = r_i(s^t,a^t_i,a^t_{N_i}) + \gamma\,\mathbb{E}_{p(s^{t+1}|s^t,a^t)}[V_i(s^{t+1})].
\]
Obviously, we have V = \sum_{i=1}^N V_i and Q = \sum_{i=1}^N Q_i.
For V_i, according to our definition, we obtain
\[
V_i(s^t) = \mathbb{E}_{a^t\sim\prod_{i=1}^N q_i}\big[r_i(s^t,a^t_i,a^t_{N_i}) - \log q_i(a^t_i|s^t) + \mathbb{E}_{p(s^{t+1}|s^t,a^t)}V_i(s^{t+1})\big]. \tag{16}
\]
Now we relate it to Q_i and have
\[
V_i(s^t) = \mathbb{E}_{a^t\sim\prod_{i=1}^N q_i}\big[Q_i(s^t,a^t_i,a^t_{N_i}) - \log q_i(a^t_i|s^t)\big]
= \mathbb{E}_{(a_i,a_{N_i})\sim(q_i,q_{N_i})}Q_i(s^t,a^t_i,a^t_{N_i}) - \mathbb{E}_{a_i\sim q_i}\log q_i(a^t_i|s^t).
\]
This suggests that we should construct the loss functions on V_i and Q_i in the following way. In the following, we use a parametric family (e.g., a neural network) characterized by η_i and κ_i to approximate V_i and Q_i, respectively:
\[
J(\eta_i) = \mathbb{E}_{s^t\sim\mathcal{D}}\Big[\tfrac{1}{2}\Big(V_{\eta_i}(s^t) - \mathbb{E}_{(a_i,a_{N_i})\sim(q_i,q_{N_i})}\big[Q_{\kappa_i}(s^t,a^t_i,a^t_{N_i}) - \log q_i(a^t_i|s^t)\big]\Big)^2\Big],
\]
\[
J(\kappa_i) = \mathbb{E}_{(s^t,a^t_i,a^t_{N_i})\sim\mathcal{D}}\Big[\tfrac{1}{2}\Big(Q_{\kappa_i}(s^t,a^t_i,a^t_{N_i}) - \hat Q_i(s^t,a^t_i,a^t_{N_i})\Big)^2\Big], \tag{17}
\]
where \hat Q_i(s^t,a^t_i,a^t_{N_i}) = r_i + \gamma\,\mathbb{E}_{s^{t+1}\sim p(s^{t+1}|s^t,a^t)}[V_{\eta_i}(s^{t+1})].
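A minimal PyTorch sketch of the two per-agent losses in equation 17 is given below, assuming the required quantities have already been evaluated as flat tensors; the names, shapes, and the omission of the twin-Q trick mentioned in the main paper are our simplifying assumptions.

```python
import torch

def agent_losses(v_pred, q_pred, q_fresh, log_q_fresh, r_i, v_target_next, gamma=0.99):
    """Sketch of J(eta_i) and J(kappa_i) for one agent (equation 17).

    v_pred        : V_{eta_i}(s)               for states sampled from the buffer D
    q_pred        : Q_{kappa_i}(s, a_i, a_Ni)  for buffer actions
    q_fresh       : Q_{kappa_i}(s, a_i, a_Ni)  for actions freshly sampled from q_i, q_Ni
    log_q_fresh   : log q_i(a_i | s)           for the same fresh actions
    r_i           : per-agent reward from the buffer
    v_target_next : V_{bar eta_i}(s')          from the target value network
    All tensors have shape (batch,).
    """
    # J(kappa_i): soft Bellman regression for the local Q function.
    q_hat = (r_i + gamma * v_target_next).detach()
    j_kappa = 0.5 * (q_pred - q_hat).pow(2).mean()

    # J(eta_i): regress V_i towards E[Q_i - log q_i] under the current mean-field policy.
    v_hat = (q_fresh - log_q_fresh).detach()
    j_eta = 0.5 * (v_pred - v_hat).pow(2).mean()
    return j_eta, j_kappa
```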
Now we are ready to derive the update rule of the policy, i.e., the intention propagation network.
Recall that the intention propagation network is in fact a mean-field approximation of the joint policy:
\[
\min_{p_1,p_2,\dots,p_N} KL\Big(\prod_{i=1}^{N} p_i(a_i|s)\,\Big\|\,\pi^*(a|s)\Big).
\]
This is an optimization over the functions p_i rather than over particular parameters. We have shown that after M iterations of intention propagation, the output is the nearly optimal solution q_i.
In the following, we demonstrate how to update the parameter θ of the propagation network Λ_θ(a^t|s^t) when we use a neural network to approximate it. Again we minimize the KL divergence
\[
\min_\theta\ \mathbb{E}_{s^t}\, KL\Big(\prod_{i=1}^{N} q_{i,\theta}(a^t_i|s^t)\,\Big\|\,\pi^*(a^t|s^t)\Big).
\]
Plugging \pi^*(a^t|s^t) = \frac{\exp Q(s^t,a^t)}{\int \exp Q(s^t,a^t)\,da^t} into the KL divergence, it is easy to see that, by the definition of the KL divergence, this is equivalent to the following optimization problem:
\[
\max_\theta\ \mathbb{E}_{s^t}\Big[\mathbb{E}_{a^t\sim\prod q_{i,\theta}(a^t_i|s^t)}\Big[\sum_{i=1}^{N} Q_{\kappa_i}(s^t,a^t_i,a^t_{N_i}) - \sum_{i=1}^{N}\log q_{i,\theta}(a^t_i|s^t)\Big]\Big].
\]
Thus we sample states from the replay buffer and obtain the policy loss
\[
J(\theta) = \mathbb{E}_{s^t\sim\mathcal{D},\,a^t\sim\prod_{i=1}^{N} q_{i,\theta}(a^t_i|s^t)}\Big[\sum_{i=1}^{N}\log q_{i,\theta}(a^t_i|s^t) - \sum_{i=1}^{N} Q_{\kappa_i}(s^t,a^t_i,a^t_{N_i})\Big].
\] | 1. What is the focus and contribution of the paper regarding multi-agent RL algorithms?
2. What are the strengths of the proposed approach, particularly in its scalability and use of communication in a structured environment?
3. What are the weaknesses or areas for improvement regarding the presentation and experimental design?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for additional analyses or comparisons that could enhance the paper's findings? | Review | Review
The paper proposes a scalable approach, intention propagation, to learn a multi-agent RL algorithm using communication in a structured environment. An agent encodes its policy and sends the "intention" to the neighboring agents, with the assumption that only the closest agents would be affected by it. The approach uses techniques from the embedded probabilistic inference literature based on mean-field variational inference. The joint policy is estimated via the mean-field approximation obtained by propagating intents in an iterative manner, so this approach avoids the need to factorize the value function explicitly.
The related works section does a nice survey of related approaches and the paper shows conceptual differences to an earlier proposed MFG that has stricter requirements.
The experiments shown cover many important baselines that are shown to be good baselines in respective environments. IP outperforms all the baselines in three competitive benchmarks.
I have a few questions about the clarity of the presentation.
How important is the graph structure defined by k-means? A comparison with a randomized graph and ablation with different reset time (n) intervals would be interesting.
In the experiments, it would be interesting to check if intention only helps the nearby agents. How does adding/removing agents to the set of neighbors affect learning? A comparison with a fully connected graph should be sufficient. The plot in the Appendix shows results on the CityFlow task which has a very structured observation, with the set of immediate neighbors always of size 4. Doing such an analysis on a more dynamic environment like MPE would be helpful.
What is the computational cost of a densely connected graph as compared to method without using a fixed topology?
Fig 4c does not show plots until convergence.
Overall, I feel some restructuring of the paper, explaining some missing portions of the algorithm, would benefit the reader, for example by taking the environment images out of the main text.
ICLR | Title
Intention Propagation for Multi-agent Reinforcement Learning
Abstract
A hallmark of an AI agent is to mimic human beings to understand and interact with others. In this paper, we propose a collaborative multi-agent reinforcement learning algorithm to learn a joint policy through the interactions over agents. To make a joint decision over the group, each agent makes an initial decision and tells its policy to its neighbors. Then each agent modifies its own policy properly based on received messages and spreads out its plan. As this intention propagation procedure goes on, we prove that it converges to a mean-field approximation of the joint policy with the framework of neural embedded probabilistic inference. We evaluate our algorithm on several large scale challenging tasks and demonstrate that it outperforms previous state-of-the-arts.
1 INTRODUCTION
Collaborative multi-agent reinforcement learning is an important sub-field of the multi-agent reinforcement learning (MARL), where the agents learn to coordinate to achieve joint success. It has wide applications in traffic control (Kuyer et al., 2008), autonomous driving (Shalev-Shwartz et al., 2016) and smart grid (Yang et al., 2018). To learn a coordination, the interactions between agents are indispensable. For instance, humans can reason about other’s behaviors or know other peoples’ intentions through communication and then determine an effective coordination plan. However, how to design a mechanism of such interaction in a principled way and at the same time solve the large scale real-world applications is still a challenging problem.
Recently, there is a surge of interest in solving the collaborative MARL problem (Foerster et al., 2018; Qu et al., 2019; Lowe et al., 2017). Among them, joint policy approaches have demonstrated their superiority (Rashid et al., 2018; Sunehag et al., 2018; Oliehoek et al., 2016). A straightforward approach is to replace the action in the single-agent reinforcement learning by the joint action a = (a1, a2, ..., aN ), while it obviously suffers from the issue of the exponentially large action space. Thus several approaches have been proposed to factorize the joint action space to mitigate such issue, which can be roughly grouped into two categories:
• Factorization on policy. This approach explicitly assumes that π(a|s) := ∏N i=1 πi(ai|s), i.e.,
policies are independent (Foerster et al., 2018; Zhang et al., 2018). To mitigate for the instability issue caused by the independent learner, it generally needs a centralized critic. • Factorization on value function. This approach has a similar spirit but factorizes the joint value function into several utility functions, each just involving the actions of one agent (Rashid et al., 2018; Sunehag et al., 2018).
However, these two approaches lack of the interactions between agents, since in their algorithms agent i does not care about the plan of agent j. Indeed, they may suffer from a phenomenon called relative over-generalization in game theory observed by Wei & Luke (2016); Castellini et al. (2019); Palmer et al. (2018). Approaches based on the coordinate graph would effectively prevent such cases, where the value function is factorized as a summation of utility function on pairwise or local joint action (Guestrin et al., 2002; Böhmer et al., 2020). However, they only can be applied in discrete action, small scale game.
Furthermore, despite the empirical success of the aforementioned work in certain scenarios, it still lacks theoretical insight. In this work, we only make a simple yet realistic assumption: the reward function ri of each agent i just depends on its individual action and the actions of its neighbors (and
state), i.e., ri(s,a) = ri(s, ai, aNi), (1)
where we use Ni to denote the neighbors of agent i, s to denote the global state. It says the goal or decision of agent is explicitly influenced by a small subset Ni of other agents. Note that such an assumption is reasonable in lots of real scenarios. For instance,
• The traffic light at an intersection makes the decision on the phase change mainly relying on the traffic flow around it and the policies of its neighboring traffic lights.
• The main goal of a defender in a soccer game is to tackle the opponent's attacker, while he rarely needs to pay attention to the opponent goalkeeper's strategy.
Based on the assumption in equation 1, we propose a principled multi-agent reinforcement learning algorithm in the framework of probabilistic inference, where the objective is to maximize the long-term reward of the group, i.e., \sum_{t=0}^{\infty}\sum_{i=1}^{N}\gamma^t r_i^t (see details in section 4).
Note since each agent’s reward depends on its neighbor, we still need a joint policy to maximize the global reward through interactions. In this paper, we derive an iterative procedure for such interaction to learn the joint policy in collaborative MARL and name it intention propagation. Particularly,
• In the first round, each agent i makes an independent decision and spreads out its plan µ̃_i (we name it the intention) to its neighbors.
• In the second round, agent i changes its initial intention properly based on its neighbors' intentions µ̃_j, j ∈ N_i, and propagates its intention µ̃_i again.
• In the third round, it changes the decision made in the second round with a similar argument.
• As this procedure goes on, we show that the final output of the agents' policies converges to the mean-field approximation (the variational inference method from the probabilistic graphical model literature (Bishop, 2006)) of the joint policy.
In addition, this joint policy has the form of a Markov Random Field induced by the locality of the reward function (proposition 1). Therefore, such a procedure is computationally efficient when the underlying graph is sparse, since in each round each agent just needs to care about what its neighbors intend to do. Remark: (1) Our work is not related to the mean-field game (MFG) (Yang et al., 2018). The goal of the MFG is to find the Nash equilibrium, while our work aims at the optimal joint policy in the collaborative game. Furthermore, MFG generally assumes agents are identical and interchangeable. When the number of agents goes to infinity, MFG can view the state of other agents as a population state distribution. In our problem, we do not have such assumptions.
(2) Our analysis is not limited to the mean-field approximation. When we change the message passing structure of intention propagation, we can show that it converges to other approximations of the joint policy, e.g., loopy belief propagation in variational inference (Yedidia et al., 2001) (see Appendix B.2).
Contributions: (1) We propose a principled method named intention propagation to solve the joint-policy collaborative MARL problem; (2) our method is computationally efficient and can scale up to one thousand agents, thus meeting the requirements of real applications; (3) empirically, it outperforms state-of-the-art baselines by a wide margin when the number of agents is large; (4) our work builds a bridge between MARL and neural embedded probabilistic inference, which could lead to new algorithms beyond intention propagation.
Notation: s_i^t and a_i^t represent the state and action of agent i at time step t. The neighbors of agent i are represented as N_i. We denote X as a random variable with domain X and refer to instantiations of X by the lower-case character x. We denote a density on X by p(x) and denote the space of all such densities by P.
2 RELATED WORK
We first discuss the work on factorized approaches to the joint policy. COMA designs a MARL algorithm based on the actor-critic framework with independent actors π_i(a_i|s), where the joint policy is factorized as π(a|s) = \prod_{i=1}^N π_i(a_i|s) (Foerster et al., 2018). MADDPG considers MARL with cooperative or competitive settings, where it creates a critic for each agent (Lowe et al., 2017). Other similar works include (de Witt et al., 2019; Wei et al., 2018). Another way is to factorize the value function into several utility functions. Sunehag et al. (2018) assume that the overall Q function can be factorized as Q(s, a_1, a_2, ..., a_N) = \sum_{i=1}^N Q_i(s_i, a_i). QMIX extends this work to include a richer class of functions, where it assumes the overall Q function is a monotonic function w.r.t. each Q_i(s_i, a_i) (Rashid et al., 2018). Similarly, Son et al. (2019) further relax the structural constraint on the joint value function. However, these factorized methods suffer from the relative overgeneralization issue (Castellini et al., 2019; Palmer et al., 2018). Generally speaking, it pushes the agents to underestimate a certain action because of the low rewards they receive, while they could get a higher one by perfectly coordinating.
A middle ground between the (fully) joint policy and the factorized policy is the coordination graph (Guestrin et al., 2002), where the value function is factorized as a summation of utility functions on pairwise actions. Böhmer et al. (2020); Castellini et al. (2019) combine deep learning techniques with the coordination graph. It addresses the issue of relative overgeneralization, but still has two limitations, especially in large-scale MARL problems. (1) The max-sum algorithm can only be implemented in discrete action spaces, since it needs a max-sum operation over the actions of the Q function. (2) Even in the discrete-action case, each step of the Q learning has to do several loops of max-sum operations over the whole graph if there is a cycle in the graph. Our algorithm can handle both discrete and continuous action spaces and alleviates the scalability issue by designing an intention propagation network.
Another category of MARL considers the communication among agents. The attention mechanism is used to decide when and with whom to communicate (Das et al., 2018). Foerster et al. (2016) propose an end-to-end method to learn communication protocols. In (Liu et al., 2019; Chu et al., 2020), each agent sends its action information to its neighbors. In addition, Chu et al. (2020) require a strong assumption that the MDP has the spatial-temporal Markov property. However, they utilize neighbors' action information in a heuristic way, and thus it is unclear what the agents are learning (e.g., do they learn the optimal joint policy to maximize the group reward?). Jiang et al. (2020) propose DGN, which uses a GNN to spread the state embedding information to neighbors. However, each agent still uses independent Q learning to learn its policy and neglects other agents' plans. In contrast, we propose a principled algorithm where each agent makes decisions considering other agents' plans. Such a procedure can be parameterized by a GNN and other neural networks (see section 4.1 and appendix B.2). We prove its convergence to the solution of variational inference methods.
3 BACKGROUNDS
Probabilistic Reinforcement Learning: Probabilistic reinforcement learning (PRL) (Levine, 2018) is our building block. PRL defines the trajectory τ up to time step T as τ = [s^0, a^0, s^1, a^1, ..., s^T, a^T, s^{T+1}]. The probability distribution of the trajectory τ induced by the optimal policy is defined as
\[
p(\tau) = \Big[p(s^0)\prod_{t=0}^{T} p(s^{t+1}|s^t,a^t)\Big]\exp\Big(\sum_{t=0}^{T} r(s^t,a^t)\Big),
\]
while the probability of the trajectory τ under a policy π(a|s) is defined as \hat p(\tau) = p(s^0)\prod_{t=0}^{T} p(s^{t+1}|s^t,a^t)\,\pi(a^t|s^t). The objective is to minimize the KL divergence between \hat p(\tau) and p(\tau). It is equivalent to the maximum entropy reinforcement learning objective
\[
\max_\pi\ J(\pi) = \sum_{t=0}^{T}\mathbb{E}\big[r(s^t,a^t) + H(\pi(a^t|s^t))\big],
\]
where we omit the discount factor γ and the regularization factor α of the entropy term, since it is easy to incorporate them into the transition and reward, respectively. Notice that in this framework the max operator in the Bellman optimality equation is replaced by the softmax operator, and thus the optimal policy is a softmax function of the Q function (Haarnoja et al., 2017). Such a framework subsumes state-of-the-art algorithms such as soft actor-critic (SAC) (Haarnoja et al., 2018). In each iteration, SAC optimizes the following loss functions of Q, π, and V, respectively:
\[
\mathbb{E}_{(s^t,a^t)\sim\mathcal{D}}\big[Q(s^t,a^t) - r(s^t,a^t) - \gamma\,\mathbb{E}_{s^{t+1}\sim p}[V(s^{t+1})]\big]^2,\qquad
\mathbb{E}_{s^t\sim\mathcal{D}}\mathbb{E}_{a^t\sim\pi}\big[\log\pi(a^t|s^t) - Q(s^t,a^t)\big],
\]
\[
\mathbb{E}_{s^t\sim\mathcal{D}}\big[V(s^t) - \mathbb{E}_{a^t\sim\pi_\theta}[Q(s^t,a^t) - \log\pi(a^t|s^t)]\big]^2,
\]
where \mathcal{D} is the replay buffer.
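As a small numeric illustration of the softmax (log-sum-exp) operator that replaces the hard max in this framework, the sketch below computes the soft value and the induced Boltzmann policy for a toy set of Q values; the numbers and the temperature α are made up purely for illustration.

```python
import numpy as np

def soft_backup(q_values, alpha=1.0):
    """Soft value V(s) = alpha * log sum_a exp(Q(s,a)/alpha) and the induced
    softmax policy pi(a|s) proportional to exp(Q(s,a)/alpha)."""
    scaled = np.asarray(q_values, dtype=float) / alpha
    v = alpha * np.log(np.sum(np.exp(scaled - scaled.max()))) + alpha * scaled.max()
    pi = np.exp(scaled - scaled.max())
    pi /= pi.sum()
    return v, pi

q = [1.0, 2.0, 0.5]          # toy Q(s, a) for three discrete actions
v, pi = soft_backup(q)
print(v, pi)                  # v is a smooth upper bound of max(q); pi favors action 1
```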
Function Space Embedding of Distribution: In our work, we use the tool of embedding in a Reproducing Kernel Hilbert Space (RKHS) to design an intention propagation procedure (Smola et al., 2007). We let φ(X) be an implicit feature mapping and X be a random variable with distribution p(x). The embedding of p(x) is given by µ_X := E_X[φ(X)] = ∫ φ(x)p(x)dx, where the distribution is mapped to its expected feature map. By assuming that there exists a feature space such that the embeddings are injective, we can treat the embedding µ_X of the density p(x) as a sufficient statistic of the density, i.e., any information we need from the density is preserved in µ_X (Smola et al., 2007). Such an injectivity assumption generally holds under mild conditions (Sriperumbudur et al., 2008). This property is important since we can reformulate a functional f: P → R of p(·) using the embedding only, i.e., f(p(x)) = f̃(µ_X). It can also be generalized to the operator case. In particular, applying an operator T: P → R^d to a density can be equivalently carried out using its embedding, T ◦ p(x) = T̃ ◦ µ_X, where T̃: F → R^d is the alternative operator working on the embedding. In practice, µ_X, f̃ and T̃ have complicated dependence on φ. As such, we approximate them by neural networks, which is known as the neural embedding approach of distributions (Dai et al., 2016).
4 OUR METHOD
In this section, we present our method, intention propagation, for collaborative multi-agent reinforcement learning. To begin with, we formally define the problem as a networked MDP. The network is characterized by a graph G = (V, E), where each vertex i ∈ V represents an agent and the edge ij ∈ E means the communication link between agents i and j. We say i, j are neighbors if they are connected by this edge. The corresponding networked MDP is characterized by a tuple ({S_i}_{i=1}^N, {A_i}_{i=1}^N, p, {r_i}_{i=1}^N, γ, G), where N is the number of agents, S_i is the local state of agent i and A_i denotes the set of actions available to agent i. We let S := \prod_{i=1}^N S_i and A := \prod_{i=1}^N A_i be the global state and joint action spaces, respectively. At time step t+1, the global state s^{t+1} ∈ S is drawn from the transition s^{t+1} ∼ p(·|s^t, a^t), conditioned on the current state s^t and the joint action a^t = (a_1^t, a_2^t, ..., a_N^t) ∈ A. Each transition yields a reward r_i^t = r_i(s^t, a^t) for agent i, and γ is the discount factor. The aim of our algorithm is to learn a joint policy π(a^t|s^t) to maximize the overall long-term reward (with an entropy term H(·|s) on the joint action a)
\[
\eta(\pi) = \mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^t\Big(\sum_{i=1}^{N} r_i^t + H(\cdot|s^t)\Big)\Big],
\]
where each agent i can only observe its own state s_i and the messages from the neighborhood communication. We denote the neighbors of agent i as N_i and further assume that the reward r_i depends on the state and the actions of itself and its neighbors, i.e., r_i(s,a) := r_i(s, a_i, a_{N_i}). Such an assumption is reasonable in many real scenarios, as discussed in the introduction. In the following, we start the derivation with the fully observed case, and discuss how to handle partial observation later. The roadmap of the following derivation: at the beginning, we prove that the optimal policy has a Markov Random Field (MRF) form, which reduces the exponentially large search space to a polynomial one. However, implementing an MRF policy is not trivial in the RL setting (e.g., sampling an action from the policy). Thus we resort to variational inference methods (we focus on the mean-field approximation in the main paper and leave other methods to the appendix), but they introduce complicated computations. Finally, we apply the kernel embedding method introduced in section 3 to solve this problem and learn the kernel embedding with neural networks. We also discuss how to handle the partially observable setting.
4.1 REDUCE POLICY SEARCHING SPACE
Recall that our aim is to maximize the long-term reward with the entropy term. Therefore, we follow the definition of the optimal policy in probabilistic reinforcement learning (Levine, 2018) and obtain proposition 1. It says that under the assumption r_i(s, a) = r_i(s, a_i, a_{N_i}), the optimal policy is in the form of a Markov Random Field (MRF). We prove the following proposition in appendix I.1.
Proposition 1 The optimal policy has the form \pi^*(a^t|s^t) = \frac{1}{Z}\exp\big(\sum_{i=1}^{N}\psi_i(s^t, a_i^t, a_{N_i}^t)\big), where Z is the normalization term.
This proposition is important since it suggests that we should construct the policy π(a^t|s^t) with this form, e.g., a parametric family, to contain the optimal policy. If agent i and its neighbors compose a clique, the policy reduces to an MRF and ψ is the potential function. One common example is that the reward is a function of pairwise actions, i.e., r(s,a) = \sum_{i\in V} r(s, a_i) + \sum_{(i,j)\in E} r(s, a_i, a_j). Then the policy has the form
\[
\pi(a|s) = \frac{1}{Z}\exp\Big(\sum_{i\in V}\tilde\psi_i(s,a_i) + \sum_{(i,j)\in E}\tilde\psi_{i,j}(s,a_i,a_j)\Big),
\]
which is a pairwise MRF. For instance, in traffic light control, we can define a 2-D grid network and a pairwise reward function. The MRF formulation of the policy effectively reduces the policy space compared with the exponentially large one of the fully connected graph.
A straightforward way to leverage this observation is to define \pi_\theta(a^t|s^t) as an MRF and then apply a policy gradient algorithm, e.g., as in SAC: \nabla_\theta\,\mathbb{E}_{s^t\sim\mathcal{D}}\mathbb{E}_{a^t\sim\pi_\theta}[\log\pi_\theta(a^t|s^t) - Q_\kappa(s^t,a^t)]. However, it is still very hard to sample a joint action a^t from \pi_\theta(a^t|s^t). In the next section, we resort to embedding to alleviate this problem.
Recall that the remaining problem is how to sample the joint action from an MRF policy. Classical ways include Markov Chain Monte Carlo methods and variational inference. The former provides the guarantee of producing exact samples from the target density but is computationally intensive; therefore it is not applicable in the multi-agent RL setting, since we need to sample an action in each interaction with the environment. As such, we advocate the second approach. Here we use the mean-field approximation for simplicity of presentation and defer more variational inference methods, e.g., loopy belief propagation, to Appendix B.2. We use an intention propagation network with the embedding of the distribution to represent the update rule of the mean-field approximation.
Mean field approximation. We hope to approximate π*(a|s) by the mean-field variational family p_i:
\[
\min_{(p_1,p_2,\dots,p_N)} KL\Big(\prod_{i=1}^{N} p_i(a_i|s)\,\Big\|\,\pi^*(a|s)\Big),
\]
where we omit the superscript t to simplify the notation. We denote the optimal solution of the above problem as q_i. Using coordinate ascent variational inference, the optimal solution q_i should satisfy the following fixed-point equation (Bishop, 2006). Since the objective function is (generally) non-convex, such an update converges to a local optimum (Blei et al., 2017).
\[
q_i(a_i|s) \propto \exp\int \prod_{j\neq i} q_j(a_j|s)\,\log\pi^*(a|s)\,da. \tag{2}
\]
For simplicity of the presentation, in the following discussion we assume that the policy is a pairwise MRF, but the methodology applies to more general cases with more involved expressions. Particularly, we assume \pi^*(a|s) = \frac{1}{Z}\exp\big(\sum_{i\in V}\psi_i(s,a_i) + \sum_{(i,j)\in E}\psi_{ij}(s,a_i,a_j)\big). We plug this into equation 2 and obtain the following fixed-point equation:
\[
\log q_i(a_i|s) = c_i + \psi_i(s,a_i) + \sum_{j\in N_i}\int q_j(a_j|s)\,\psi_{ij}(s,a_i,a_j)\,da_j, \tag{3}
\]
where c_i is some constant that does not depend on a_i.
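To illustrate the fixed-point update in equation 3, here is a tiny numeric sketch of mean-field coordinate ascent for discrete actions on a three-agent chain; the random potentials and the number of sweeps are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A, N_agents = 2, 3                       # binary actions, 3 agents on a chain 0-1-2
neighbors = {0: [1], 1: [0, 2], 2: [1]}
psi_node = rng.normal(size=(N_agents, A))             # psi_i(s, a_i) for a fixed state s
psi_edge = {e: rng.normal(size=(A, A)) for e in [(0, 1), (1, 2)]}

def edge_pot(i, j):
    e = (min(i, j), max(i, j))
    return psi_edge[e] if i < j else psi_edge[e].T     # psi_ij(s, a_i, a_j)

q = np.full((N_agents, A), 1.0 / A)                    # initialize each q_i uniformly
for _ in range(50):                                    # coordinate ascent sweeps
    for i in range(N_agents):
        log_qi = psi_node[i].copy()
        for j in neighbors[i]:
            log_qi += edge_pot(i, j) @ q[j]            # sum_j E_{q_j}[psi_ij(a_i, a_j)]
        q[i] = np.exp(log_qi - log_qi.max())
        q[i] /= q[i].sum()
print(q)                                               # mean-field marginals q_i(a_i | s)
```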
We can understand this mean-field update rule from the perspective of intention propagation. Equation 3 basically says that each agent cannot make its decision independently; instead its policy q_i should depend on the policies of others, particularly the neighbors in the equation. Clearly, if we can construct the intention propagation corresponding to equation 3, the final policy obtained from intention propagation will converge to the mean-field approximation of the joint policy. However, we cannot directly apply this update in our algorithm, since it includes a complicated integral. To this end, in the next section we resort to the embedding of the distribution q_i (Smola et al., 2007), which maps the distributions into a reproducing kernel Hilbert space.
Embed the update rule. Observe that the fixed-point formulation in equation 3 says that q_i(a_i|s) is a functional of the neighborhood marginal distributions {q_j(a_j|s)}_{j∈N_i}, i.e., q_i(a_i|s) = f(a_i, s, {q_j}_{j∈N_i}). Denote the d-dimensional embedding of q_j(a_j|s) by µ̃_j = ∫ q_j(a_j|s)φ(a_j|s)da_j. Notice that the form of the feature φ is not fixed at the moment and will be learned implicitly by the neural network. Following the assumption in Section 3 that there exists a feature space such that the embeddings are injective, we can replace the distribution by its embedding and obtain the fixed-point formulation
\[
q_i(a_i|s) = \tilde f(a_i, s, \{\tilde\mu_j\}_{j\in N_i}). \tag{4}
\]
For more theoretical guarantee on the kernel embedding, e.g., convergence rate on the empirical mean of the kernel embedding, please refer to (Smola et al., 2007). Roughly speaking, once there
are enough data, we can believe the learned kernel embedding is close enough to the true kernel embedding. Therefore the update of equation 4 and equation 5 in the following would converge to the fixed point of equation 2. Remind that in section 3 at both sides we can do integration w.r.t. the feature map φ, which yields, µ̃i = ∫ qi(ai|s)φ(ai|s)dai = ∫ f̃(ai, s, {µ̃j}j∈Ni)φ(ai|s)dai. Thus we can rewrite it as a new operator on the embedding, which induces a fixed point equation again µ̃i = T̃ ◦ (s, {µ̃j}j∈Ni). In practice, we do this fix-point update with M iterations.
$$\tilde\mu_i^{m} \leftarrow \tilde T\circ\big(s, \{\tilde\mu_j^{\,m-1}\}_{j\in N_i}\big), \qquad m = 1,\ldots,M. \qquad (5)$$
Finally, we output the distribution qi with $q_i(a_i|s) = \tilde f(a_i, s, \{\tilde\mu_j^{M}\}_{j\in N_i})$. In the next section, we show how to represent these quantities by neural networks.
Parameterization by Neural Networks. In general, f̃ and T̃ have a complicated dependence on ψ and φ. Instead of learning such dependence, we directly approximate f̃ and T̃ by neural networks. For instance, we can represent the operator T̃ in equation 5 by $\tilde\mu_i = \sigma(W_1 s + W_2\sum_{j\in N_i}\tilde\mu_j)$, where σ is a nonlinear activation function and W1, W2 are matrices with d rows. Interestingly, this is exactly the message-passing form of a Graph Neural Network (GNN) (Hamilton et al., 2017). Thus we can use an M-hop (layer) GNN to represent the fixed-point update in equation 5. If the action space is discrete, the output qi(ai|s) is a softmax function; in this case f̃ is a fully connected layer with a softmax output. When the action space is continuous, we output a Gaussian distribution with the reparametrization trick (Kingma & Welling, 2019). We denote this intention propagation procedure as the intention propagation network Λθ(a|s) with parameter θ in Figure 1(b). Figure 1(a) illustrates the graph and the message-passing procedure: agent 1 receives the embeddings (intentions) $\tilde\mu_2^{m-1}, \tilde\mu_5^{m-1}, \tilde\mu_6^{m-1}$ from its neighbors, updates its own embedding with the operator T̃, and spreads its new embedding $\tilde\mu_1^{m}$ at the next iteration. Figure 1(b) gives the details of the GNN parameterization. Here we use agent 1 as an example and, to ease the exposition, assume agent 1 has only one neighbor, agent 2. Each agent observes its own state si. After an MLP and a softmax layer (we do not sample actions here, but just use the probabilities of the actions), we obtain an embedding $\tilde\mu_i^{0}$, which is the initial distribution of the policy. Then agent 1 receives the embedding $\tilde\mu_2^{0}$ of its neighbor (agent 2). After a GNN layer that combines the information, e.g., $\tilde\mu_1^{1} = \mathrm{ReLU}[W_1(s_1+s_2) + W_2(\tilde\mu_1^{0}+\tilde\mu_2^{0})]$ (W1, W2 are shared across all agents, as in a GNN), we obtain the new embedding $\tilde\mu_1^{1}$ of agent 1. Notice that we also pass messages on states, since in practice the global state is not available. The second layer proceeds analogously. We defer a detailed discussion and extensions to other neural networks to Appendix B due to the space constraint.
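The following is a minimal sketch of one possible discrete-action instantiation of this parameterization in PyTorch. The module name, layer sizes, and the exact way states and intentions are aggregated are illustrative choices following the ReLU[W1(s1+s2)+W2(µ̃01+µ̃02)] form above; the architecture used in the experiments may differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntentionPropagation(nn.Module):
    """Sketch of an M-hop intention propagation network for discrete actions."""
    def __init__(self, state_dim, n_actions, embed_dim=128, hops=2):
        super().__init__()
        self.init_policy = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU(),
                                         nn.Linear(embed_dim, n_actions))
        self.W1 = nn.ModuleList([nn.Linear(state_dim, embed_dim) for _ in range(hops)])
        self.W2 = nn.ModuleList([nn.Linear(n_actions if m == 0 else embed_dim, embed_dim)
                                 for m in range(hops)])
        self.head = nn.Linear(embed_dim, n_actions)   # plays the role of f~
        self.hops = hops

    def forward(self, states, adj):
        # states: [N, state_dim] local observations; adj: [N, N] float 0/1 adjacency (no self-loops).
        mu = F.softmax(self.init_policy(states), dim=-1)   # mu^0_i: initial independent policies
        for m in range(self.hops):
            s_agg = states + adj @ states                  # message passing on states (partial obs.)
            mu_agg = mu + adj @ mu                         # own intention + neighbors' intentions
            mu = F.relu(self.W1[m](s_agg) + self.W2[m](mu_agg))   # operator T~ (equation 5)
        return F.softmax(self.head(mu), dim=-1)            # q_{i,theta}(a_i|s) for every agent

# Usage sketch: probs = IntentionPropagation(26, 5)(torch.randn(30, 26), torch.ones(30, 30) - torch.eye(30))
```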
4.2 ALGORITHM
We are ready to give the overall algorithm by combining all the pieces. All detailed derivations of Vi and Qi for agent i and the corresponding loss functions are given in Appendix I due to the space constraint. Recall that we have a mean-field approximation qi of the joint policy, obtained by M iterations of intention propagation; we represent this procedure by an M-hop graph neural network with parameter θ as discussed above. Notice that this factorization is different from the case π(a|s) = ∏Ni=1 π(ai|s) in (Zhang et al., 2018; Foerster et al., 2018), since qi(ai|s) depends on the other agents' plans. Using the mean-field approximation qi, we can further decompose Q = ∑Ni=1 Qi and V = ∑Ni=1 Vi (see Appendix I). We use neural networks with parameters ηi and κi to approximate Vi and Qi, respectively. As in TD3 (Fujimoto et al., 2018), for each agent i we have a target value network Vη̄i and two Qκi functions trained simultaneously on the same data, taking the minimum of the two as the target in the value update to mitigate overestimation. In the following we write qi(ai|s) as qi,θ(ai|s) to make its dependence on the intention propagation network Λθ explicit, and we use D to denote the replay buffer. The whole algorithm is presented in Algorithm 1.
Loss Functions. The loss of the value function Vi:
$$J(\eta_i) = \mathbb{E}_{s^t\sim D}\Big[\tfrac{1}{2}\Big(V_{\eta_i}(s^t) - \mathbb{E}_{(a_i^t, a_{N_i}^t)\sim(q_i, q_{N_i})}\big[Q_{\kappa_i}(s^t, a_i^t, a_{N_i}^t) - \log q_{i,\theta}(a_i^t|s^t)\big]\Big)^2\Big].$$
The loss of Qi:
$$J(\kappa_i) = \mathbb{E}_{(s^t, a_i^t, a_{N_i}^t)\sim D}\Big[\tfrac{1}{2}\big(Q_{\kappa_i}(s^t, a_i^t, a_{N_i}^t) - \hat Q_i(s^t, a_i^t, a_{N_i}^t)\big)^2\Big],$$
where $\hat Q_i(s^t, a_i^t, a_{N_i}^t) = r_i + \gamma\,\mathbb{E}_{s^{t+1}\sim p(\cdot|s^t,\mathbf{a}^t)}[V_{\bar\eta_i}(s^{t+1})]$.
The loss of the policy:
$$J(\theta) = \mathbb{E}_{s^t\sim D,\ \mathbf{a}^t\sim\prod_{i=1}^{N} q_i}\Big[\sum_{i=1}^{N}\log q_{i,\theta}(a_i^t|s^t) - \sum_{i=1}^{N} Q_{\kappa_i}(s^t, a_i^t, a_{N_i}^t)\Big].$$
It is interesting to compare these losses with their counterparts in single-agent SAC in Section 3.
• qi,θ(ai|s) is the output of the intention propagation network Λθ(a|s), parameterized by a graph neural network; it therefore depends on the policies of the other agents.
• Qκi depends on the actions of agent i and its neighbors, which in practice can also be handled by the graph neural network.
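To make the shapes concrete, here is a hedged sketch of how the three losses could be assembled for a single agent with discrete actions. The networks, batch, and log-probabilities below are placeholders; in the actual algorithm the actions and log qi,θ come from the intention propagation network, and two Q networks are kept per agent as described above.

```python
import math
import torch
import torch.nn.functional as F

# Toy shapes for one agent i with |N_i| = 2 neighbors and one-hot discrete actions.
B, state_dim, n_actions, gamma = 32, 26, 5, 0.99
V_i      = torch.nn.Linear(state_dim, 1)                      # V_{eta_i}
V_i_targ = torch.nn.Linear(state_dim, 1)                      # V_{bar eta_i} (target network)
Q_i1 = torch.nn.Linear(state_dim + 3 * n_actions, 1)          # twin critics Q_{kappa_i}
Q_i2 = torch.nn.Linear(state_dim + 3 * n_actions, 1)

s  = torch.randn(B, state_dim)                                # batch sampled from the replay buffer D
a  = F.one_hot(torch.randint(n_actions, (B, 3)), n_actions).float().view(B, -1)   # (a_i, a_{N_i})
r  = torch.randn(B, 1)
s2 = torch.randn(B, state_dim)
log_qi = torch.full((B, 1), math.log(1.0 / n_actions))        # placeholder for log q_{i,theta}(a_i|s)

sa = torch.cat([s, a], dim=-1)
q_min   = torch.min(Q_i1(sa), Q_i2(sa))                       # min of the two critics (TD3-style)
loss_V  = 0.5 * ((V_i(s) - (q_min - log_qi).detach()) ** 2).mean()                # J(eta_i)
target  = (r + gamma * V_i_targ(s2)).detach()                                     # hat Q_i
loss_Q  = 0.5 * ((Q_i1(sa) - target) ** 2).mean() + 0.5 * ((Q_i2(sa) - target) ** 2).mean()  # J(kappa_i)
loss_pi = (log_qi - q_min).mean()                                                 # agent i's term of J(theta)
```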
Algorithm 1 Intention Propagation
Inputs: replay buffer D; Vi, Qi for each agent i; intention propagation network Λθ(at|s) with outputs {qi,θ}Ni=1; learning rates lη, lκ, lθ; moving-average parameter τ for the target networks.
for each iteration do
  for each environment step do
    Sample at ∼ ∏i qi,θ(ati|st) from the intention propagation network; observe st+1 ∼ p(st+1|st, at).
    D ← D ∪ {(sti, ati, rti, st+1i)}Ni=1.
  end for
  for each gradient step do
    Update ηi, κi, θ and the targets η̄i:
    ηi ← ηi − lη∇J(ηi),  κi ← κi − lκ∇J(κi),
    θ ← θ − lθ∇J(θ),  η̄i ← τηi + (1 − τ)η̄i.
  end for
end for
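A small sketch of the gradient step and the moving-average target update η̄i ← τηi + (1 − τ)η̄i in the inner loop, assuming PyTorch optimizers; the loss below is a placeholder standing in for J(ηi) computed on a replay-buffer batch.

```python
import torch

tau = 0.01
V = torch.nn.Linear(26, 1)            # online value network V_{eta_i}
V_targ = torch.nn.Linear(26, 1)       # target value network V_{bar eta_i}
V_targ.load_state_dict(V.state_dict())
opt = torch.optim.Adam(V.parameters(), lr=3e-4)

loss_V = ((V(torch.randn(32, 26)) - torch.randn(32, 1)) ** 2).mean()  # placeholder for J(eta_i)
opt.zero_grad(); loss_V.backward(); opt.step()

with torch.no_grad():                 # bar eta_i <- tau * eta_i + (1 - tau) * bar eta_i
    for p, p_t in zip(V.parameters(), V_targ.parameters()):
        p_t.mul_(1.0 - tau).add_(tau * p)
```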
Handling Partial Observation: So far we have assumed that agents can observe the global state, while in practice each agent observes only its own state si. Thus, besides the communication in intention propagation, we also perform message passing on the state embeddings with the graph neural network. The idea of this local state sharing is similar to (Jiang et al., 2020), although the overall structure of our work is quite different; see the discussion in the related work.
5 EXPERIMENT
In this section, we evaluate our method and eight state-of-the-art baselines on more than ten different scenarios from three popular MARL platforms: (1) CityFlow, a traffic signal control environment (Tang et al., 2019), an advanced version of SUMO (Lopez et al., 2018) widely used in the MARL community; (2) the multiple particle environment (MPE) (Mordatch & Abbeel, 2017); and (3) the grid-world platform MAgent (Zheng et al., 2018). Our intention propagation (IP) empirically outperforms all baselines on all scenarios, especially on large-scale problems.
5.1 SETTINGS
We give a brief introduction to the experimental settings and defer details, such as hyperparameter tuning for intention propagation and the baselines, to Appendix D. Note that all algorithms are tested in the partially observable setting, i.e., each agent can only observe its own state si.
In the traffic signal control problem (left panel of Figure 2), each traffic light at an intersection is an agent. The goal is to learn policies for the traffic lights that reduce the average waiting time and alleviate traffic jams. Graph for CityFlow: the graph is a 2-D grid induced by the road map (e.g., Figure 2), where roads are the edges connecting the agents. We define the cost −ri as the traveling time of vehicles around intersection i, so the total cost reflects the average traveling time. Clearly, ri depends closely on the actions of agent i's neighbors but has little dependence on traffic lights far away, so our assumption on the reward function holds. We evaluate different methods on both real-world and synthetic traffic data under different numbers of intersections.
MPE (Mordatch & Abbeel, 2017) and MAgent (Zheng et al., 2018) (Figure 2) are popular environments for MARL (Lowe et al., 2017; Jiang et al., 2020). Graph for particle environments: each agent is connected (i.e., shares an edge) with its k nearest neighbors. Since the graph is dynamic, we update the adjacency matrix every n steps, e.g., n = 5, which is only a small overhead compared with training the neural networks. The reward functions also have a local property, since they are explicitly or implicitly affected by the distances between agents. For instance, in heterogeneous navigation, small agents receive a large negative reward if they collide with big agents, so their reward depends on the actions of nearby agents. Similarly, in the jungle environment, an agent can attack nearby agents to obtain a high reward.
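The k-nearest-neighbor graph described above can be recomputed cheaply; the sketch below (positions and k are illustrative) shows one way to build the 0/1 adjacency matrix that is refreshed every n environment steps.

```python
import numpy as np

def knn_adjacency(positions, k):
    """Symmetric 0/1 adjacency connecting each agent to its k nearest neighbors."""
    n = positions.shape[0]
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                   # exclude self
    adj = np.zeros((n, n), dtype=np.float32)
    nearest = np.argsort(dist, axis=1)[:, :k]
    adj[np.repeat(np.arange(n), k), nearest.ravel()] = 1.0
    return np.maximum(adj, adj.T)                    # symmetrize

positions = np.random.rand(30, 2)                    # e.g., 30 agents in the plane
adj = knn_adjacency(positions, k=8)                  # refreshed every n = 5 env steps
```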
Baselines. We compare our method against the eight baselines mentioned in the introduction and related work: QMIX (Rashid et al., 2018); MADDPG (Lowe et al., 2017); permutation invariant critic (PIC) (Liu et al., 2019); graph convolutional reinforcement learning (DGN) (Jiang et al., 2020); independent Q-learning (IQL) (Tan, 1993); permutation invariant MADDPG with a data-shuffling mechanism (MADDPGS); COMA (Foerster et al., 2018); and MFQ (Yang et al., 2018). These baselines are reported as leading algorithms on CityFlow, MPE and MAgent. Among them, DGN and MFQ require communication with neighbors during both training and execution. Also notice that PIC assumes the actor can observe the global state; thus, in the partially observable setting, each agent in PIC also needs to communicate to obtain global state information during training and execution. Further details on the baselines are given in Appendix E.1.
Neural Network and Parameters. Recall that the intention propagation network is represented by a GNN. In our experiments, the graph neural network has 2 hops (2 GNN layers, i.e., M = 2) and 1 fully-connected layer on top; each layer contains 128 hidden units. Other hyperparameters are listed in Appendix H.
5.2 COMPARISON TO STATE-OF-THE-ART
In this section, we compare intention propagation (IP) with the baselines. The experiments are evaluated by average episode reward (Lowe et al., 2017); for CityFlow tasks, the average reward refers to the negative average travel time. All experiments are repeated for 5 runs with different random seeds, and we report the mean and standard deviation in the curves. We report results on six experiments and defer the rest to Appendix G due to space limits.
CityFlow. We first evaluate our algorithm on the traffic signal control problem, gradually increasing the number of intersections (agents) to increase task difficulty. Figure 3 presents the performance of different methods on both real-world and synthetic CityFlow data with different numbers of intersections. On the Manhattan City task, our intention propagation (IP) and the baselines PIC and DGN achieve better rewards than the other methods, while our method reaches a higher reward within fewer steps. On the larger task (N=100), both PIC and DGN have large variance and obtain poor performance. The experiment with N=1225 agents is an extremely challenging task, and our algorithm outperforms all baselines by a wide margin. The runner-up is MADDPG with the data-shuffling mechanism, whose final performance is around −4646 with large variance; in contrast, our method reaches around −569 (much higher than the baselines). Clearly, in both real-world and synthetic CityFlow scenarios, the proposed IP method obtains the best performance. We defer further experimental results to Appendix G.
MPE and MAgent. Figure 4 shows the performance of different methods on three other representative scenarios: the small task cooperative navigation (N=30) and two large-scale tasks, heterogeneous navigation (N=100) and prey and predator (N=100). We run all algorithms long enough (more than 1e6 steps). In all experiments, our algorithm performs best. For cooperative navigation, MADDPGS performs better than MADDPG; the improvement likely comes from the data-shuffling mechanism, which makes MADDPGS more robust to the manually specified agent order. QMIX performs much better than MADDPG, MADDPGS and IQL, but its performance is unstable even in the small setting (N=30). DGN is better and more stable than QMIX, but on large-scale settings its performance is much worse than PIC and our intention propagation (IP). Although PIC can solve large-scale tasks, our IP method is still much better. In prey and predator, there are two groups of agents, good agents and adversaries; to compare rewards fairly, we fix the good agents' policies and use each method to learn the adversaries' policies. This setting is commonly used (Lowe et al., 2017; Liu et al., 2019).
Stability. Stability is a key criterion for evaluating MARL. In all experiments, our method is quite stable with small variance. For instance, as shown in Figure 3(b), DGN reaches −1210 ± 419 on the CityFlow scenario with N=100 intersections, while our method reaches −465 ± 20 after 1.6 × 10^6 steps (much better and more stable). The reason is that, to make the joint decision, each agent in our algorithm can adjust its own policy properly by considering other agents' plans.
Ablation Study: We conduct a set of ablation studies on the effect of the joint policy, the graph, the hop size, the number of neighbors, and the assumption on the reward function. In particular, we find that the joint policy is essential for good performance. In CityFlow, the traffic graph (the 2-D grid induced by the road map) performs better than the fully connected graph. In MPE and MAgent, we define the adjacency matrix based on the k nearest neighbors and pick k = 8 in large-scale problems and k = 4 in small-scale problems. In all of our experiments, we use the 2-hop GNN. Due to space limitations, we only summarize the conclusions here and defer details to Appendix F.
A ORGANIZATION OF THE APPENDIX
In Appendix B, we give details on the intention propagation network and the parameterization of the GNN, and explain intention propagation from the MARL perspective. Finally, we extend intention propagation to other approximations which converge to other solutions of variational inference; such extensions can also be easily parameterized by neural networks.
In Appendix C, we give details of the algorithm deferred from the main paper. Appendix D summarizes the experimental configurations and MARL environments. Appendix E gives more details on the baselines and the hyperparameters of the GNN used in our model. Appendix F presents the ablation studies deferred from the main paper. Appendices G and H give further experimental results and the hyperparameters used in the algorithms. In Appendix I, we derive the algorithm and prove Proposition 1.
B INTENTION PROPAGATION NETWORK
B.1 DETAILS ON THE INTENTION PROPAGATION NETWORK
In this section, we give details on the intention propagation network deferred from the main paper. We first illustrate the message passing of intention propagation derived in Section 4.1, and then give details on how to construct the graph neural network.
Message passing and explanation from the MARL perspective: µ̃i is the embedding of agent i's policy, which represents the intention of agent i. At iteration 0, every agent makes an independent decision; the policy of agent i is mapped into its embedding µ̃0i, which we call the intention of agent i at iteration 0. Then agent i sends its plan to its neighbors. In Figure 5, µ̃mi is the d-dimensional (d = 3 in the figure) embedding of qi at the m-th iteration of intention propagation. We draw the update of µ̃m1 as an example: agent 1 receives the embeddings (intentions) µ̃m−12, µ̃m−15, µ̃m−16 from its neighbors and then updates its own embedding with the operator T̃. After M iterations, we obtain µ̃M1 and output the policy distribution q1 using equation 4. A similar procedure holds for the other agents. At each RL step t, we run this procedure (with M iterations) once to generate the joint policy. M is generally small, e.g., M = 2 or 3, so the procedure is efficient.
Parameterization of the GNN: We illustrate the parameterization of the graph neural network in Figure 6. If the action space is discrete, the output qi(ai|s) is a softmax function; when it is continuous, we output a Gaussian distribution (mean and variance) with the reparametrization trick (Kingma & Welling, 2019). Here we draw a 2-hop (2-layer) GNN parameterizing intention propagation with discrete actions. In Figure 6(b), each agent observes its own state si. After an MLP and a softmax layer (we do not sample here, but just use the output probabilities of the actions), we obtain an embedding µ̃0i, which is the initial distribution of the policy. In the following we use agent 1 as an example and, to ease the exposition, assume agent 1 has just one neighbor, agent 2. Agent 1 receives the embedding µ̃02 of its neighbor. After a GNN layer that combines the information, e.g., ReLU[W1(s1 + s2) + W2(µ̃01 + µ̃02)], we obtain the new embedding µ̃11 of agent 1. Notice that we also perform message passing on states, since in practice the global state is not available. The second layer proceeds similarly: agent 1 receives the embedding µ̃12 from its neighbor and obtains a new embedding µ̃21. This embedding then passes through an MLP + softmax layer that outputs the probability of each action, i.e., q1(a1|s).
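For continuous actions, the output head mentioned above can be a Gaussian trained with the reparametrization trick; a minimal sketch (dimensions are illustrative, not the exact head used in the experiments):

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Maps an agent's final embedding mu_i^M to a continuous action distribution."""
    def __init__(self, embed_dim=128, action_dim=2):
        super().__init__()
        self.mean = nn.Linear(embed_dim, action_dim)
        self.log_std = nn.Linear(embed_dim, action_dim)

    def forward(self, emb):
        mean = self.mean(emb)
        std = self.log_std(emb).clamp(-5, 2).exp()
        dist = torch.distributions.Normal(mean, std)
        action = dist.rsample()                      # reparametrization trick
        log_prob = dist.log_prob(action).sum(-1)     # log q_i(a_i|s)
        return action, log_prob

action, logp = GaussianHead()(torch.randn(4, 128))   # embeddings of 4 agents
```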
B.2 EXTENSION TO OTHER VARIATIONAL INFERENCE METHODS AND NEURAL NETWORKS
In this section, we show how to approximate the joint policy with Loopy Belief Propagation in variational inference (Yedidia et al., 2001). This leads to a new form of neural network beyond the vanilla GNN illustrated above.
The objective function in Loopy Belief Propagation is the Bethe free energy (Yedidia et al., 2001). Different from the mean-field approximation, it introduces additional variational variables qij, which bring more flexibility to the approximation. The objective function in our case is the following.
$$\begin{aligned}
\min_{q_i,\,\{q_{ij}\}_{(i,j)\in E}} \; & -\sum_i (|N_i| - 1)\int q_i(a_i|s)\,\log\frac{q_i(a_i|s)}{\psi_i(s,a_i)}\,da_i \\
& + \sum_{ij}\int q_{ij}(a_i,a_j|s)\,\log\frac{q_{ij}(a_i,a_j|s)}{\psi_{ij}(s,a_i,a_j)\,\psi_i(s,a_i)\,\psi_j(s,a_j)}\,da_i\,da_j \\
\text{s.t.}\; & \int q_{ij}(a_i,a_j|s)\,da_j = q_i(a_i|s), \qquad \int q_{ij}(a_i,a_j|s)\,da_i = q_j(a_j|s).
\end{aligned} \qquad (6)$$
Solving the above problem yields the fixed-point algorithm
$$m_{ij}(a_j|s) \leftarrow \int \prod_{k\in N_i\setminus j} m_{ki}(a_i|s)\,\psi_i(s,a_i)\,\psi_{ij}(s,a_i,a_j)\,da_i,$$
$$q_i(a_i|s) \leftarrow \psi_i(s,a_i)\prod_{j\in N_i} m_{ji}(a_i|s).$$
Similar to the mean-field approximation case, we have
$$m_{ij}(a_j|s) = f(a_j, s, \{m_{ki}\}_{k\in N_i\setminus j}), \qquad q_i(a_i|s) = g(a_i, s, \{m_{ki}\}_{k\in N_i}),$$
which says that the messages m_ij and the marginals q_i are functionals of the messages from neighbors. Denoting the embeddings $\tilde\nu_{ij} = \int \psi_j(s,a_j)\,m_{ij}(a_j|s)\,da_j$ and $\tilde\mu_i = \int \psi_i(s,a_i)\,q_i(a_i|s)\,da_i$, we have
$$\tilde\nu_{ij} = \tilde T\circ\big(s, \{\tilde\nu_{ki}\}_{k\in N_i\setminus j}\big), \qquad \tilde\mu_i = \tilde T\circ\big(s, \{\tilde\nu_{ki}\}_{k\in N_i}\big).$$
Again, we can parameterize the above equations by a (graph) neural network: $\tilde\nu_{ij} = \sigma\big(W_1 s + W_2\sum_{k\in N_i\setminus j}\tilde\nu_{ki}\big)$, $\tilde\mu_i = \sigma\big(W_3 s + W_4\sum_{k\in N_i}\tilde\nu_{ki}\big)$.
Following a similar approach, we can derive different intention propagation algorithms by changing the objective function, corresponding to, e.g., double-loop belief propagation (Yuille, 2002), tree-reweighted belief propagation (Wainwright et al., 2003), and many others.
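For intuition, the un-embedded fixed point above can be iterated directly when actions are discrete. The sketch below runs the message update and the marginal read-out on a toy three-node loopy graph with arbitrary positive numbers standing in for ψi and ψij.

```python
import numpy as np

np.random.seed(0)
n_actions = 3
edges = [(0, 1), (1, 2), (2, 0)]                  # a small loopy graph
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
psi_i = np.random.rand(3, n_actions) + 0.1        # positive unary potentials psi_i(s, a_i)
psi_pair = {e: np.random.rand(n_actions, n_actions) + 0.1 for e in edges}

def pair_pot(i, j):
    return psi_pair[(i, j)] if (i, j) in psi_pair else psi_pair[(j, i)].T

# messages m_{ij}(a_j), initialized uniformly
m = {(i, j): np.full(n_actions, 1.0 / n_actions) for i in neighbors for j in neighbors[i]}

for _ in range(30):
    new_m = {}
    for (i, j) in m:
        prod = psi_i[i] * np.prod([m[(k, i)] for k in neighbors[i] if k != j], axis=0)
        msg = prod @ pair_pot(i, j)               # integrate (sum) over a_i
        new_m[(i, j)] = msg / msg.sum()
    m = new_m

q = np.array([psi_i[i] * np.prod([m[(j, i)] for j in neighbors[i]], axis=0) for i in range(3)])
q /= q.sum(axis=1, keepdims=True)                 # approximate marginals q_i(a_i|s)
print(q)
```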
C ALGORITHM
We present some remarks on the Intention Propagation algorithm (Algorithm 1) deferred from the main paper.
Remark: To calculate the loss function J(ηi), each agent needs to sample the global state and (ai, aNi). Thus we first sample a global state from the replay buffer and then sample the joint action a once using the intention propagation network.
D FURTHER DETAILS ABOUT ENVIRONMENTS AND EXPERIMENTAL SETTING
Table 1 summarizes the setting of the tasks in our experiment.
D.1 CITYFLOW
CityFlow (Tang et al., 2019) is an open-source MARL environment for large-scale city traffic signal control 1. After the road map and traffic flow data are fed into the simulator, each vehicle moves from its origin to its destination. The traffic data contain bidirectional and dynamic flows with turning traffic. We evaluate different methods on both real-world and synthetic traffic data. For real-world data, we select traffic flow data from the Gudang sub-district, Hangzhou, China and from Manhattan, USA 2. For synthetic data, we simulate several road networks: a 7 × 7 grid network (N = 49) and large-scale grid networks with N = 10 × 10 = 100, 15 × 15 = 225, and 35 × 35 = 1225. Each traffic light at an intersection is an agent. In the real-world settings (Hangzhou, Manhattan), the graph is a 2-D grid induced by the road map; in particular, the roads are edges which connect the nodes (agents) of the graph. For the synthetic data, the map is an n × n 2-D grid (similar to Figure 7), where edges represent roads and nodes are traffic lights. We present the experimental results deferred from the main paper in Figure 10.
D.2 MPE
In MPE (Mordatch & Abbeel, 2017) 3, the observation of each agent contains the relative locations and velocities of neighboring agents and landmarks. The number of visible neighbors in an agent's observation is at most 10.
1 https://github.com/cityflow-project/CityFlow 2 We download the maps from https://github.com/traffic-signal-control/sample-code. 3 To make the environment more computation-efficient, Liu et al. (2019) provided an improved version of MPE. The code is released at https://github.com/IouJenLiu/PIC.
We consider four scenarios in MPE. (1) Cooperative navigation: N agents work together and move to cover L landmarks; the closer the agents get to the landmarks, the larger the reward. Each agent observes its own location and velocity, and the relative locations of the nearest 5 landmarks and N agents; the observation dimension is 26. (2) Prey and predator: N slower cooperating agents must chase faster adversaries around a randomly generated environment with L large landmarks. The landmarks impede the way of all agents and adversaries, which makes the scenario much more challenging. Each agent observes its own location and velocity, and the relative locations of the nearest 5 landmarks and 5 preys; the observation dimension is 34. (3) Cooperative push: N cooperating agents are rewarded for pushing a large ball to a landmark. Each agent can observe the 10 nearest agents and 5 nearest landmarks; the observation dimension is 28. (4) Heterogeneous navigation: this scenario is similar to cooperative navigation except that the N agents are divided into N/2 big, slow agents and N/2 small, fast agents; if small agents collide with big agents, they receive a large negative reward. Each agent can observe the 10 nearest agents and 5 nearest landmarks; the observation dimension is 26.
Further details about this environment can be found at https://github.com/IouJenLiu/PIC.
D.3 MAGENT
MAgent (Zheng et al., 2018) is a grid-world platform and serves as another popular environment for evaluating MARL algorithms. Jiang et al. (2020) tested their method on two scenarios: jungle and battle. In jungle, there are N agents and F foods; the agents receive a positive reward for eating food, but a higher reward for attacking other agents. This is an interesting scenario, often referred to as a moral dilemma. In battle, N agents learn to fight against several enemies, which is very similar to the prey and predator scenario in MPE. In our experiments, we evaluate our method on jungle.
In our experiments, the size of the grid-world environment is 30 × 30. Each agent occupies one grid cell and can observe the 11 × 11 grid cells centered at the agent as well as its own coordinates. The actions include moving and attacking along the coordinates. Further details about this environment can be found at https://github.com/geek-ai/MAgent and https://github.com/PKU-AI-Edge/DGN.
E FURTHER DETAILS ON SETTINGS
E.1 DESCRIPTION OF OUR BASELINES
We compare our method with multi-agent deep deterministic policy gradient (MADDPG) (Lowe et al., 2017), a strong actor-critic algorithm based on the framework of centralized training with decentralized execution; QMIX (Rashid et al., 2018), a Q-learning based monotonic value function factorisation algorithm; permutation invariant critic (PIC) (Liu et al., 2019), a leading algorithm on MPE yielding identical output irrespective of the agent permutation; graph convolutional reinforcement learning (DGN) (Jiang et al., 2020), a deep Q-learning algorithm based on a deep convolutional graph neural network with multi-head attention, which is a leading algorithm on MAgent; and independent Q-learning (IQL) (Tan, 1993), which decomposes a multi-agent problem into a collection of simultaneous single-agent problems sharing the same environment and usually serves as a surprisingly strong benchmark in mixed and competitive games (Tampuu et al., 2017). In homogeneous settings, the input to the centralized critic in MADDPG is the concatenation of all agents' observations and actions along a specified agent order, which does not satisfy permutation invariance. We follow a setting similar to (Liu et al., 2019) and shuffle the agents' observations and actions in the training batch 4. COMA (Foerster et al., 2018) directly assumes the policy is factorized and computes a counterfactual baseline to address the credit assignment problem in MARL; in our experiments, since each reward function is observable, each agent can directly approximate its Q function without the counterfactual baseline. MFQ (Yang et al., 2018) derives its algorithm from the perspective of the mean-field game. Notice that the aim of the mean-field game is to find a Nash equilibrium rather than to maximize the total reward of the group; furthermore, it requires the assumption that agents are identical.
4 This operation does not change the state of the actions.
E.2 NEURAL NETWORKS ARCHITECTURE
To learn features from the structural graph built from the spatial distances between agents, we design our graph neural network based on structure2vec (Dai et al., 2016), an effective and scalable approach for structured-data representation that embeds latent variable models into feature spaces. Structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures such as mean field and belief propagation. After M graph neural network layers, each node receives information from its M-hop neighbors by message passing. Recently, attention mechanisms have empirically led to more powerful representations on graph data (Veličković et al., 2017; Jiang et al., 2020), and we incorporate this idea into our graph neural network. In some settings, such as the heterogeneous navigation scenario from MPE, the observations of different groups of agents are heterogeneous. To handle this, we use different nonlinear functions to extract features from the heterogeneous observations and map them into a latent layer, and then use the same graph neural network to learn the policy for all types of agents. In our experiments, our graph neural network has M = 2 layers and 1 fully-connected layer on top; each layer contains 128 hidden units.
F ABLATION STUDIES
F.1 INDEPENDENT POLICY VS INTENTION PROPAGATION.
We first give a toy example where the independent policy (without communication) fails. To implement this baseline, we replace the intention propagation network by an independent policy network and keep the other parts the same. Consider the 3 × 3 2-D grid in Figure 7, where the global state (observed by all agents) is a constant scalar (and thus carries no information). Each agent chooses an action ai = 0 or 1. The aim is to maximize the reward $-(a_1-a_2)^2 - (a_1-a_4)^2 - (a_2-a_3)^2 - \cdots - (a_8-a_9)^2$, i.e., the sum of the reward function over the edges. Obviously the optimal value is 0, attained by the optimal policies a1 = a2 = ... = a9 = 0 or a1 = a2 = ... = a9 = 1. However, the independent policy fails, since each agent does not know how its allies pick their actions, so the learned policy is random. We show the result of this toy example in Figure 7, where intention propagation learns the optimal policy.
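The optimum of this toy problem can be verified by brute force over the 2^9 joint actions; a short sketch:

```python
import itertools

# Edges of the 3x3 grid (agents numbered 0..8 row by row).
edges = [(r * 3 + c, r * 3 + c + 1) for r in range(3) for c in range(2)] + \
        [(r * 3 + c, (r + 1) * 3 + c) for r in range(2) for c in range(3)]

def reward(a):
    return -sum((a[i] - a[j]) ** 2 for i, j in edges)

best = max(itertools.product([0, 1], repeat=9), key=reward)
print(best, reward(best))   # (0,...,0) with reward 0; (1,...,1) is equally optimal
```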
F.2 GRAPH TYPES, NUMBER OF NEIGHBORS, AND HOP SIZE
We conduct a set of ablation studies related to graph types, the number of neighbors, and hop size. Figure 8(a) and Figure 8(b) show the performance of our method on the traffic graph and on a fully-connected graph on the CityFlow scenarios (N=49 and N=100). In this experiment, each agent only obtains information from its neighbors through message passing (state embedding and policy embedding). The result makes sense, since the traffic graph represents the structure of the map: although an agent in the fully-connected graph obtains global information, this may introduce irrelevant information from agents far away.
Figure 8(c) and Figure 8(d) show the performance under different numbers of neighbors and different hop sizes on cooperative navigation (N=30), respectively. The algorithm with neighbors=8 has the best performance. Again, the fully connected graph (neighbors=30) may introduce irrelevant information from agents far away, so its performance is worse than the algorithm with the graph constructed by K-nearest neighbors; in addition, the fully connected graph introduces more computation in training. In Figure 8(d), we increase the hop size from 1 to 3. The performance of IP with hop=2 is much better than with hop=1, while IP with hop=3 is only slightly better than with hop=2, which suggests that a graph neural network with hop size 2 has already aggregated enough information.
In Figure 8(e), we test the importance of the k-nearest-neighbor structure. IP(neighbors=3)+random means that we pick 3 agents uniformly at random as the neighbors. Clearly, IP with K-nearest neighbors outperforms IP with the random graph by a large margin. In Figure 8(f), we update the adjacency matrix every 1, 5, or 10 steps: IP(neighbors=8) denotes updating the adjacency matrix every step, while IP(neighbors=8)+reset(5) and IP(neighbors=8)+reset(10) denote updating it every 5 and 10 steps, respectively. IP(neighbors=8) gives the best result, and IP(neighbors=8)+reset(5) is better than IP(neighbors=8)+reset(10). This makes sense, since the adjacency matrix is more accurate when the update interval is smaller.
F.3 ASSUMPTION VIOLATION
The experimental evaluations above rely on a mild assumption: the actions of agents that are far away have little effect on the learner because of their physical distance. It is interesting to examine the performance when this assumption is violated. To this end, we modify the reward in the cooperative navigation experiment: the reward is defined by r = r1 + r2, where r1 encourages the agents to cover (get close to) landmarks and r2 is a logarithmic function of the distances between agents (farther agents have larger impact). To create a violation, we let r2 dominate the reward. We conduct the experiments with hop = 1, 2, 3. Figure 9 shows that the rewards obtained by our method are 4115 ± 21, 4564 ± 22, and 4586 ± 25, respectively. This is expected: in this scenario, a larger hop size is needed to collect information from far-away agents.
G FURTHER EXPERIMENTAL RESULTS
For most experiments, we run the algorithms for 1 million to 1.5 million steps and then stop (even though in some cases our algorithm has not yet converged to its asymptotic result), since each MARL experiment may take several days. We present the results on CityFlow in Figure 10. Figure 11 provides the experimental results on the cooperative navigation instances with N = 15, N = 30 and N = 200 agents. Note that the instance with N = 200 is a large-scale and challenging multi-agent reinforcement learning setting (Chen et al., 2018; Liu et al., 2019), which typically needs several days to run millions of steps. IQL, MADDPG and MADDPGS perform well in the small setting (N=15) but fail on the large-scale instances (N = 30 and N = 200). In the instance with N = 30, MADDPGS performs better than MADDPG; the potential reason is that, with the help of shuffling, MADDPGS is more robust to the manually specified agent order. Although QMIX performs well for N = 15 and N = 30, it has large variance in both settings. DGN, using a graph convolutional network, is permutation invariant and obtains much better performance than QMIX on these two settings, but it also fails on the large-scale setting with N = 200 agents. Empirically, after 1.5 × 10^6 steps, PIC obtains a reward of −425085 ± 31259 on this large-scale setting, while the proposed intention propagation (IP) reaches −329229 ± 14730 and is much better than PIC. Furthermore, Figure 11 shows the results of different methods on (d) jungle (N=20, F=12) and (e) prey and predator (N=100); our method beats all baselines on these two tasks. On cooperative push (N=100), shown in Figure 11(f), DGN, QMIX, IQL, MADDPG and MADDPGS all fail to converge to good rewards after 1.5 × 10^6 environment steps, whereas PIC and the proposed IP obtain much better rewards. Limited by computational resources, we only show the long-term performance of the best two methods; Figure 11(f) shows that IP is slightly better than PIC in this setting.
G.1 POLICY INTERPRETATION
Explicitly analyzing the policy learned by a deep multi-agent reinforcement learning algorithm is challenging, especially for large-scale problems. We follow ideas similar to (Zheng et al., 2019) and analyze the learned policy on CityFlow as follows. We select the same period of environment steps within [210000, 1600000] and group these steps into 69 intervals (each interval contains about 20000 steps). We compute the ratio of vehicle volume on each movement and the sampled action volume from the learned policy (each movement can be assigned to one action according to the internal function in CityFlow). We define the ratio of vehicle volume over all movements as the vehicle volume distribution and the ratio of sampled action volume from the learned policy over all movements as the sampled action distribution. A good MARL algorithm should make these two distributions very similar over a period of time. Figure 12 reports their KL divergence by interval: the proposed intention propagation (IP) obtains the lowest KL divergence, much better than the state-of-the-art baselines. Because the KL divergence is not a symmetric metric, we also calculate the Euclidean distances: the distance for our method is 0.0271, while DGN's is 0.0938 and PIC's is 0.0933.
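For reference, the per-interval comparison can be computed as in the sketch below, where the two arrays are placeholder empirical distributions over movements (not the paper's data).

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

p_vehicle = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # vehicle volume distribution over movements
p_action  = np.array([0.28, 0.27, 0.19, 0.16, 0.10])   # sampled action distribution
print(kl(p_vehicle, p_action), np.linalg.norm(p_vehicle - p_action))  # KL and Euclidean distance
```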
H HYPERPARAMETERS
Environment parameters. For the maximum episode length, we follow settings similar to those of the baselines (Lowe et al., 2017): we set it to 25 for MPE and 100 for CityFlow. For MAgent, we find that a maximum episode length of 25 works better than 100. All methods share the same setting.
We list the ranges of hyperparameters tuned for all baselines and intention propagation: γ ∈ {0.95, 0.98, 0.99, 0.999}; learning rate ∈ {1, 5, 10, 100} × 1e-4; activation function ∈ {relu, gelu, tanh}; batch size ∈ {128, 256, 512, 1024}; gradient steps ∈ {1, 2, 4, 8}; number of hidden units in the MLP ∈ {32, 64, 128, 256, 512}; number of MLP layers ∈ {1, 2, 3} in all experiments. In QMIX, the GRU hidden units are {64, 128}, with a fully connected layer before and after the GRU; the hypernetwork and mixing network are both single-layer networks (64 hidden units with ReLU activation, following the QMIX paper). The parameters of intention propagation are reported in Table 2.
I DERIVATION
I.1 PROOF OF PROPOSITION 1
We prove the result by induction using the backward view. To see this, plug $r(s^t,\mathbf{a}^t) = \sum_{i=1}^{N} r_i(s^t, a_i^t, a_{N_i}^t)$ into the distribution of the optimal policy defined in Section 3:
$$p(\tau) = \Big[p(s^0)\prod_{t=0}^{T}p(s^{t+1}|s^t,\mathbf{a}^t)\Big]\exp\Big(\sum_{t=0}^{T}\sum_{i=1}^{N} r_i(s^t, a_i^t, a_{N_i}^t)\Big).$$
Recall that the goal is to find the best approximation of π(a^t|s^t) such that the trajectory distribution p̂(τ) induced by this policy matches the optimal trajectory probability p(τ). Thus we minimize the KL divergence between them, $\min_\pi D_{\mathrm{KL}}(\hat p(\tau)\|p(\tau))$, where $\hat p(\tau) = p(s^0)\prod_{t=0}^{T}p(s^{t+1}|s^t,\mathbf{a}^t)\,\pi(\mathbf{a}^t|s^t)$. We can optimize w.r.t. π(a^t|s^t) as in (Levine, 2018) and obtain a backward recursion for the policy π∗(a^t|s^t) (see equation 13 in I.2):
$$\pi^*(\mathbf{a}^t|s^t) = \frac{1}{Z}\exp\Big(\mathbb{E}_{p(s^{t+1:T},\mathbf{a}^{t+1:T}|s^t,\mathbf{a}^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \sum_{t'=t+1}^{T}\log\pi(\mathbf{a}^{t'}|s^{t'})\Big]\Big). \qquad (7)$$
Using equation 7, when t = T the optimal policy is
$$\pi^*(\mathbf{a}^T|s^T) = \frac{1}{Z}\exp\Big(\sum_{i=1}^{N} r_i(s^T, a_i^T, a_{N_i}^T)\Big).$$
Obviously, it satisfies the form $\pi^*(\mathbf{a}^T|s^T) = \frac{1}{Z}\exp\big(\sum_{i=1}^{N}\psi_i(s^T, a_i^T, a_{N_i}^T)\big)$.
Now suppose that from step t+1 to T we have
$$\pi^*(\mathbf{a}^{t'}|s^{t'}) = \frac{1}{Z}\exp\Big(\sum_{i=1}^{N}\psi_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'})\Big) \qquad (8)$$
for t' = t+1, ..., T.
Recall that we have
$$\pi^*(\mathbf{a}^t|s^t) = \frac{1}{Z}\exp\Big(\mathbb{E}_{p(s^{t+1:T},\mathbf{a}^{t+1:T}|s^t,\mathbf{a}^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \sum_{t'=t+1}^{T}\log\pi^*(\mathbf{a}^{t'}|s^{t'})\Big]\Big). \qquad (9)$$
Now plugging equation 8 into equation 9, we have
$$\pi^*(\mathbf{a}^t|s^t) = \frac{1}{Z}\exp\Big(\mathbb{E}_{p(s^{t+1:T},\mathbf{a}^{t+1:T}|s^t,\mathbf{a}^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \sum_{t'=t+1}^{T}\sum_{i=1}^{N}\psi_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) + C\Big]\Big), \qquad (10)$$
where C is some constant related to the normalization term. Thus, we define a new term
$$\tilde\psi_i(s^t, a_i^t, a_{N_i}^t) = \mathbb{E}_{p(s^{t+1:T},\mathbf{a}^{t+1:T}|s^t,\mathbf{a}^t)}\Big[\sum_{t'=t}^{T} r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \sum_{t'=t+1}^{T}\psi_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'})\Big]. \qquad (11)$$
Then π∗(a^t|s^t) satisfies the required form by absorbing the constant C into the normalization term, which completes the proof.
I.2 DERIVATION OF THE ALGORITHM
We start the derivation with the minimization of the KL divergence $\mathrm{KL}(\hat p(\tau)\|p(\tau))$, where $p(\tau) = \big[p(s^0)\prod_{t=0}^{T}p(s^{t+1}|s^t,\mathbf{a}^t)\big]\exp\big(\sum_{t=0}^{T}\sum_{i=1}^{N} r_i(s^t, a_i^t, a_{N_i}^t)\big)$ and $\hat p(\tau) = p(s^0)\prod_{t=0}^{T}p(s^{t+1}|s^t,\mathbf{a}^t)\,\pi(\mathbf{a}^t|s^t)$.
$$\mathrm{KL}(\hat p(\tau)\|p(\tau)) = -\,\mathbb{E}_{\tau\sim\hat p(\tau)}\sum_{t=0}^{T}\Big(\sum_{i=1}^{N} r_i(s^t, a_i^t, a_{N_i}^t) - \log\pi(\mathbf{a}^t|s^t)\Big)$$
$$= -\sum_{\tau}\Big[p(s^0)\prod_{t=0}^{T}p(s^{t+1}|s^t,\mathbf{a}^t)\,\pi(\mathbf{a}^t|s^t)\Big]\sum_{t=0}^{T}\Big(\sum_{i=1}^{N} r_i(s^t, a_i^t, a_{N_i}^t) - \log\pi(\mathbf{a}^t|s^t)\Big). \qquad (12)$$
Now we optimize the KL divergence w.r.t. π(·|s^t). Considering the constraint $\sum_j \pi(j|s^t) = 1$, we introduce a Lagrange multiplier $\lambda\big(\sum_{j=1}^{|A|}\pi(j|s^t) - 1\big)$ (rigorously speaking, we also need the constraint that each element of π is nonnegative, but we will see that the optimal value satisfies this constraint automatically). Taking the gradient of $\mathrm{KL}(\hat p(\tau)\|p(\tau)) + \lambda\big(\sum_{j=1}^{|A|}\pi(j|s^t) - 1\big)$ w.r.t. π(·|s^t) and setting it to zero, we obtain
$$\log\pi^*(\mathbf{a}^t|s^t) = \mathbb{E}_{p(s^{t+1:T},\mathbf{a}^{t+1:T}|s^t,\mathbf{a}^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \sum_{t'=t+1}^{T}\log\pi(\mathbf{a}^{t'}|s^{t'})\Big] - 1 + \lambda.$$
Therefore
$$\pi^*(\mathbf{a}^t|s^t) \propto \exp\Big(\mathbb{E}_{p(s^{t+1:T},\mathbf{a}^{t+1:T}|s^t,\mathbf{a}^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \sum_{t'=t+1}^{T}\log\pi(\mathbf{a}^{t'}|s^{t'})\Big]\Big).$$
Since $\sum_j \pi(j|s^t) = 1$, we have
$$\pi^*(\mathbf{a}^t|s^t) = \frac{1}{Z}\exp\Big(\mathbb{E}_{p(s^{t+1:T},\mathbf{a}^{t+1:T}|s^t,\mathbf{a}^t)}\Big[\sum_{t'=t}^{T}\sum_{i=1}^{N} r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \sum_{t'=t+1}^{T}\log\pi(\mathbf{a}^{t'}|s^{t'})\Big]\Big). \qquad (13)$$
For convenience, we define the soft V function and Q function as that in (Levine, 2018), and will show how to decompose them into Vi and Qi later.
$$V(s^{t+1}) := \mathbb{E}\Big[\sum_{t'=t+1}^{T}\sum_{i=1}^{N} r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \log\pi(\mathbf{a}^{t'}|s^{t'})\,\Big|\, s^{t+1}\Big],$$
$$Q(s^t,\mathbf{a}^t) := \sum_{i=1}^{N} r_i(s^t, a_i^t, a_{N_i}^t) + \mathbb{E}_{p(s^{t+1}|s^t,\mathbf{a}^t)}[V(s^{t+1})]. \qquad (14)$$
Thus $V(s^t) = \mathbb{E}_{\pi}[Q(s^t,\mathbf{a}^t) - \log\pi(\mathbf{a}^t|s^t)]$. The optimal policy is $\pi^*(\mathbf{a}^t|s^t) = \frac{\exp Q(s^t,\mathbf{a}^t)}{\int \exp Q(s^t,\mathbf{a}^t)\,d\mathbf{a}^t}$, obtained by plugging the definition of Q into equation 13.
Recall that in Section 4.1 we approximated the optimal joint policy by the mean-field approximation $\prod_{i=1}^{N} q_i(a_i|s)$. We now plug this into the definitions in equation 14 and incorporate the discount factor; notice that it is easy to incorporate the discount factor by defining an absorbing state to which each transition moves with probability (1 − γ). Thus we have
$$V(s^{t+1}) := \mathbb{E}\Big[\sum_{t'=t+1}^{T}\Big(\sum_{i=1}^{N} r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \sum_{i=1}^{N}\log q_i(a_i^{t'}|s^{t'})\Big)\,\Big|\, s^{t+1}\Big],$$
$$Q(s^t,\mathbf{a}^t) := \sum_{i=1}^{N} r_i(s^t, a_i^t, a_{N_i}^t) + \gamma\,\mathbb{E}_{p(s^{t+1}|s^t,\mathbf{a}^t)}[V(s^{t+1})]. \qquad (15)$$
Thus we can further decompose V and Q into Vi and Qi, defined as
$$V_i(s^{t+1}) = \mathbb{E}\Big[\sum_{t'=t+1}^{T}\big(r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) - \log q_i(a_i^{t'}|s^{t'})\big)\,\Big|\, s^{t+1}\Big],$$
$$Q_i(s^t, a_i^t, a_{N_i}^t) = r_i(s^t, a_i^t, a_{N_i}^t) + \gamma\,\mathbb{E}_{p(s^{t+1}|s^t,\mathbf{a}^t)}[V_i(s^{t+1})].$$
Obviously, $V = \sum_{i=1}^{N} V_i$ and $Q = \sum_{i=1}^{N} Q_i$.
For Vi, according to our definition, we obtain
$$V_i(s^t) = \mathbb{E}_{\mathbf{a}^t\sim\prod_{i=1}^{N} q_i}\big[r_i(s^t, a_i^t, a_{N_i}^t) - \log q_i(a_i^t|s^t) + \mathbb{E}_{p(s^{t+1}|s^t,\mathbf{a}^t)}V_i(s^{t+1})\big]. \qquad (16)$$
Relating it to Qi, we have
$$V_i(s^t) = \mathbb{E}_{\mathbf{a}^t\sim\prod_{i=1}^{N} q_i}\big[Q_i(s^t, a_i^t, a_{N_i}^t) - \log q_i(a_i^t|s^t)\big] = \mathbb{E}_{(a_i, a_{N_i})\sim(q_i, q_{N_i})}Q_i(s^t, a_i^t, a_{N_i}^t) - \mathbb{E}_{a_i\sim q_i}\log q_i(a_i^t|s^t).$$
This suggests constructing the loss functions on Vi and Qi in the following way. We use parametric families (e.g., neural networks) with parameters ηi and κi to approximate Vi and Qi, respectively.
$$J(\eta_i) = \mathbb{E}_{s^t\sim D}\Big[\tfrac{1}{2}\Big(V_{\eta_i}(s^t) - \mathbb{E}_{(a_i, a_{N_i})\sim(q_i, q_{N_i})}\big[Q_{\kappa_i}(s^t, a_i^t, a_{N_i}^t) - \log q_i(a_i^t|s^t)\big]\Big)^2\Big],$$
$$J(\kappa_i) = \mathbb{E}_{(s^t, a_i^t, a_{N_i}^t)\sim D}\Big[\tfrac{1}{2}\big(Q_{\kappa_i}(s^t, a_i^t, a_{N_i}^t) - \hat Q_i(s^t, a_i^t, a_{N_i}^t)\big)^2\Big], \qquad (17)$$
where $\hat Q_i(s^t, a_i^t, a_{N_i}^t) = r_i + \gamma\,\mathbb{E}_{s^{t+1}\sim p(s^{t+1}|s^t,\mathbf{a}^t)}[V_{\eta_i}(s^{t+1})]$.
Now we are ready to derive the update rule of the policy, i.e., of the intention propagation network. Recall that the intention propagation network is a mean-field approximation of the joint policy:
$$\min_{p_1,p_2,\ldots,p_N} \mathrm{KL}\Big(\prod_{i=1}^{N} p_i(a_i|s)\,\Big\|\,\pi^*(a|s)\Big).$$
This is an optimization over the functions pi rather than over particular parameters. We have shown that after M iterations of intention propagation we output a (nearly) optimal solution qi.
In the following, we demonstrate how to update the parameter θ of the propagation network Λθ(a^t|s^t) when we approximate it with a neural network. Again we minimize the KL divergence
$$\min_\theta\ \mathbb{E}_{s^t}\,\mathrm{KL}\Big(\prod_{i=1}^{N} q_{i,\theta}(a_i^t|s^t)\,\Big\|\,\pi^*(\mathbf{a}^t|s^t)\Big).$$
Plugging $\pi^*(\mathbf{a}^t|s^t) = \frac{\exp Q(s^t,\mathbf{a}^t)}{\int \exp Q(s^t,\mathbf{a}^t)\,d\mathbf{a}^t}$ into the KL divergence, it is easy to see that, by the definition of the KL divergence, this is equivalent to the following optimization problem:
$$\max_\theta\ \mathbb{E}_{s^t}\Big[\mathbb{E}_{\mathbf{a}^t\sim\prod_i q_{i,\theta}(a_i^t|s^t)}\Big[\sum_{i=1}^{N} Q_{\kappa_i}(s^t, a_i^t, a_{N_i}^t) - \sum_{i=1}^{N}\log q_{i,\theta}(a_i^t|s^t)\Big]\Big].$$
Thus we sample states from the replay buffer and obtain the policy loss
$$J(\theta) = \mathbb{E}_{s^t\sim D,\ \mathbf{a}^t\sim\prod_{i=1}^{N} q_{i,\theta}(a_i^t|s^t)}\Big[\sum_{i=1}^{N}\log q_{i,\theta}(a_i^t|s^t) - \sum_{i=1}^{N} Q_{\kappa_i}(s^t, a_i^t, a_{N_i}^t)\Big].$$
| 1. What is the focus and contribution of the paper regarding cooperative multiagent MARL?
2. What are the strengths and weaknesses of the proposed approach, particularly in its architecture, loss functions, and theoretical analysis?
3. Do you have any concerns or questions regarding the algorithmic novelty and its connection to intention semantics?
4. How does the reviewer assess the clarity, quality, and impact of the paper's content? | Review | Review
Paper Summary: The paper considers the cooperative multi-agent MARL setting where each agent’s reward depends on the state and the actions of itself and its neighbors. The paper makes a theoretical claim that, for such a reward structure, the optimal maximum entropy joint policy has a form that can be factored into potential functions, one for each agent. In particular, if the sum of all agents’ rewards is a function of pairwise actions, those potential functions are one for each agent and one for each pair of actions (i.e. the equation after Proposition 1). Then the paper proposes to use mean-field approximation to approximate the optimal joint policy (Equation (3)), which leads to a concrete algorithm that relies on passing the embedding of each agent’s local policy around to neighbors. The paper then empirically shows that the algorithm is particularly effective for domains with a large number of agents.
Major Comments/Questions
Although the motivation has an interpretation of intention propagation, the resulting architecture (Figure 1b) and loss functions (Section 4.2) seem to be a standard message passing architecture with SAC loss functions that loses the intention semantics. I do not see too much algorithmic novelty here.
For the baselines used in the experiments, it seems that only IP and DGN allow communication/message passing during execution, which makes it unsurprising that the two methods outperform other baselines.
Minor Comments/Questions
The beginning of Section 3 says the paper considers maximum entropy as the optimization objective, while eta(pi) at the beginning of Section 4 says the objective is long-term reward (no entropy). This seems to be an inconsistency here.
For the assumptions on rewards, Proposition 1 assumes that each agent’s reward depends on its neighbors, while the derivation of Equation (3) (and thus the following algorithm) further assumes that the reward depends on pairwise actions. It is a little bit unclear what assumptions are required for all the theoretical and experimental claims of this paper.
Is there reason to believe that the multi-round message passing will converge to the fixed-point of Equation (2)?
What is the "overgeneralization issue"?
Overall (weak accept): The paper has a clear introduction and motivation of the proposed algorithm. The insight that the optimal maximum entropy joint policy takes the form of a Markov Random Field might be of some value and interest. However, I don’t think the resulting method has much algorithmic novelty.
Thanks for the response and I've increased my score. I am satisfied with the response but still not convinced about the algorithmic novelty on the intention semantics built into the method, even after reading B.1. In particular, it seems that the loss functions do not drive mu's represented by NNs to the fixed point solution of Eq (3); psi shows up in Eq (3) but does not play a role in the following development of the method. |
ICLR | Title
Intention Propagation for Multi-agent Reinforcement Learning
Abstract
A hallmark of an AI agent is to mimic human beings to understand and interact with others. In this paper, we propose a collaborative multi-agent reinforcement learning algorithm to learn a joint policy through the interactions over agents. To make a joint decision over the group, each agent makes an initial decision and tells its policy to its neighbors. Then each agent modifies its own policy properly based on received messages and spreads out its plan. As this intention propagation procedure goes on, we prove that it converges to a mean-field approximation of the joint policy with the framework of neural embedded probabilistic inference. We evaluate our algorithm on several large scale challenging tasks and demonstrate that it outperforms previous state-of-the-arts.
1 INTRODUCTION
Collaborative multi-agent reinforcement learning is an important sub-field of the multi-agent reinforcement learning (MARL), where the agents learn to coordinate to achieve joint success. It has wide applications in traffic control (Kuyer et al., 2008), autonomous driving (Shalev-Shwartz et al., 2016) and smart grid (Yang et al., 2018). To learn a coordination, the interactions between agents are indispensable. For instance, humans can reason about other’s behaviors or know other peoples’ intentions through communication and then determine an effective coordination plan. However, how to design a mechanism of such interaction in a principled way and at the same time solve the large scale real-world applications is still a challenging problem.
Recently, there is a surge of interest in solving the collaborative MARL problem (Foerster et al., 2018; Qu et al., 2019; Lowe et al., 2017). Among them, joint policy approaches have demonstrated their superiority (Rashid et al., 2018; Sunehag et al., 2018; Oliehoek et al., 2016). A straightforward approach is to replace the action in the single-agent reinforcement learning by the joint action a = (a1, a2, ..., aN ), while it obviously suffers from the issue of the exponentially large action space. Thus several approaches have been proposed to factorize the joint action space to mitigate such issue, which can be roughly grouped into two categories:
• Factorization on policy. This approach explicitly assumes that π(a|s) := ∏N i=1 πi(ai|s), i.e.,
policies are independent (Foerster et al., 2018; Zhang et al., 2018). To mitigate for the instability issue caused by the independent learner, it generally needs a centralized critic. • Factorization on value function. This approach has a similar spirit but factorizes the joint value function into several utility functions, each just involving the actions of one agent (Rashid et al., 2018; Sunehag et al., 2018).
However, these two approaches lack of the interactions between agents, since in their algorithms agent i does not care about the plan of agent j. Indeed, they may suffer from a phenomenon called relative over-generalization in game theory observed by Wei & Luke (2016); Castellini et al. (2019); Palmer et al. (2018). Approaches based on the coordinate graph would effectively prevent such cases, where the value function is factorized as a summation of utility function on pairwise or local joint action (Guestrin et al., 2002; Böhmer et al., 2020). However, they only can be applied in discrete action, small scale game.
Furthermore, despite the empirical success of the aforementioned work in certain scenarios, it still lacks theoretical insight. In this work, we only make a simple yet realistic assumption: the reward function ri of each agent i just depends on its individual action and the actions of its neighbors (and
state), i.e., ri(s,a) = ri(s, ai, aNi), (1)
where we use Ni to denote the neighbors of agent i, s to denote the global state. It says the goal or decision of agent is explicitly influenced by a small subset Ni of other agents. Note that such an assumption is reasonable in lots of real scenarios. For instance,
• The traffic light at an intersection makes the decision on the phase changing mainly relying on the traffic flow around it and the policies of its neighboring traffic light. • The main goal of a defender in a soccer game is to tackle the opponent’s attacker, while he rarely needs to pay attention to opponent goalkeeper’s strategy.
Based on the assumption in equation 1, we propose a principled multi-agent reinforcement learning algorithm in the framework of probabilistic inference, where the objective is to maximize the long term reward of the group, i.e., ∑∞ t=0 ∑N i=1 γ trti ( see details in section 4).
Note since each agent’s reward depends on its neighbor, we still need a joint policy to maximize the global reward through interactions. In this paper, we derive an iterative procedure for such interaction to learn the joint policy in collaborative MARL and name it intention propagation. Particularly,
• In the first round, each agent i makes an independent decision and spreads out his plan µ̃i(we name it intention) to neighbors. • In the second round, agents i changes its initial intention properly based on its neighbors’ intention µ̃j , j ∈ Ni and propagates its intention µ̃i again. • In the third round, it changes the decision in the second round with a similar argument. • As this procedure goes on, we show the final output of agents’ policy converges to the mean field
approximation (the variational inference method from the probabilistic graphical model (Bishop, 2006)) of the joint policy.
In addition, this joint policy has the form of Markov Random Field induced by the locality of the reward function (proposition 1). Therefore, such a procedure is computationally efficient when the underlying graph is sparse, since in each round, each agent just needs to care about what its neighbors intend to do. Remark: (1) Our work is not related to the mean-field game (MFG) (Yang et al., 2018). The goal of the MFG is to find the Nash equilibrium, while our work aims to the optimal joint policy in the collaborative game. Furthermore, MFG generally assumes agents are identical and interchangeable. When the number of agents goes to infinity, MFG can view the state of other agents as a population state distribution. In our problem, we do not have such assumptions.
(2) our analysis is not limited to the mean-field approximation. When we change the message passing structure of intention propagation, we can show that it converges to other approximation of the joint policy, e.g., loopy belief propagation in variational inference (Yedidia et al., 2001) (see Appendix B.2 ).
Contributions: (1) We propose a principled method named intention propagation to solve the joint policy collaborative MARL problem; (2) Our method is computationally efficient, which can scale up to one thousand agents and thus meets the requirement of real applications; (3) Empirically, it outperforms state-of-the-art baselines with a wide margin when the number of agents is large; (4) Our work builds a bridge between MARL and neural embedded probabilistic inference, which would lead to new algorithms beyond intention propagation.
Notation: sti and ati represent the state and action of agent i at time step t. The neighbors of agent i are represented asNi. We denote X as a random variable with domain X and refer to instantiations of X by the lower case character x. We denote a density on X by p(x) and denote the space of all such densities as by P .
2 RELATED WORK
We first discuss the work of the factorized approaches on the joint policy. COMA designs a MARL algorithm based on the actor-critic framework with independent actors πi(ai|s), where the joint policy is factorized as π(a|s) = ∏N i=1 πi(ai|s) (Foerster et al., 2018). MADDPG considers a MARL with the cooperative or competitive setting, where it creates a critic for each agent (Lowe et al., 2017). Other similar works may include (de Witt et al., 2019; Wei et al., 2018). Another way is to factorize the value functions into several utility functions. Sunehag et al. (2018) assumes that the
overall Q function can be factorized as Q(s, a1, a2, .., aN ) = ∑N i=1Qi(si, ai) . QMIX extends this work to include a richer class of function, where it assumes the overall Q function is a monotonic function w.r.t. each Qi(si, ai) (Rashid et al., 2018). Similarly, Son et al. (2019) further relax the structure constraint on the joint value function. However these factorized methods suffer from the relative overgeneralization issue (Castellini et al., 2019; Palmer et al., 2018). Generally speaking, it pushes the agents to underestimate a certain action because of the low rewards they receive, while they could get a higher one by perfectly coordinating.
A middle ground between the (fully) joint policy and the factorized policy is the coordination graph (Guestrin et al., 2002), where the value function is factorized as a summation of the utility function on the pairwise action. Böhmer et al. (2020); Castellini et al. (2019) combine deep learning techniques with the coordination graph. It addresses the issue of relative overgeneralization, but still has two limitations especially in the large scale MARL problem. (1) The max-sum algorithm can just be implemented in the discrete action space since it needs a max-sum operation on the action of Q function. (2) Even in the discrete action case, each step of the Q learning has to do several loops of max-sum operation over the whole graph if there is a cycle in graph. Our algorithm can handle both discrete and continuous action space cases and alleviate the scalability issue by designing an intention propagation network.
Another category of MARL is to consider the communication among agents. The attention mechanism is used to decide when and who to communicate with (Das et al., 2018). Foerster et al. (2016) propose an end-to-end method to learn communication protocol. In (Liu et al., 2019; Chu et al., 2020), each agent sends the action information to it neighbors. In addition, Chu et al. (2020) require a strong assumption that the MDP has the spatial-temporal Markov property. However, they utilizes neighbor’s action information in a heuristic way and thus it is unclear what the agents are learning (e.g., do they learn the optimal joint policy to maximize the group reward? ). Jiang et al. (2020) propose DGN which uses GNN to spread the state embedding information to neighbors. However each agent still uses an independent Q learning to learn the policy and neglects other agents’ plans. In contrast, we propose a principled algorithm, where each agent makes decision considering other agents’ plan. Such procedure can be parameterized by GNN and other neural networks (see section 4.1 and appendix B.2). We prove its convergence to the solution of variational inference methods.
3 BACKGROUNDS
Probabilistic Reinforcement Learning: Probabilistic reinforcement learning (PRL) (Levine, 2018) is our building block. PRL defines the trajectory τ up to time step T as τ = [s0, a0, s1, a1, ..., sT , aT , sT+1]. The probability distribution of the trajectory τ induced by the optimal policy is defined as p(τ) = [p(s0) ∏T t=0 p(s t+1|st, at)] exp (∑T t=0 r(s t, at) ) . While the probability of the trajectory τ under the policy π(a|s) is defined as p̂(τ) = p(s0) ∏T t=0 p(s
t+1|st, at)π(at|st). The objective is to minimize the KL divergence between p̂(τ) and p(τ). It is equivalent to the maximum entropy reinforcement learning
max π J(π) = T∑ t=0 E[r(st, at) +H(π(at|st))],
where we omit the discount factor γ and the regularization factor α of the entropy term, since they are easy to incorporate into the transition and reward respectively. Notice that in this framework the max operator of the Bellman optimality equation is replaced by the softmax operator, and thus the optimal policy is a softmax function of the Q function (Haarnoja et al., 2017). This framework subsumes state-of-the-art algorithms such as soft actor-critic (SAC) (Haarnoja et al., 2018). In each iteration, SAC minimizes the following loss functions of Q, π, and V, respectively:
E_{(s^t, a^t)∼D}[ ( Q(s^t, a^t) − r(s^t, a^t) − γ E_{s^{t+1}∼p}[V(s^{t+1})] )^2 ],
E_{s^t∼D} E_{a^t∼π}[ log π(a^t|s^t) − Q(s^t, a^t) ],
E_{s^t∼D}[ ( V(s^t) − E_{a^t∼π_θ}[ Q(s^t, a^t) − log π(a^t|s^t) ] )^2 ], where D is the replay buffer.
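To make the three SAC objectives above concrete, the following is a minimal PyTorch-style sketch of how they could be computed for one replay batch; the network interfaces (`q_net`, `v_net`, `policy`), the batch fields, and the `gamma` value are illustrative assumptions rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def sac_losses(q_net, v_net, v_target_net, policy, batch, gamma=0.99):
    """Sketch of the three SAC losses (Q, policy, V) for one replay batch.

    `batch` is assumed to hold tensors: state, action, reward, next_state.
    `policy(state)` is assumed to return (sampled_action, log_prob).
    """
    s, a, r, s_next = batch["state"], batch["action"], batch["reward"], batch["next_state"]

    # Q loss: Q(s, a) should match r + gamma * V_target(s')
    with torch.no_grad():
        target_q = r + gamma * v_target_net(s_next).squeeze(-1)
    q_loss = F.mse_loss(q_net(s, a).squeeze(-1), target_q)

    # Policy loss: minimize E[log pi(a|s) - Q(s, a)] with freshly sampled actions
    new_a, log_prob = policy(s)
    policy_loss = (log_prob - q_net(s, new_a).squeeze(-1)).mean()

    # V loss: V(s) should match E_a[Q(s, a) - log pi(a|s)]
    with torch.no_grad():
        v_target = q_net(s, new_a).squeeze(-1) - log_prob
    v_loss = F.mse_loss(v_net(s).squeeze(-1), v_target)

    return q_loss, policy_loss, v_loss
```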
Function Space Embedding of Distribution: In our work, we use the tool of embedding in a Reproducing Kernel Hilbert Space (RKHS) to design an intention propagation procedure (Smola et al., 2007). Let φ(X) be an implicit feature map and X be a random variable with distribution p(x). The embedding of p(x) is given by µ_X := E_X[φ(X)] = ∫ φ(x) p(x) dx, i.e., the distribution is mapped to its expected feature map. Assuming that there exists a feature space in which the embeddings are injective, we can treat the embedding µ_X of the density p(x) as a sufficient statistic of the density, i.e., any information we need from the density is preserved in µ_X (Smola et al., 2007). This injectivity assumption generally holds under mild conditions (Sriperumbudur et al., 2008). This property is important since we can reformulate a functional f : P → R of p(·) using the embedding only, i.e., f(p(x)) = f̃(µ_X). It can also be generalized to the operator case. In particular, applying an operator T : P → R^d to a density can be equivalently carried out using its embedding, T ◦ p(x) = T̃ ◦ µ_X, where T̃ : F → R^d is the alternative operator acting on the embedding. In practice, µ_X, f̃, and T̃ have a complicated dependence on φ. As such, we approximate them by neural networks, which is known as the neural embedding approach of distribution (Dai et al., 2016).
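As a concrete illustration of the embedding µ_X = E[φ(X)], the snippet below estimates an empirical kernel mean embedding from samples using an explicit random-Fourier-feature map; the feature dimension, bandwidth, and sample sizes are arbitrary choices made only for illustration.

```python
import numpy as np

def random_fourier_features(x, n_features=64, bandwidth=1.0, seed=0):
    """Explicit (approximate) feature map phi(x) for an RBF kernel."""
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    w = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

def empirical_mean_embedding(samples, **kwargs):
    """mu_X ~= (1/n) sum_i phi(x_i): a finite-sample estimate of E[phi(X)]."""
    return random_fourier_features(samples, **kwargs).mean(axis=0)

# Example: embed 1000 samples from a 2-D Gaussian into a 64-dimensional summary.
samples = np.random.randn(1000, 2)
mu = empirical_mean_embedding(samples)
```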
4 OUR METHOD
In this section, we present our method, intention propagation, for collaborative multi-agent reinforcement learning. To begin with, we formally define the problem as a networked MDP. The network is characterized by a graph G = (V, E), where each vertex i ∈ V represents an agent and each edge ij ∈ E represents a communication link between agents i and j. We say i and j are neighbors if they are connected by an edge. The corresponding networked MDP is characterized by a tuple ({S_i}_{i=1}^N, {A_i}_{i=1}^N, p, {r_i}_{i=1}^N, γ, G), where N is the number of agents, S_i is the local state of agent i, and A_i denotes the set of actions available to agent i. We let S := ∏_{i=1}^N S_i and A := ∏_{i=1}^N A_i be the global state and joint action space respectively. At time step t+1, the global state s^{t+1} ∈ S is drawn from the transition s^{t+1} ∼ p(·|s^t, a^t), conditioned on the current state s^t and the joint action a^t = (a_1^t, a_2^t, ..., a_N^t) ∈ A. Each transition yields a reward r_i^t = r_i(s^t, a^t) for agent i, and γ is the discount factor. The aim of our algorithm is to learn a joint policy π(a^t|s^t) that maximizes the overall long-term reward (with an entropy term H(·|s^t) on the joint action a):
η(π) = E[ ∑_{t=0}^∞ γ^t ( ∑_{i=1}^N r_i^t + H(·|s^t) ) ],
where each agent i can only observe its own state s_i and the messages received through neighborhood communication. We denote the neighbors of agent i as N_i and further assume that the reward r_i depends on the state and on the actions of agent i and its neighbors, i.e., r_i(s, a) := r_i(s, a_i, a_{N_i}). Such an assumption is reasonable in many real scenarios, as discussed in the introduction. In the following, we start the derivation with the full-observation case and discuss how to handle partial observation later. The roadmap of the derivation is as follows. First, we prove that the optimal policy has a Markov Random Field (MRF) form, which reduces the exponentially large search space to a polynomial one. However, implementing an MRF policy is not trivial in the RL setting (e.g., sampling an action from the policy). Thus we resort to variational inference methods (we focus on the mean-field approximation in the main paper and leave other methods to the appendix), which, however, introduce complicated computations. Finally, we apply the kernel embedding method introduced in section 3 to solve this problem and learn the kernel embedding with neural networks. We also discuss how to handle the partially observable setting.
4.1 REDUCE POLICY SEARCHING SPACE
Recall that our aim is to maximize the long-term reward with the entropy term. Therefore, we follow the definition of the optimal policy in probabilistic reinforcement learning (Levine, 2018) and obtain Proposition 1. It says that, under the assumption r_i(s, a) = r_i(s, a_i, a_{N_i}), the optimal policy has the form of a Markov Random Field (MRF). We prove the following proposition in Appendix I.1.
Proposition 1 The optimal policy has the form π*(a^t|s^t) = (1/Z) exp( ∑_{i=1}^N ψ_i(s^t, a_i^t, a_{N_i}^t) ), where Z is the normalization term.
This proposition is important since it suggests that we should construct the policy π(a^t|s^t) in this form, e.g., with a parametric family, so that it contains the optimal policy. If agent i and its neighbors compose a clique, the policy reduces to an MRF and ψ is the potential function. One common example is that the reward is a function of pairwise actions, i.e., r(s, a) = ∑_{i∈V} r(s, a_i) + ∑_{(i,j)∈E} r(s, a_i, a_j). Then the policy has the form
π(a|s) = (1/Z) exp( ∑_{i∈V} ψ̃_i(s, a_i) + ∑_{(i,j)∈E} ψ̃_{i,j}(s, a_i, a_j) ),
which is a pairwise MRF. For instance, in traffic light control, we can define a 2-D grid network and a pairwise reward function. The MRF formulation of the policy effectively reduces the policy space compared with the exponentially large one of the fully connected graph.
A straightforward way to leverage this observation is to define π_θ(a^t|s^t) as an MRF and then apply a policy gradient algorithm, e.g., in the SAC style: ∇_θ E_{s^t∼D} E_{a^t∼π_θ}[ log π_θ(a^t|s^t) − Q_κ(s^t, a^t) ]. However, it is still very hard to sample the joint action a^t from π_θ(a^t|s^t). In the next section, we resort to embeddings to alleviate this problem.
Recall that the remaining problem is how to sample the joint action from an MRF policy. Classical approaches include Markov Chain Monte Carlo methods and variational inference. The former guarantees exact samples from the target density but is computationally intensive, and is therefore not applicable in the multi-agent RL setting, where we need to sample actions at every interaction with the environment. As such, we advocate the second approach. Here we use the mean-field approximation for simplicity of presentation and defer other variational inference methods, e.g., loopy belief propagation, to Appendix B.2. We use an intention propagation network with the embedding of the distribution to represent the update rule of the mean-field approximation.
Mean field approximation. We approximate π*(a|s) by the mean-field variational family ∏_{i=1}^N p_i(a_i|s):
min_{(p_1, p_2, ..., p_N)} KL( ∏_{i=1}^N p_i(a_i|s) || π*(a|s) ),
where we omit the superscript t to simplify the notation. We denote the optimal solution of the above problem as q_i. Using coordinate ascent variational inference, the optimal solution q_i should satisfy the following fixed-point equation (Bishop, 2006). Since the objective function is (generally) non-convex, this update converges to a local optimum (Blei et al., 2017).
q_i(a_i|s) ∝ exp ∫ ∏_{j≠i} q_j(a_j|s) log π*(a|s) da. (2)
For simplicity of presentation, in the following discussion we assume that the policy is a pairwise MRF, but the methodology applies to more general cases with more involved expressions. Particularly, we assume π*(a|s) = (1/Z) exp( ∑_{i∈V} ψ_i(s, a_i) + ∑_{(i,j)∈E} ψ_{ij}(s, a_i, a_j) ). We plug this into equation 2 and obtain the following fixed-point equation:
log q_i(a_i|s) = c_i + ψ_i(s, a_i) + ∑_{j∈N_i} ∫ q_j(a_j|s) ψ_{ij}(s, a_i, a_j) da_j, (3)
where c_i is a constant that does not depend on a_i.
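For intuition, the sketch below runs the fixed-point update of equation 3 for a small discrete pairwise MRF; the potentials are random placeholders and the graph is a toy three-agent chain, both assumptions made only for illustration.

```python
import numpy as np

def mean_field_update(psi_node, psi_edge, neighbors, n_iters=10):
    """Coordinate-ascent mean-field for a discrete pairwise MRF.

    psi_node[i]      : (A,) node potential psi_i(s, a_i) for a fixed state s
    psi_edge[(i, j)] : (A, A) edge potential psi_ij(s, a_i, a_j), rows index a_i
    neighbors[i]     : list of neighbors of agent i
    Returns q[i], the approximate marginal over agent i's actions.
    """
    n, A = len(psi_node), psi_node[0].shape[0]
    q = [np.full(A, 1.0 / A) for _ in range(n)]          # uniform initialization
    for _ in range(n_iters):
        for i in range(n):
            log_qi = psi_node[i].copy()
            for j in neighbors[i]:
                # the integral over a_j becomes a sum: E_{q_j}[psi_ij(a_i, a_j)]
                log_qi += psi_edge[(i, j)] @ q[j]
            log_qi -= log_qi.max()                        # numerical stability
            q[i] = np.exp(log_qi) / np.exp(log_qi).sum()
    return q

# Toy 3-agent chain 0-1-2 with 2 actions per agent and random potentials.
rng = np.random.default_rng(0)
psi_node = [rng.normal(size=2) for _ in range(3)]
psi_edge = {}
for (i, j) in [(0, 1), (1, 2)]:
    m = rng.normal(size=(2, 2))
    psi_edge[(i, j)], psi_edge[(j, i)] = m, m.T
q = mean_field_update(psi_node, psi_edge, {0: [1], 1: [0, 2], 2: [1]})
```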
We can understand this mean-field update rule from the perspective of intention propagation. Equation 3 basically says that each agent cannot make its decision independently; instead, its policy q_i should depend on the policies of others, particularly those of its neighbors. Clearly, if we can construct an intention propagation procedure corresponding to equation 3, the final policy obtained from intention propagation will converge to the mean-field approximation of the joint policy. However, we cannot directly apply this update in our algorithm, since it includes a complicated integral. To this end, in the next section we resort to the embedding of the distribution q_i (Smola et al., 2007), which maps the distributions into a reproducing kernel Hilbert space.
Embed the update rule. Observe that the fixed-point formulation in equation 3 says that q_i(a_i|s) is a functional of the neighborhood marginal distributions {q_j(a_j|s)}_{j∈N_i}, i.e., q_i(a_i|s) = f(a_i, s, {q_j}_{j∈N_i}). Denote the d-dimensional embedding of q_j(a_j|s) by µ̃_j = ∫ q_j(a_j|s) φ(a_j|s) da_j. Notice that the form of the feature map φ is not fixed at the moment and will be learned implicitly by the neural network. Following the assumption in Section 3 that there exists a feature space in which the embeddings are injective, we can replace the distribution by its embedding and obtain the fixed-point formulation
q_i(a_i|s) = f̃(a_i, s, {µ̃_j}_{j∈N_i}). (4)
For more theoretical guarantees on the kernel embedding, e.g., the convergence rate of the empirical kernel embedding, please refer to (Smola et al., 2007). Roughly speaking, once there are enough data, the learned kernel embedding is close to the true kernel embedding. Therefore, the updates of equation 4 and equation 5 below would converge to the fixed point of equation 2. Recall from section 3 that we can integrate both sides w.r.t. the feature map φ, which yields µ̃_i = ∫ q_i(a_i|s) φ(a_i|s) da_i = ∫ f̃(a_i, s, {µ̃_j}_{j∈N_i}) φ(a_i|s) da_i. Thus we can rewrite it as a new operator on the embedding, which again induces a fixed-point equation µ̃_i = T̃ ◦ (s, {µ̃_j}_{j∈N_i}). In practice, we run this fixed-point update for M iterations:
µ̃_i^m ← T̃ ◦ (s, {µ̃_j^{m−1}}_{j∈N_i}), m = 1, ..., M. (5)
Finally, we output the distribution q_i with q_i(a_i|s) = f̃(a_i, s, {µ̃_j^M}_{j∈N_i}). In the next section, we show how to represent these quantities by neural networks.
Parameterization by Neural Networks. In general, f̃ and T̃ have a complicated dependence on ψ and φ. Instead of learning this dependence, we directly approximate f̃ and T̃ by neural networks. For instance, we can represent the operator T̃ in equation 5 by µ̃_i = σ(W_1 s + W_2 ∑_{j∈N_i} µ̃_j), where σ is a nonlinear activation function and W_1, W_2 are matrices with row dimension d. Interestingly, this is exactly a message-passing form of a Graph Neural Network (GNN) (Hamilton et al., 2017). Thus we can use an M-hop (layer) GNN to represent the fixed-point update in equation 5. If the action space is discrete, the output q_i(a_i|s) is a softmax function; in this case f̃ is a fully connected layer with a softmax output. When the action space is continuous, we output a Gaussian distribution with the reparametrization trick (Kingma & Welling, 2019). We denote this intention propagation procedure as the intention propagation network Λ_θ(a|s) with parameter θ in Figure 1(b). Figure 1(a) illustrates the graph and the message-passing procedure. Agent 1 receives the embeddings (intentions) µ̃_2^{m−1}, µ̃_5^{m−1}, µ̃_6^{m−1} from its neighbors, updates its own embedding with the operator T̃, and spreads its new embedding µ̃_1^m at the next iteration. Figure 1(b) gives the details of the GNN parameterization. Here we use agent 1 as an example and, to ease the exposition, assume agent 1 has only one neighbor, agent 2. Each agent observes its own state s_i. After an MLP and a softmax layer (we do not sample actions here, but just use the probabilities of the actions), we get an embedding µ̃_i^0, which is the initial distribution of the policy. Then agent 1 receives the embedding µ̃_2^0 of its neighbor (agent 2). After a GNN layer that combines the information, e.g., µ̃_1^1 = Relu[W_1(s_1 + s_2) + W_2(µ̃_1^0 + µ̃_2^0)] (W_1, W_2 are shared across all agents as in a GNN), we obtain the new embedding µ̃_1^1 of agent 1. Notice we also do message passing on the states, since in practice the global state is not available. The second layer proceeds similarly. We defer a detailed discussion and extensions to other neural networks to Appendix B due to the space constraint.
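A minimal PyTorch sketch of one possible parameterization of the M-hop intention propagation described above is given below; the layer sizes, the sum aggregation over neighbors, and the discrete-action softmax head are illustrative assumptions rather than the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class IntentionPropagation(nn.Module):
    """M-hop message passing that outputs each agent's policy q_i(a_i | s)."""

    def __init__(self, state_dim, action_dim, embed_dim=128, hops=2):
        super().__init__()
        self.init_policy = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU(),
                                         nn.Linear(embed_dim, action_dim), nn.Softmax(dim=-1))
        self.lift = nn.Linear(action_dim, embed_dim)     # maps mu^0 (action probs) to embed space
        self.w_state = nn.Linear(state_dim, embed_dim, bias=False)   # W1 in the text
        self.w_embed = nn.Linear(embed_dim, embed_dim, bias=False)   # W2 in the text
        self.head = nn.Linear(embed_dim, action_dim)     # f_tilde: final embedding -> logits
        self.hops = hops

    def forward(self, states, adj):
        # states: (N, state_dim) local states; adj: (N, N) adjacency with self loops.
        s_agg = adj @ states                      # message passing on states (partial observation)
        mu = self.lift(self.init_policy(states))  # mu^0_i: embedding of the independent initial policy
        for _ in range(self.hops):
            # mu_i <- relu(W1 * sum of neighbor states + W2 * sum of neighbor embeddings)
            mu = torch.relu(self.w_state(s_agg) + self.w_embed(adj @ mu))
        return torch.softmax(self.head(mu), dim=-1)  # q_i(a_i | s), one row per agent

# Usage: 5 agents on a ring graph, 4-dim local states, 3 discrete actions.
adj = torch.eye(5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[i, (i - 1) % 5] = 1.0
policy = IntentionPropagation(state_dim=4, action_dim=3)
q = policy(torch.randn(5, 4), adj)                # shape (5, 3)
```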
4.2 ALGORITHM
We are ready to give the overall algorithm by combining all pieces together. The detailed derivations of V_i, Q_i for agent i and the corresponding loss functions are given in Appendix I due to the space constraint. Recall that we have a mean-field approximation q_i of the joint policy, obtained by M iterations of intention propagation, and that we represent this procedure by an M-hop graph neural network with parameter θ as discussed above. Notice that this factorization is different from the case π(a|s) = ∏_{i=1}^N π(a_i|s) in (Zhang et al., 2018; Foerster et al., 2018), since q_i(a_i|s) depends on the information of other agents' plans. Using the mean-field approximation q_i, we can further decompose Q = ∑_{i=1}^N Q_i and V = ∑_{i=1}^N V_i; see Appendix I. We use neural networks to approximate the V_i and Q_i functions with parameters η_i and κ_i respectively. As in TD3 (Fujimoto et al., 2018), for each agent i we have a target value network V_{η̄_i} and two Q_{κ_i} functions to mitigate overestimation: they are trained simultaneously with the same data, and only the minimum of the two is used as the target in the value update. In the following, we denote q_i(a_i|s) as q_{i,θ}(a_i|s) to explicitly indicate its dependence on the intention propagation network Λ_θ. We use D to denote the replay buffer. The whole algorithm is presented in Algorithm 1.
Loss Functions. The loss of the value function V_i:
J(η_i) = E_{s^t∼D}[ (1/2) ( V_{η_i}(s^t) − E_{(a_i^t, a_{N_i}^t)∼(q_i, q_{N_i})}[ Q_{κ_i}(s^t, a_i^t, a_{N_i}^t) − log q_{i,θ}(a_i^t|s^t) ] )^2 ].
The loss of Q_i:
J(κ_i) = E_{(s^t, a_i^t, a_{N_i}^t)∼D}[ (1/2) ( Q_{κ_i}(s^t, a_i^t, a_{N_i}^t) − Q̂_i(s^t, a_i^t, a_{N_i}^t) )^2 ],
where Q̂_i(s^t, a_i^t, a_{N_i}^t) = r_i + γ E_{s^{t+1}∼p(·|s^t, a^t)}[ V_{η̄_i}(s^{t+1}) ].
The loss of the policy:
J(θ) = E_{s^t∼D, a^t∼∏_{i=1}^N q_i}[ ∑_{i=1}^N log q_{i,θ}(a_i^t|s^t) − ∑_{i=1}^N Q_{κ_i}(s^t, a_i^t, a_{N_i}^t) ].
It is interesting to compare the loss with the counterpart in the single agent SAC in section 3.
• q_{i,θ}(a_i|s) is the output of the intention propagation network Λ_θ(a|s), parameterized by a graph neural network. Thus it depends on the policies of other agents.
• Q_{κ_i} depends on the actions of agent i and its neighbors, which can also be accomplished by the graph neural network in practice.
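To make the per-agent losses concrete, below is a hedged PyTorch-style sketch of how J(η_i), J(κ_i), and the per-agent contribution to J(θ) might be assembled for one agent and one batch; the tensor shapes and network interfaces (v_net, q_net, and the freshly sampled neighborhood actions) are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def agent_losses(v_net, q_net, v_target_net, a_new, log_q_new, batch, gamma=0.99):
    """Per-agent losses of intention propagation (a sketch, not the reference code).

    batch    : dict with state s, stored local joint action a_loc = (a_i, a_Ni),
               reward r_i, and next state s_next (torch tensors).
    a_new    : local joint action (a_i, a_Ni) freshly sampled from the current policies.
    log_q_new: log q_{i,theta}(a_i | s) of agent i's freshly sampled action.
    """
    s, a_loc, r_i, s_next = batch["s"], batch["a_loc"], batch["r_i"], batch["s_next"]

    # J(kappa_i): Q_i(s, a_i, a_Ni) should match r_i + gamma * V_target(s')
    with torch.no_grad():
        q_target = r_i + gamma * v_target_net(s_next).squeeze(-1)
    q_loss = F.mse_loss(q_net(s, a_loc).squeeze(-1), q_target)

    # J(eta_i): V_i(s) should match E[Q_i(s, a_i, a_Ni) - log q_i(a_i | s)]
    with torch.no_grad():
        v_target = q_net(s, a_new).squeeze(-1) - log_q_new
    v_loss = F.mse_loss(v_net(s).squeeze(-1), v_target)

    # Contribution of agent i to J(theta): push the policy toward high Q_i and high entropy
    policy_term = (log_q_new - q_net(s, a_new).squeeze(-1)).mean()
    return q_loss, v_loss, policy_term
```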
Algorithm 1 Intention Propagation
Inputs: Replay buffer D. V_i, Q_i for each agent i. Intention propagation network Λ_θ(a^t|s) with outputs {q_{i,θ}}_{i=1}^N. Learning rates l_η, l_κ, l_θ. Moving-average parameter τ for the target networks.
for each iteration do
  for each environment step do
    Sample a^t ∼ ∏_i q_{i,θ}(a_i^t|s^t) from the intention propagation network; s^{t+1} ∼ p(s^{t+1}|s^t, a^t).
    D ← D ∪ { (s_i^t, a_i^t, r_i^t, s_i^{t+1}) }_{i=1}^N
  end for
  for each gradient step do
    Update η_i, κ_i, θ, η̄_i:
    η_i ← η_i − l_η ∇J(η_i), κ_i ← κ_i − l_κ ∇J(κ_i),
    θ ← θ − l_θ ∇J(θ), η̄_i ← τ η_i + (1 − τ) η̄_i
  end for
end for
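A high-level Python sketch of the loop in Algorithm 1 is given below; `env`, the agent and policy objects, and the loss helpers are placeholders that stand in for the components described above, not a specification of the actual implementation.

```python
def train(env, policy, agents, buffer, n_iters=1000, env_steps=1, grad_steps=1, tau=0.005):
    """Skeleton of Algorithm 1 (intention propagation). All components are placeholders."""
    state = env.reset()
    for _ in range(n_iters):
        # --- environment interaction ---
        for _ in range(env_steps):
            actions = policy.sample(state)                 # a^t ~ prod_i q_{i,theta}(a_i | s^t)
            next_state, rewards, done, _ = env.step(actions)
            buffer.add(state, actions, rewards, next_state)
            state = env.reset() if done else next_state
        # --- gradient updates ---
        for _ in range(grad_steps):
            batch = buffer.sample()
            for agent in agents:                           # update eta_i and kappa_i
                agent.update_value_and_q(batch)
            policy.update(batch, agents)                   # update theta via J(theta)
            for agent in agents:                           # Polyak averaging of target networks
                agent.soft_update_target(tau)
```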
Handle the Partial Observation: So far, we have assumed that agents can observe the global state, while in practice each agent observes only its own state s_i. Thus, besides the communication used by intention propagation, we also do message passing on the state embeddings with the graph neural network. The idea of this local state sharing is similar to (Jiang et al., 2020), but the overall structure of our work is quite different from (Jiang et al., 2020); see the discussion in the related work.
5 EXPERIMENT
In this section, we evaluate our method and eight state-of-the-art baselines on more than ten different scenarios from three popular MARL platforms: (1) CityFlow, a traffic signal control environment (Tang et al., 2019) and an advanced version of SUMO (Lopez et al., 2018) widely used in the MARL community; (2) the multiple particle environment (MPE) (Mordatch & Abbeel, 2017); and (3) the grid-world platform MAgent (Zheng et al., 2018). Our intention propagation (IP) empirically outperforms all baselines on all scenarios, especially on large-scale problems.
5.1 SETTINGS
We give a brief introduction to the settings of the experiments and defer the details, such as the hyperparameter tuning of intention propagation and the baselines, to appendix D. Notice that all algorithms are tested in the partially observable setting, i.e., each agent can only observe its own state s_i.
In the traffic signal control problem (left panel in Figure 2), each traffic light at an intersection is an agent. The goal is to learn policies for the traffic lights that reduce the average waiting time and alleviate traffic jams. Graph for CityFlow: the graph is a 2-D grid induced by the map (e.g., Figure 2), where the roads are the edges connecting the agents. We define the cost −r_i as the traveling time of vehicles around intersection i, so the total cost indicates the average traveling time. Obviously, r_i has a close relationship with the actions of the neighbors of agent i but depends little on traffic lights far away; therefore, our assumption on the reward function holds. We evaluate different methods on both real-world and synthetic traffic data under different numbers of intersections.
MPE (Mordatch & Abbeel, 2017) and MAgent (Zheng et al., 2018) (Figure 2) are popular particle environments in MARL (Lowe et al., 2017; Jiang et al., 2020). Graph for particle environments: each agent is connected (i.e., has an edge) to its k nearest neighbors. Since the graph is dynamic, we update the adjacency matrix every n steps, e.g., n = 5, which is only a small overhead compared with training the neural networks. The reward functions also have a local property, since they are explicitly or implicitly affected by the distances between agents. For instance, in heterogeneous navigation, if small agents collide with big agents, they obtain a large negative reward, so their reward depends on the actions of nearby agents. Similarly, in the jungle environment, an agent can attack nearby agents to obtain a high reward.
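As an illustration of how the dynamic graph could be built, the snippet below constructs a k-nearest-neighbor adjacency matrix from agent positions; recomputing it only every n environment steps, as mentioned above, amortizes its cost. The 2-D position array and the self-loop convention are assumptions for this sketch.

```python
import numpy as np

def knn_adjacency(positions, k=8, self_loops=True):
    """Build a 0/1 adjacency matrix connecting each agent to its k nearest neighbors.

    positions: (N, 2) array of agent coordinates.
    """
    n = positions.shape[0]
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                 # exclude self from the neighbor search
    adj = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(dists[i])[:min(k, n - 1)]
        adj[i, neighbors] = 1.0
    if self_loops:
        np.fill_diagonal(adj, 1.0)
    return adj

# Recompute every n steps, e.g. if step % 5 == 0: adj = knn_adjacency(positions, k=8)
```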
Baselines. We compare our method against eight different baselines mentioned in the introduction and related work: QMIX (Rashid et al., 2018); MADDPG (Lowe et al., 2017); permutation invariant critic (PIC) (Liu et al., 2019); graph convolutional reinforcement learning (DGN) (Jiang et al., 2020); independent Q-learning (IQL) (Tan, 1993); permutation invariant MADDPG with a data-shuffling mechanism (MADDPGS); COMA (Foerster et al., 2018); and MFQ (Yang et al., 2018). These baselines are reported as leading algorithms for solving tasks in CityFlow, MPE, and MAgent. Among them, DGN and MFQ need communication with neighbors during training and execution. Also notice that PIC assumes the actor can observe the global state; thus, in the partially observable setting, each agent in PIC also needs to communicate to get the global state information during training and execution. Further details on the baselines are given in appendix E.1.
Neural Network and Parameters. Recall that the intention propagation network is represented by a GNN. In our experiments, the graph neural network has hop = 2 (2 GNN layers, i.e., M = 2) and 1 fully-connected layer at the top. Each layer contains 128 hidden units. Other hyperparameters are listed in appendix H.
5.2 COMPARISON TO STATE-OF-THE-ART
In this section, we compare intention propagation (IP) with the other baselines. The experiments are evaluated by the average episode reward (Lowe et al., 2017); for the CityFlow tasks, the average reward refers to the negative average travel time. All experiments are repeated for 5 runs with different random seeds, and we report the mean and standard deviation in the curves. We report the results of six experiments and defer all others to appendix G due to the limit of space.
CityFlow. We first evaluate our algorithm on the traffic control problem. Particularly, we increase the number of intersections (agents) gradually to increase the difficulty of the tasks. Figure 3 presents the performance of different methods on both real-world and synthetic CityFlow data with different numbers of intersections. On the Manhattan City task, our intention propagation (IP) and the baselines PIC and DGN achieve better rewards than the other methods, while our method reaches a higher reward within fewer steps. On the larger task (N=100), both PIC and DGN have large variance and obtain poor performance. The experiment with N=1225 agents is an extremely challenging task, and our algorithm outperforms all baselines by a wide margin. The runner-up is MADDPG with the data-shuffling mechanism: its final performance is around −4646 and suffers from large variance. In contrast, the performance of our method is around −569 (much higher than the baselines). Clearly, in both real-world and synthetic CityFlow scenarios, the proposed IP method obtains the best performance. We defer further experimental results to appendix G.
MPE and MAgent. Figure 4 shows the performance of different methods on three other representative scenarios: a small task, cooperative navigation (N=30), and two large-scale tasks, heterogeneous navigation (N=100) and prey and predator (N=100). We run all algorithms long enough (more than 1e6 steps). In all experiments, our algorithm performs best. For cooperative navigation, MADDPGS performs better than MADDPG; the potential improvement comes from the data-shuffling mechanism, which makes MADDPGS more robust to the manually specified order of agents. QMIX performs much better than MADDPG, MADDPGS, and IQL, but its performance is not stable even in the small setting (N=30). DGN is better and more stable than QMIX; however, on large-scale settings its performance is much worse than PIC and our intention propagation (IP). Although PIC can solve large-scale tasks, our IP method is still much better. In prey and predator, there are two groups of agents: good agents and adversaries. To make a fair comparison of the rewards of different methods, we fix the good agents' policies and use all the methods to learn the adversaries' policies. Such a setting is commonly used in the literature (Lowe et al., 2017; Liu et al., 2019).
Stability. Stability is a key criterion for evaluating MARL. In all experiments, our method is quite stable with small variance. For instance, as shown in Figure 3(b), DGN approaches −1210 ± 419 on the CityFlow scenario with N=100 intersections, while our method approaches −465 ± 20 after 1.6 × 10^6 steps (much better and more stable). The reason is that, to make the joint decision, each agent in our algorithm can adjust its own policy properly by considering other agents' plans.
Ablation Study: We conduct a set of ablation studies on the effect of the joint policy, the graph, the hop size, the number of neighbors, and the assumption on the reward function. In particular, we find that the joint policy is essential for good performance. In CityFlow, the traffic graph (the 2-D grid induced by the road map) performs better than the fully connected graph. In MPE and MAgent, we define the adjacency matrix based on the k nearest neighbors and pick k = 8 in large-scale problems and k = 4 in small-scale problems. In all of our experiments, we choose the 2-hop GNN. Due to the limit of space, we summarize our conclusions here and place the details in appendix F.
A ORGANIZATION OF THE APPENDIX
In appendix B, we give the details of the intention propagation network and the parameterization of the GNN, and explain intention propagation from the view of MARL. Finally, we extend intention propagation to other approximations that converge to other solutions of variational inference; these extensions can also be easily parameterized by neural networks.
In Appendix C, we give the details of the algorithm deferred from the main paper. Appendix D summarizes the configuration of the experiments and the MARL environments. Appendix E gives more details on the baselines and the hyperparameters of the GNN used in our model. Appendix F presents the ablation study deferred from the main paper. Appendices G and H give more experimental results and the hyperparameters used in the algorithms. In Appendix I, we derive the algorithm and prove Proposition 1.
B INTENTION PROPAGATION NETWORK
B.1 DETAILS ON THE INTENTION PROPAGATION NETWORK
In this section, we give the details of the intention propagation network deferred from the main paper. We first illustrate the message passing of intention propagation derived in section 4.1, and then give details on how to construct the graph neural network.
Message passing and explanation from the view of MARL: µ̃_i is the embedding of the policy of agent i, which represents the intention of agent i. At iteration 0, every agent makes an independent decision. The policy of agent i is mapped into its embedding µ̃_i^0, which we call the intention of agent i at iteration 0. Then agent i sends its plan to its neighbors. In Figure 5, µ̃_i^m is the d-dimensional (d = 3 in this figure) embedding of q_i at the m-th iteration of intention propagation. We draw the update of µ̃_1^{(m)} as an example: agent 1 receives the embeddings (intentions) µ̃_2^{m−1}, µ̃_5^{m−1}, µ̃_6^{m−1} from its neighbors, and then updates its own embedding with the operator T̃. After M iterations, we obtain µ̃_1^M and output the policy distribution q_1 using equation 4. A similar procedure holds for the other agents. At each RL step t, we run this procedure (with M iterations) once to generate the joint policy. M is small in general, e.g., M = 2 or 3, so the procedure is efficient.
Parameterization with a GNN: We illustrate the parameterization of the graph neural network in Figure 6. If the action space is discrete, the output q_i(a_i|s) is a softmax function. When it is continuous, we output a Gaussian distribution (mean and variance) with the reparametrization trick (Kingma & Welling, 2019). Here, we draw a 2-hop (layer) GNN to parameterize intention propagation with discrete actions. In Figure 6(b), each agent observes its own state s_i. After an MLP and a softmax layer (we do not sample here, and just use the output probabilities of the actions), we get an embedding µ̃_i^0, which is the initial distribution of the policy. In the following, we use agent 1 as an example and, to ease the exposition, assume agent 1 has just one neighbor, agent 2. Agent 1 receives the embedding µ̃_2^0 of its neighbor. After a GNN layer that combines the information, e.g., Relu[W_1(s_1 + s_2) + W_2(µ̃_1^0 + µ̃_2^0)], we obtain the new embedding µ̃_1^1 of agent 1. Notice we also do message passing on the states, since in practice the global state is not available. In the second layer, we do similar things: agent 1 receives the embedding µ̃_2^1 from its neighbor and obtains a new embedding µ̃_1^2. This embedding then passes through an MLP + softmax layer to output the probability of each action, i.e., q_1(a_1|s).
B.2 EXTENSION TO OTHER VARIATIONAL INFERENCE METHODS AND NEURAL NETWORKS
In this section, we show how to approximate the joint policy with loopy belief propagation (Yedidia et al., 2001), a different variational inference method. This leads to a new form of neural network beyond the vanilla GNN illustrated above.
The objective function in loopy belief propagation is the Bethe free energy (Yedidia et al., 2001). Different from the mean-field approximation, it introduces another variational variable q_{ij}, which brings more flexibility to the approximation. The objective in our case is:
min_{q_i, q_{ij}, (i,j)∈E}  − ∑_i (|N_i| − 1) ∫ q_i(a_i|s) log [ q_i(a_i|s) / ψ_i(s, a_i) ] da_i
+ ∑_{ij} ∫ q_{ij}(a_i, a_j|s) log [ q_{ij}(a_i, a_j|s) / ( ψ_{ij}(s, a_i, a_j) ψ_i(s, a_i) ψ_j(s, a_j) ) ] da_i da_j
s.t. ∫ q_{ij}(a_i, a_j|s) da_j = q_i(a_i|s), ∫ q_{ij}(a_i, a_j|s) da_i = q_j(a_j|s). (6)
Solving the above problem, we obtain the fixed-point algorithm
m_{ij}(a_j|s) ← ∫ ∏_{k∈N_i\j} m_{ki}(a_i|s) ψ_i(s, a_i) ψ_{ij}(s, a_i, a_j) da_i,
q_i(a_i|s) ← ψ_i(s, a_i) ∏_{j∈N_i} m_{ji}(a_i|s).
Similar to the mean-field approximation case, we have m_{ij}(a_j|s) = f(a_j, s, {m_{ki}}_{k∈N_i\j}) and q_i(a_i|s) = g(a_i, s, {m_{ki}}_{k∈N_i}). That is, the messages m_{ij} and the marginals q_i are functionals of the messages from neighbors. Denoting the embeddings ν̃_{ij} = ∫ ψ_j(s, a_j) m_{ij}(a_j|s) da_j and µ̃_i = ∫ ψ_i(s, a_i) q_i(a_i|s) da_i, we have
ν̃_{ij} = T̃ ◦ ( s, {ν̃_{ki}}_{k∈N_i\j} ), µ̃_i = T̃ ◦ ( s, {ν̃_{ki}}_{k∈N_i} ).
Again, we can parameterize the above equations by a (graph) neural network: ν̃_{ij} = σ( W_1 s + W_2 ∑_{k∈N_i\j} ν̃_{ki} ), µ̃_i = σ( W_3 s + W_4 ∑_{k∈N_i} ν̃_{ki} ).
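A minimal PyTorch sketch of one iteration of this embedded loopy-BP update is shown below; it assumes a symmetric adjacency matrix without self loops, uses each agent's local state for the W_1 s and W_3 s terms, and implements the "sum over k in N_i \ j" by subtracting the reverse message — all of these are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoopyBPLayer(nn.Module):
    """One iteration of the embedded loopy-BP update (sketch)."""

    def __init__(self, state_dim, embed_dim=128):
        super().__init__()
        self.w1 = nn.Linear(state_dim, embed_dim, bias=False)
        self.w2 = nn.Linear(embed_dim, embed_dim, bias=False)
        self.w3 = nn.Linear(state_dim, embed_dim, bias=False)
        self.w4 = nn.Linear(embed_dim, embed_dim, bias=False)

    def forward(self, s, nu, adj):
        # s: (N, state_dim) local states; nu: (N, N, embed_dim) message embeddings nu_{ij}
        # (sender i, receiver j); adj: (N, N) symmetric 0/1 adjacency with zero diagonal.
        mask = adj.unsqueeze(-1)
        into_i = (nu * mask).sum(dim=0)                   # (N, E): sum_k nu_{ki} for each i
        # new nu_{ij} = relu(W1 s_i + W2 (sum_{k in N_i} nu_{ki} - nu_{ji}))
        new_nu = torch.relu(self.w1(s).unsqueeze(1)
                            + self.w2(into_i.unsqueeze(1) - nu.transpose(0, 1)))
        mu = torch.relu(self.w3(s) + self.w4(into_i))     # node marginal embeddings mu_i
        return new_nu * mask, mu
```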
In a similar way, we can derive different intention propagation algorithms by changing the objective function, corresponding to, e.g., double-loop belief propagation (Yuille, 2002), tree-reweighted belief propagation (Wainwright et al., 2003), and many others.
C ALGORITHM
We present some remarks on the intention propagation algorithm (Algorithm 1) deferred from the main paper.
Remark: To calculate the loss function J(η_i), each agent needs to sample the global state and (a_i, a_{N_i}). Thus we first sample a global state from the replay buffer and then sample all actions a once using the intention propagation network.
D FURTHER DETAILS ABOUT ENVIRONMENTS AND EXPERIMETAL SETTING
Table 1 summarizes the setting of the tasks in our experiment.
D.1 CITYFLOW
CityFlow (Tang et al., 2019) is an open-source MARL environment for large-scale city traffic signal control 1. After the traffic road map and flow data are fed into the simulator, each vehicle moves from its origin location to its destination. The traffic data contains bidirectional and dynamic flows with turning traffic. We evaluate different methods on both real-world and synthetic traffic data. For real-world data, we select traffic flow data from the Gudang sub-district, Hangzhou, China, and from Manhattan, USA 2. For synthetic data, we simulate several different road networks: a 7 × 7 grid network (N = 49) and large-scale grid networks with N = 10 × 10 = 100, 15 × 15 = 225, and 35 × 35 = 1225. Each traffic light at an intersection is an agent. In the real-world setting (Hangzhou, Manhattan), the graph is a 2-D grid induced by the road map; particularly, the roads are edges which connect the nodes (agents) of the graph. For the synthetic data, the map is an n × n 2-D grid (similar to Figure 7), where edges represent roads and nodes are the traffic lights. We present the experimental results deferred from the main paper in Figure 10.
D.2 MPE
In MPE (Mordatch & Abbeel, 2017) 3, the observation of each agent contains the relative locations and velocities of neighboring agents and landmarks. The number of visible neighbors in an agent's observation is equal to or less than 10.
1https://github.com/cityflow-project/CityFlow 2We download the maps from https://github.com/traffic-signal-control/sample-code. 3To make the environment more computation-efficient, Liu et al. (2019) provided an improved version of MPE. The code is released at https://github.com/IouJenLiu/PIC.
We consider four scenarios in MPE. (1) Cooperative navigation: N agents work together and move to cover L landmarks. If these agents get closer to landmarks, they obtain a larger reward. In this scenario, the agent observes its location and velocity, and the relative locations of the nearest 5 landmarks and N agents. The observation dimension is 26. (2) Prey and predator: N slower cooperating agents must chase the faster adversaries around a randomly generated environment with L large landmarks. Note that the landmarks impede the way of all agents and adversaries, which makes the scenario much more challenging. In this scenario, the agent observes its location and velocity, and the relative locations of the nearest 5 landmarks and 5 preys. The observation dimension is 34. (3) Cooperative push: N cooperating agents are rewarded for pushing a large ball to a landmark. In this scenario, each agent can observe the 10 nearest agents and 5 nearest landmarks. The observation dimension is 28. (4) Heterogeneous navigation: this scenario is similar to cooperative navigation except that the N agents are divided into N/2 big, slow agents and N/2 small, fast agents. If small agents collide with big agents, they obtain a large negative reward. In this scenario, each agent can observe the 10 nearest agents and 5 nearest landmarks. The observation dimension is 26.
Further details about this environment can be found at https://github.com/IouJenLiu/PIC.
D.3 MAGENT
MAgent (Zheng et al., 2018) is a grid-world platform and serves as another popular environment for evaluating MARL algorithms. Jiang et al. (2020) tested their method on two scenarios: jungle and battle. In jungle, there are N agents and F foods. The agents receive a positive reward if they eat food, but get a higher reward if they attack other agents. This is an interesting scenario, known as a moral dilemma. In battle, N agents learn to fight against several enemies, which is very similar to the prey and predator scenario in MPE. In our experiment, we evaluate our method on jungle.
In our experiment, the size of the grid-world environment is 30 × 30. Each agent occupies one grid cell and can observe the 11 × 11 grid cells centered at the agent as well as its own coordinates. The actions include moving and attacking along the coordinates. Further details about this environment can be found at https://github.com/geek-ai/MAgent and https://github.com/PKU-AI-Edge/DGN.
E FURTHER DETAILS ON SETTINGS
E.1 DESCRIPTION OF OUR BASELINES
We compare our method with multi-agent deep deterministic policy gradient (MADDPG) (Lowe et al., 2017), a strong actor-critic algorithm based on the framework of centralized training with decentralized execution; QMIX (Rashid et al., 2018), a Q-learning-based monotonic value function factorisation algorithm; permutation invariant critic (PIC) (Liu et al., 2019), a leading algorithm on MPE yielding identical output irrespective of the agent permutation; graph convolutional reinforcement learning (DGN) (Jiang et al., 2020), a deep Q-learning algorithm based on a deep convolutional graph neural network with multi-head attention, which is a leading algorithm on MAgent; and independent Q-learning (IQL) (Tan, 1993), which decomposes a multi-agent problem into a collection of simultaneous single-agent problems that share the same environment and usually serves as a surprisingly strong benchmark in mixed and competitive games (Tampuu et al., 2017). In homogeneous settings, the input to the centralized critic in MADDPG is the concatenation of all agents' observations and actions along the specified agent order, which does not hold the property of permutation invariance. We follow a setting similar to (Liu et al., 2019) and shuffle the agents' observations and actions in the training batch 4. COMA (Foerster et al., 2018) directly assumes the policy is factorized and calculates a counterfactual baseline to address the credit assignment problem in MARL; in our experiment, since each reward function can be observed, each agent can directly approximate the Q function without the counterfactual baseline. MFQ derives its algorithm from the view of mean-field games (Yang et al., 2018). Notice that the aim of a mean-field game is to find a Nash equilibrium rather than to maximize the total reward of the group; furthermore, it needs the assumption that agents are identical.
4This operation does not change the states or the actions.
E.2 NEURAL NETWORKS ARCHITECTURE
To learn features from the structural graph built from the spatial distances between agents, we design our graph neural network based on the idea of structure2vec (Dai et al., 2016), a strong graph embedding tool and an effective, scalable approach for structured data representation through embedding latent variable models into feature spaces. Structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. After M graph neural network layers, each node can receive information from its M-hop neighbors by message passing. Recently, the attention mechanism has empirically led to more powerful representations on graph data (Veličković et al., 2017; Jiang et al., 2020), and we employ this idea in our graph neural network. In some settings, such as the heterogeneous navigation scenario from MPE, the observations of different groups of agents are heterogeneous. To handle this issue, we use different nonlinear functions to extract the features from heterogeneous observations and map the observations into a latent layer, and then use the same graph neural network to learn the policy for all types of agents. In our experiment, our graph neural network has M = 2 layers and 1 fully-connected layer at the top. Each layer contains 128 hidden units.
F ABLATION STUDIES
F.1 INDEPENDENT POLICY VS INTENTION PROPAGATION.
We first give a toy example where the independent policy (without communication) fails. To implement such an algorithm, we replace the intention propagation network by an independent policy network and keep the other parts the same. Consider a 3 × 3 2-D grid in Figure 7 where the global state (observed by all agents) is a constant scalar (and thus carries no information). Each agent chooses an action a_i = 0 or 1. The aim is to maximize the reward −(a_1−a_2)^2 − (a_1−a_4)^2 − (a_2−a_3)^2 − ... − (a_8−a_9)^2, i.e., the summation of the reward function over the edges. Obviously the optimal value is 0, attained by the optimal policies a_1 = a_2 = ... = a_9 = 0 or a_1 = a_2 = ... = a_9 = 1. However, the independent policy fails, since each agent does not know how its allies pick their actions, and the learned policy is random. We show the result of this toy example in Figure 7, where intention propagation learns the optimal policy.
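For completeness, the toy objective can be written down directly; the short snippet below evaluates the grid reward for a joint action and enumerates all 2^9 joint actions, confirming that only the two all-equal configurations achieve the optimal value 0 (the explicit edge list of the 3 × 3 grid is an assumption of this sketch).

```python
import itertools

# Edges of the 3x3 grid (agents numbered 1..9 row by row).
EDGES = [(1, 2), (2, 3), (4, 5), (5, 6), (7, 8), (8, 9),
         (1, 4), (4, 7), (2, 5), (5, 8), (3, 6), (6, 9)]

def reward(actions):
    """actions: dict agent_id -> 0/1. Reward is minus the sum of squared disagreements."""
    return -sum((actions[i] - actions[j]) ** 2 for i, j in EDGES)

best = max(itertools.product([0, 1], repeat=9),
           key=lambda a: reward(dict(zip(range(1, 10), a))))
# `best` is either all zeros or all ones, with reward 0; any independent random policy
# incurs an expected penalty on every edge.
```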
F.2 GRAPH TYPES, NUMBER OF NEIGHBORS, AND HOP SIZE
We conduct a set of ablation studies related to graph types, the number of neighbors, and the hop size. Figure 8(a) and Figure 8(b) show the performance of our method on the traffic graph and on the fully-connected graph on the CityFlow scenarios (N=49 and N=100). In this experiment, each agent can only get information from its neighbors through message passing (state embedding and policy embedding). The result makes sense, since the traffic graph represents the structure of the map. Although an agent in the fully connected graph would obtain global information, it may receive irrelevant information from agents far away.
Figure 8(c) and Figure 8(d) show the performance under different numbers of neighbors and hop sizes on cooperative navigation (N=30). The algorithm with neighbors=8 has the best performance. Again, the fully connected graph (neighbors=30) may introduce irrelevant information from agents far away; thus its performance is worse than the algorithm whose graph is constructed by the k nearest neighbors, and the fully connected graph also introduces more computation in training. In Figure 8(d), we increase the hop size from 1 to 3. The performance of IP with hop=2 is much better than that with hop=1, while IP with hop=3 is only slightly better than that with hop=2, meaning that the graph neural network with hop size 2 has already aggregated enough information.
In Figure 8(e), we test the importance of the k-nearest-neighbor structure. IP(neighbors=3)+random means that we pick 3 agents uniformly at random as the neighbors. IP with k nearest neighbors outperforms IP with the random graph by a large margin. In Figure 8(f), we update the adjacency matrix every 1, 5, or 10 steps. IP(neighbors=8) denotes updating the adjacency matrix every step, while IP(neighbors=8)+reset(5) and IP(neighbors=8)+reset(10) denote updating it every 5 and 10 steps respectively. IP(neighbors=8) has the best result, and IP(neighbors=8)+reset(5) is better than IP(neighbors=8)+reset(10). The result makes sense, since the adjacency matrix is more accurate when the update interval is smaller.
F.3 ASSUMPTION VIOLATION
The aforementioned experimental evaluations are based on the mild assumption that the actions of agents far away do not affect the learner because of their physical distance. It is interesting to see the performance when this assumption is violated. As such, we modify the reward in the cooperative navigation experiment: the reward is defined by r = r_1 + r_2, where r_1 encourages the agents to cover (get close to) landmarks and r_2 is a log function of the distances between agents (farther agents have a larger impact). To make a violation, we let r_2 dominate the reward. We conduct the experiments with hop = 1, 2, 3. Figure 9 shows that the rewards obtained by our method are 4115 ± 21, 4564 ± 22, and 4586 ± 25 respectively. This is expected in this scenario, since we should use a larger hop size to collect information from the far-away agents.
G FURTHER EXPERIMENTAL RESULTS
For most of the experiments, we run the algorithms long enough (1 million to 1.5 million steps) and then stop (even if in some cases our algorithm has not converged to its asymptotic result), since every MARL experiment may cost several days. We present the results on CityFlow in Figure 10. Figure 11 provides the experimental results on the cooperative navigation instances with N = 15, N = 30, and N = 200 agents. Note that the instance with N = 200 is a large-scale and challenging multi-agent reinforcement learning setting (Chen et al., 2018; Liu et al., 2019), which typically needs several days to run millions of steps. It is clear that IQL, MADDPG, and MADDPGS perform well in the small setting (N=15) but fail in the large-scale instances (N = 30 and N = 200). In the instance with N = 30, MADDPGS performs better than MADDPG; the potential reason is that, with the help of shuffling, MADDPGS is more robust to the manually specified order of agents. Although QMIX performs well in the instances with N = 15 and N = 30, it has large variance in both settings. DGN, which uses a graph convolutional network and holds the property of permutation invariance, obtains much better performance than QMIX in these two settings; however, it also fails to solve the large-scale setting with N = 200 agents. Empirically, after 1.5 × 10^6 steps, PIC obtains a reward of −425085 ± 31259 on this large-scale setting. Despite all this, the proposed intention propagation (IP) approaches −329229 ± 14730 and is much better than PIC. Furthermore, Figure 11 shows the results of different methods on (d) jungle (N=20, F=12) and (e) prey and predator (N=100); our method beats all baselines on these two tasks. On the cooperative push scenario (N=100) shown in Figure 11(f), DGN, QMIX, IQL, MADDPG, and MADDPGS all fail to converge to good rewards after 1.5 × 10^6 environment steps, while PIC and the proposed IP method obtain much better rewards. Limited by computational resources, we only show the long-term performance of the best two methods; Figure 11(f) shows that IP is slightly better than PIC in this setting.
G.1 POLICY INTERPRETATION
Explicitly analyzing the policy learned by a deep multi-agent reinforcement learning algorithm is a challenging task, especially for large-scale problems. We follow ideas similar to (Zheng et al., 2019) and analyze the learned policy on CityFlow in the following way. We select the same period of environment steps within [210000, 1600000] and group these steps into 69 intervals (each interval contains about 20000 steps). We compute the ratio of vehicle volume on each movement and the sampled action volume from the learned policy (each movement can be assigned to one action according to an internal function of CityFlow). We define the ratio of vehicle volume over all movements as the vehicle volume distribution, and the ratio of the sampled action volume from the learned policy over all movements as the sampled action distribution. A good MARL algorithm should make these two distributions very similar over a period of time. Figure 12 reports their KL divergence by interval. The proposed intention propagation method (IP) obtains the lowest KL divergence (much better than the state-of-the-art baselines). Because KL divergence is not a symmetric metric, we also calculate the Euclidean distances: the distance of our method is 0.0271, while DGN is 0.0938 and PIC is 0.0933.
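The comparison described above can be reproduced in a few lines; the sketch below computes the KL divergence and the Euclidean distance between the vehicle-volume distribution and the sampled-action distribution for one interval, with a small epsilon added for numerical stability (the example counts are invented placeholders).

```python
import numpy as np

def compare_distributions(vehicle_counts, action_counts, eps=1e-8):
    """KL(vehicle || action) and Euclidean distance between two movement distributions."""
    p = np.asarray(vehicle_counts, dtype=float)
    q = np.asarray(action_counts, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    kl = np.sum(p * np.log((p + eps) / (q + eps)))
    l2 = np.linalg.norm(p - q)
    return kl, l2

# One interval: per-movement vehicle volume vs. per-movement sampled-action volume.
kl, l2 = compare_distributions([120, 80, 40, 60], [110, 85, 45, 60])
```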
H HYPERPARAMETERS
The parameters of the environments. For the max episode length, we follow settings similar to the baselines (Lowe et al., 2017). Particularly, we set it to 25 for MPE and 100 for CityFlow. For MAgent, we find that setting the max episode length to 25 is better than 100. All methods share the same setting.
We list the ranges of the hyperparameters tuned in all baselines and in intention propagation: γ: {0.95, 0.98, 0.99, 0.999}; learning rate: {1, 5, 10, 100} × 1e-4; activation function: {relu, gelu, tanh}; batch size: {128, 256, 512, 1024}; gradient steps: {1, 2, 4, 8}; number of hidden units in the MLP: {32, 64, 128, 256, 512}; number of layers in the MLP: {1, 2, 3} in all experiments. In QMIX, the GRU hidden units are {64, 128}, with a fully connected layer before and after the GRU; the hypernetwork and the mixing network are both single-layer networks (64 hidden units with ReLU activation, following the QMIX paper). The parameters of intention propagation are reported in Table 2.
I DERIVATION
I.1 PROOF OF PROPOSITION 1
We prove the result by induction using the backward view. To see this, plug r(s^t, a^t) = ∑_{i=1}^N r_i(s^t, a_i^t, a_{N_i}^t) into the distribution of the optimal policy defined in section 3:
p(τ) = [ p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) ] exp( ∑_{t=0}^T ∑_{i=1}^N r_i(s^t, a_i^t, a_{N_i}^t) ).
Recall that the goal is to find the best approximation π(a^t|s^t) such that the trajectory distribution p̂(τ) induced by this policy matches the optimal trajectory probability p(τ). Thus we minimize the KL divergence between them, min_π D_KL(p̂(τ)||p(τ)), where p̂(τ) = p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) π(a^t|s^t). We can optimize w.r.t. π(a^t|s^t) as in (Levine, 2018) and obtain a backward algorithm on the policy π*(a^t|s^t) (see equation 13 in I.2):
π*(a^t|s^t) = (1/Z) exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − ∑_{t'=t+1}^T log π(a^{t'}|s^{t'}) ] ). (7)
Using equation 7, when t = T the optimal policy is
π*(a^T|s^T) = (1/Z) exp( ∑_{i=1}^N r_i(s^T, a_i^T, a_{N_i}^T) ).
Obviously, it satisfies the form π*(a^T|s^T) = (1/Z) exp( ∑_{i=1}^N ψ_i(s^T, a_i^T, a_{N_i}^T) ).
Now suppose that from step t+1 to T we have
π*(a^{t'}|s^{t'}) = (1/Z) exp( ∑_{i=1}^N ψ_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) ) (8)
for t' = t+1, ..., T.
Recall that we have the result
π*(a^t|s^t) = (1/Z) exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − ∑_{t'=t+1}^T log π*(a^{t'}|s^{t'}) ] ). (9)
Now plug equation 8 into equation 9, and we have
π*(a^t|s^t) = (1/Z) exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − ∑_{t'=t+1}^T ∑_{i=1}^N ψ_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) + C ] ), (10)
where C is a constant related to the normalization term. Thus, we define a new term
ψ̃_i(s^t, a_i^t, a_{N_i}^t) = E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − ∑_{t'=t+1}^T ψ_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) ]. (11)
Then π*(a^t|s^t) satisfies the desired form by absorbing the constant C into the normalization term, which completes the proof.
I.2 DERIVATION OF THE ALGORITHM
We start the derivation with the minimization of the KL divergence KL(p̂(τ)||p(τ)), where
p(τ) = [ p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) ] exp( ∑_{t=0}^T ∑_{i=1}^N r_i(s^t, a_i^t, a_{N_i}^t) ), p̂(τ) = p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) π(a^t|s^t).
KL(p̂(τ)||p(τ)) = −E_{τ∼p̂(τ)} ∑_{t=0}^T ( ∑_{i=1}^N r_i(s^t, a_i^t, a_{N_i}^t) − log π(a^t|s^t) )
= −∑_τ [ p(s^0) ∏_{t=0}^T p(s^{t+1}|s^t, a^t) π(a^t|s^t) ] ∑_{t=0}^T ( ∑_{i=1}^N r_i(s^t, a_i^t, a_{N_i}^t) − log π(a^t|s^t) ). (12)
Now we optimize the KL divergence w.r.t. π(·|s^t). Considering the constraint ∑_j π(j|s^t) = 1, we introduce a Lagrangian multiplier λ( ∑_{j=1}^{|A|} π(j|s^t) − 1 ) (rigorously speaking, we also need the constraint that each element of π is nonnegative, but the optimal value satisfies this constraint automatically). Taking the gradient of KL(p̂(τ)||p(τ)) + λ( ∑_{j=1}^{|A|} π(j|s^t) − 1 ) w.r.t. π(·|s^t) and setting it to zero, we obtain
log π*(a^t|s^t) = E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − ∑_{t'=t+1}^T log π(a^{t'}|s^{t'}) ] − 1 + λ.
Therefore
π*(a^t|s^t) ∝ exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − ∑_{t'=t+1}^T log π(a^{t'}|s^{t'}) ] ).
Since ∑_j π(j|s^t) = 1, we have
π*(a^t|s^t) = (1/Z) exp( E_{p(s^{t+1:T}, a^{t+1:T}|s^t, a^t)}[ ∑_{t'=t}^T ∑_{i=1}^N r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − ∑_{t'=t+1}^T log π(a^{t'}|s^{t'}) ] ). (13)
For convenience, we define the soft V function and Q function as in (Levine, 2018), and will show how to decompose them into V_i and Q_i later:
V(s^{t+1}) := E[ ∑_{t'=t+1}^T ∑_{i=1}^N r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − log π(a^{t'}|s^{t'}) | s^{t+1} ],
Q(s^t, a^t) := ∑_{i=1}^N r_i(s^t, a_i^t, a_{N_i}^t) + E_{p(s^{t+1}|s^t, a^t)}[ V(s^{t+1}) ]. (14)
Thus V(s^t) = E_π[ Q(s^t, a^t) − log π(a^t|s^t) ], and plugging the definition of Q into equation 13 gives the optimal policy π*(a^t|s^t) = exp(Q(s^t, a^t)) / ∫ exp(Q(s^t, a^t)) da^t.
Recall that in section 4.1 we approximated the optimal joint policy by the mean-field approximation ∏_{i=1}^N q_i(a_i|s). We now plug this into the definition in equation 14 and consider the discount factor; it is easy to incorporate the discount factor by defining an absorbing state, to which each transition moves with probability (1 − γ). Thus we have
V(s^{t+1}) := E[ ∑_{t'=t+1}^T ( ∑_{i=1}^N r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − ∑_{i=1}^N log q_i(a_i^{t'}|s^{t'}) ) | s^{t+1} ],
Q(s^t, a^t) := ∑_{i=1}^N r_i(s^t, a_i^t, a_{N_i}^t) + γ E_{p(s^{t+1}|s^t, a^t)}[ V(s^{t+1}) ]. (15)
Thus we can further decompose V and Q into V_i and Q_i, which we define as
V_i(s^{t+1}) = E[ ∑_{t'=t+1}^T ( r_i(s^{t'}, a_i^{t'}, a_{N_i}^{t'}) − log q_i(a_i^{t'}|s^{t'}) ) | s^{t+1} ],
Q_i(s^t, a_i^t, a_{N_i}^t) = r_i(s^t, a_i^t, a_{N_i}^t) + γ E_{p(s^{t+1}|s^t, a^t)}[ V_i(s^{t+1}) ].
For Vi, according to our definition, we obtain Vi(s
t) = Eat∼∏Ni=1 qi [ri(st, ati, atNi)− log qi(ati|st) + Ep(st+1|st,at)Vi(st+1)]. (16) Now we relate it to Qi, and have
Vi(s t) = Eat∼∏Ni=1 qi [Qi(sti, ati, atNi)−log qi(ati|st)] = E(ai,aNi)∼(qi,qNi )Qi(sti, ati, atNi)−Eai∼qi log qi(ati|st).
This suggests constructing the loss functions on V_i and Q_i in the following way, where we use parametric families (e.g., neural networks) characterized by η_i and κ_i to approximate V_i and Q_i respectively:
J(η_i) = E_{s^t∼D}[ (1/2) ( V_{η_i}(s^t) − E_{(a_i, a_{N_i})∼(q_i, q_{N_i})}[ Q_{κ_i}(s^t, a_i^t, a_{N_i}^t) − log q_i(a_i^t|s^t) ] )^2 ],
J(κ_i) = E_{(s^t, a_i^t, a_{N_i}^t)∼D}[ (1/2) ( Q_{κ_i}(s^t, a_i^t, a_{N_i}^t) − Q̂_i(s^t, a_i^t, a_{N_i}^t) )^2 ], (17)
where Q̂_i(s^t, a_i^t, a_{N_i}^t) = r_i + γ E_{s^{t+1}∼p(s^{t+1}|s^t, a^t)}[ V_{η̄_i}(s^{t+1}) ].
Now we are ready to derive the update rule of the policy, i.e., of the intention propagation network.
Recall that the intention propagation network is a mean-field approximation of the joint policy:
min_{p_1, p_2, ..., p_N} KL( ∏_{i=1}^N p_i(a_i|s) || π*(a|s) ).
This is an optimization over the functions p_i rather than over particular parameters, and we have shown that after M iterations of intention propagation we output a nearly optimal solution q_i.
In the following, we demonstrate how to update the parameter θ of the propagation network Λ_θ(a^t|s^t) when we use a neural network to approximate it. Again we minimize the KL divergence
min_θ E_{s^t} KL( ∏_{i=1}^N q_{i,θ}(a_i^t|s^t) || π*(a^t|s^t) ).
Plugging π*(a^t|s^t) = exp(Q(s^t, a^t)) / ∫ exp(Q(s^t, a^t)) da^t into the KL divergence, it is easy to see that, by the definition of the KL divergence, this is equivalent to the optimization problem
max_θ E_{s^t}[ E_{a^t∼∏ q_{i,θ}(a_i^t|s^t)}[ ∑_{i=1}^N Q_{κ_i}(s^t, a_i^t, a_{N_i}^t) − ∑_{i=1}^N log q_{i,θ}(a_i^t|s^t) ] ].
Thus we sample states from the replay buffer and obtain the loss of the policy
J(θ) = E_{s^t∼D, a^t∼∏_{i=1}^N q_{i,θ}(a_i^t|s^t)}[ ∑_{i=1}^N log q_{i,θ}(a_i^t|s^t) − ∑_{i=1}^N Q_{κ_i}(s^t, a_i^t, a_{N_i}^t) ].

1. What is the main contribution of the paper in cooperative game policy generation?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its theoretical properties and computational efficiency?
3. Do you have any questions regarding the algorithm, such as the variables used, and how it is presented in the paper?
4. How does the paper's approach differ from other methods in terms of being principled, and what are the exact assumptions made?
5. What are the complete sets of parameters, and how does approximation fit into the method?
6. How does the paper's method compare to baselines in terms of computational cost, and how is this addressed in the experiments?
7. Can the authors clarify the use of "optimal" in Proposition 1, and how it differs from the standard definition of an optimal policy?
8. In Proposition 1, can the authors explain the intention of psi, and how it relates to the future accumulated reward?
This paper proposes a method for generating policies in cooperative games, using a neighbourhood-based factorisation of reward, and an iterative algorithm which independently updates policies based on neighbour policies and then propagates the policy to neighbours using function space embedding.
The experimental results looked promising, so there seems to be an idea here worth communicating.
The paper was very hard for me to follow. I'm not an expert in the area and wouldn't expect to follow all of the reasoning in constructing the method, but I would expect to be able to follow some clear statements of the algorithm, or its theoretical properties (guarantees of some solution quality given certain assumptions, the parameters affecting this, etc.). Instead the main body of the paper felt like a collection of pieces that were used when developing the algorithm. I would suggest it might be easier to follow if written from the top down, instead: present a high-level overview of the idea, give a (detailed!) description of the algorithm, the experiments, and leave the derivation to the appendix.
Despite being in the appendix, the algorithm is less than half a page, and doesn't explain the variables. eta and kappa might be described elsewhere, but it would be helpful to reference where. J is a loss: which one?
One of the claimed contributions is that this is a principled method. However, the exact assumptions are not clear, and the chain of issues discussed throughout section 4 seems to include discussion of approximation. What makes this principled? This would seem to need a clear statement: what are the exact assumptions, and what precisely is the quality of the output? Is it exact? What are the complete set of parameters? Where does approximation fit in?
Another claimed contribution is computational efficiency. How does the computational cost compare to the baselines in the experiments?
Proposition 1: "The optimal policy has the form ... 1/Z exp(...)" I found the use of optimal slightly hard to follow throughout this. The usual definition of optimal policy would be a value maximising policy, which would be an argmax rather than a softmax. Following that definition, this proposition wouldn't be true, so it seems like it needs more explanation, or more careful wording. The cited PRL article (Levine 2018) seems to retain this standard use of optimal: it uses a distribution over trajectories with an equation similar to here (a softmax over accumulated trajectory rewards), and makes use of the property that trajectories corresponding to an optimal policy have maximum probability in that distribution. Can the authors clarify this use of optimal?
Proposition 1: For clarity, explain the intention of psi. Is this the future accumulated reward given the current state and selected action?
=-= comments after author discussion
The authors were quite active in editing the submission and addressing the concerns I had. I still find the paper a bit hard to follow, but none of my original concerns remain.